Jan 29 16:15:50.743216 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:15:50.743233 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:15:50.743239 kernel: Disabled fast string operations
Jan 29 16:15:50.743243 kernel: BIOS-provided physical RAM map:
Jan 29 16:15:50.743247 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jan 29 16:15:50.743251 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jan 29 16:15:50.743257 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jan 29 16:15:50.743261 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jan 29 16:15:50.743265 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jan 29 16:15:50.743269 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jan 29 16:15:50.743274 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jan 29 16:15:50.743278 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jan 29 16:15:50.743282 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jan 29 16:15:50.743286 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 29 16:15:50.743292 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jan 29 16:15:50.743297 kernel: NX (Execute Disable) protection: active
Jan 29 16:15:50.743302 kernel: APIC: Static calls initialized
Jan 29 16:15:50.743306 kernel: SMBIOS 2.7 present.
Jan 29 16:15:50.743311 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Jan 29 16:15:50.743316 kernel: vmware: hypercall mode: 0x00
Jan 29 16:15:50.743320 kernel: Hypervisor detected: VMware
Jan 29 16:15:50.743325 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Jan 29 16:15:50.743331 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Jan 29 16:15:50.743336 kernel: vmware: using clock offset of 2419275177 ns
Jan 29 16:15:50.743340 kernel: tsc: Detected 3408.000 MHz processor
Jan 29 16:15:50.743346 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:15:50.743351 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:15:50.743356 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Jan 29 16:15:50.743360 kernel: total RAM covered: 3072M
Jan 29 16:15:50.743365 kernel: Found optimal setting for mtrr clean up
Jan 29 16:15:50.743371 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Jan 29 16:15:50.743375 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Jan 29 16:15:50.743381 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 16:15:50.743386 kernel: Using GB pages for direct mapping
Jan 29 16:15:50.743391 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:15:50.743396 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Jan 29 16:15:50.743401 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Jan 29 16:15:50.743405 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Jan 29 16:15:50.743410 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Jan 29 16:15:50.743415 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jan 29 16:15:50.743422 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jan 29 16:15:50.743428 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Jan 29 16:15:50.743433 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Jan 29 16:15:50.743438 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Jan 29 16:15:50.743443 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Jan 29 16:15:50.743448 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Jan 29 16:15:50.743454 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Jan 29 16:15:50.743459 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Jan 29 16:15:50.743464 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Jan 29 16:15:50.743469 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jan 29 16:15:50.743474 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jan 29 16:15:50.743479 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Jan 29 16:15:50.743484 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Jan 29 16:15:50.743489 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Jan 29 16:15:50.743494 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Jan 29 16:15:50.743500 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Jan 29 16:15:50.743505 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Jan 29 16:15:50.743510 kernel: system APIC only can use physical flat
Jan 29 16:15:50.743532 kernel: APIC: Switched APIC routing to: physical flat
Jan 29 16:15:50.743537 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 16:15:50.743542 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 29 16:15:50.743547 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 29 16:15:50.743552 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 29 16:15:50.743571 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 29 16:15:50.743576 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 29 16:15:50.743582 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 29 16:15:50.743587 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 29 16:15:50.743592 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Jan 29 16:15:50.743597 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Jan 29 16:15:50.743602 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Jan 29 16:15:50.743607 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Jan 29 16:15:50.743611 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Jan 29 16:15:50.743616 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Jan 29 16:15:50.743621 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Jan 29 16:15:50.743626 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Jan 29 16:15:50.743632 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Jan 29 16:15:50.743637 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Jan 29 16:15:50.743642 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Jan 29 16:15:50.743647 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Jan 29 16:15:50.743651 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Jan 29 16:15:50.743656 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Jan 29 16:15:50.743661 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Jan 29 16:15:50.743666 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Jan 29 16:15:50.743671 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Jan 29 16:15:50.743676 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Jan 29 16:15:50.743682 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Jan 29 16:15:50.743687 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Jan 29 16:15:50.743692 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Jan 29 16:15:50.743697 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Jan 29 16:15:50.743702 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Jan 29 16:15:50.743706 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Jan 29 16:15:50.743711 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Jan 29 16:15:50.743716 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Jan 29 16:15:50.743721 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Jan 29 16:15:50.743726 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Jan 29 16:15:50.743732 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Jan 29 16:15:50.743737 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Jan 29 16:15:50.743742 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Jan 29 16:15:50.743747 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Jan 29 16:15:50.743751 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Jan 29 16:15:50.743756 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Jan 29 16:15:50.743761 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Jan 29 16:15:50.743766 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Jan 29 16:15:50.743771 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Jan 29 16:15:50.743776 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Jan 29 16:15:50.743782 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Jan 29 16:15:50.743786 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Jan 29 16:15:50.743791 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Jan 29 16:15:50.743796 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Jan 29 16:15:50.743801 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Jan 29 16:15:50.743806 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Jan 29 16:15:50.743811 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Jan 29 16:15:50.743815 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Jan 29 16:15:50.743820 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Jan 29 16:15:50.743825 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Jan 29 16:15:50.743830 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Jan 29 16:15:50.743836 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Jan 29 16:15:50.743841 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Jan 29 16:15:50.743850 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Jan 29 16:15:50.743856 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Jan 29 16:15:50.743861 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Jan 29 16:15:50.743866 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Jan 29 16:15:50.743872 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Jan 29 16:15:50.743877 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Jan 29 16:15:50.743883 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Jan 29 16:15:50.743888 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Jan 29 16:15:50.743893 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Jan 29 16:15:50.743898 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Jan 29 16:15:50.743904 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Jan 29 16:15:50.743909 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Jan 29 16:15:50.743914 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Jan 29 16:15:50.743919 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Jan 29 16:15:50.743924 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Jan 29 16:15:50.743930 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Jan 29 16:15:50.743936 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Jan 29 16:15:50.743941 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Jan 29 16:15:50.743946 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Jan 29 16:15:50.743951 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Jan 29 16:15:50.743957 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Jan 29 16:15:50.743962 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Jan 29 16:15:50.743967 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Jan 29 16:15:50.743972 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Jan 29 16:15:50.743978 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Jan 29 16:15:50.743983 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Jan 29 16:15:50.743988 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Jan 29 16:15:50.743994 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Jan 29 16:15:50.743999 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Jan 29 16:15:50.744005 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Jan 29 16:15:50.744010 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Jan 29 16:15:50.744015 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Jan 29 16:15:50.744020 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Jan 29 16:15:50.744058 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Jan 29 16:15:50.744066 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Jan 29 16:15:50.744071 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Jan 29 16:15:50.744076 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Jan 29 16:15:50.744084 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Jan 29 16:15:50.744089 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Jan 29 16:15:50.744095 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Jan 29 16:15:50.744100 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Jan 29 16:15:50.744105 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Jan 29 16:15:50.744110 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Jan 29 16:15:50.744115 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Jan 29 16:15:50.744120 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Jan 29 16:15:50.744126 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Jan 29 16:15:50.744131 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Jan 29 16:15:50.744137 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Jan 29 16:15:50.744143 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Jan 29 16:15:50.744148 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Jan 29 16:15:50.744153 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Jan 29 16:15:50.744159 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Jan 29 16:15:50.744164 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Jan 29 16:15:50.744169 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Jan 29 16:15:50.744174 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Jan 29 16:15:50.744179 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Jan 29 16:15:50.744185 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Jan 29 16:15:50.744191 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Jan 29 16:15:50.744196 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Jan 29 16:15:50.744202 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Jan 29 16:15:50.744207 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Jan 29 16:15:50.744212 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Jan 29 16:15:50.744217 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Jan 29 16:15:50.744222 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Jan 29 16:15:50.744228 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Jan 29 16:15:50.744241 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Jan 29 16:15:50.744247 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Jan 29 16:15:50.744254 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Jan 29 16:15:50.744260 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Jan 29 16:15:50.744265 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 29 16:15:50.744271 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 29 16:15:50.744276 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jan 29 16:15:50.744282 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Jan 29 16:15:50.744287 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Jan 29 16:15:50.744296 kernel: Zone ranges:
Jan 29 16:15:50.744304 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:15:50.744312 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Jan 29 16:15:50.744339 kernel: Normal empty
Jan 29 16:15:50.744347 kernel: Movable zone start for each node
Jan 29 16:15:50.744353 kernel: Early memory node ranges
Jan 29 16:15:50.744358 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Jan 29 16:15:50.744364 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Jan 29 16:15:50.744369 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Jan 29 16:15:50.744375 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jan 29 16:15:50.744380 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:15:50.744386 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jan 29 16:15:50.744393 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jan 29 16:15:50.744398 kernel: ACPI: PM-Timer IO Port: 0x1008
Jan 29 16:15:50.744404 kernel: system APIC only can use physical flat
Jan 29 16:15:50.744409 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jan 29 16:15:50.744415 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 29 16:15:50.744420 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 29 16:15:50.744425 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 29 16:15:50.744431 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 29 16:15:50.744436 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 29 16:15:50.744441 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 29 16:15:50.744448 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 29 16:15:50.744453 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 29 16:15:50.744459 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 29 16:15:50.744464 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 29 16:15:50.744470 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 29 16:15:50.744475 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 29 16:15:50.744481 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 29 16:15:50.744486 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 29 16:15:50.744492 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 29 16:15:50.744497 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 29 16:15:50.744503 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Jan 29 16:15:50.744509 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Jan 29 16:15:50.744514 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Jan 29 16:15:50.744520 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Jan 29 16:15:50.744525 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Jan 29 16:15:50.744530 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Jan 29 16:15:50.744536 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Jan 29 16:15:50.744541 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Jan 29 16:15:50.744546 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Jan 29 16:15:50.744553 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Jan 29 16:15:50.744558 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Jan 29 16:15:50.744564 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Jan 29 16:15:50.744569 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Jan 29 16:15:50.744575 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Jan 29 16:15:50.744580 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Jan 29 16:15:50.744585 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Jan 29 16:15:50.744591 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Jan 29 16:15:50.744596 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Jan 29 16:15:50.744602 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Jan 29 16:15:50.744608 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Jan 29 16:15:50.744613 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Jan 29 16:15:50.744619 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Jan 29 16:15:50.744624 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Jan 29 16:15:50.744629 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Jan 29 16:15:50.744635 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Jan 29 16:15:50.744640 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Jan 29 16:15:50.744646 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Jan 29 16:15:50.744651 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Jan 29 16:15:50.744656 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Jan 29 16:15:50.744663 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Jan 29 16:15:50.744668 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Jan 29 16:15:50.744673 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Jan 29 16:15:50.744679 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Jan 29 16:15:50.744684 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Jan 29 16:15:50.744690 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Jan 29 16:15:50.744695 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Jan 29 16:15:50.744700 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Jan 29 16:15:50.744706 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Jan 29 16:15:50.744711 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Jan 29 16:15:50.744718 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Jan 29 16:15:50.744723 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Jan 29 16:15:50.744728 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Jan 29 16:15:50.744734 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Jan 29 16:15:50.744739 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Jan 29 16:15:50.744744 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Jan 29 16:15:50.744750 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Jan 29 16:15:50.744755 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Jan 29 16:15:50.744760 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Jan 29 16:15:50.744767 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Jan 29 16:15:50.744772 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Jan 29 16:15:50.744778 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Jan 29 16:15:50.744783 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Jan 29 16:15:50.744789 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Jan 29 16:15:50.744794 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Jan 29 16:15:50.744799 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Jan 29 16:15:50.744804 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Jan 29 16:15:50.744810 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Jan 29 16:15:50.744815 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Jan 29 16:15:50.744822 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Jan 29 16:15:50.744827 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Jan 29 16:15:50.744832 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Jan 29 16:15:50.744838 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Jan 29 16:15:50.744843 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Jan 29 16:15:50.744849 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Jan 29 16:15:50.744854 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Jan 29 16:15:50.744859 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Jan 29 16:15:50.744864 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Jan 29 16:15:50.744871 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Jan 29 16:15:50.744876 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Jan 29 16:15:50.744882 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Jan 29 16:15:50.744887 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Jan 29 16:15:50.744892 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Jan 29 16:15:50.744898 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Jan 29 16:15:50.744903 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Jan 29 16:15:50.744909 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Jan 29 16:15:50.744914 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Jan 29 16:15:50.744919 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Jan 29 16:15:50.744926 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Jan 29 16:15:50.744931 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Jan 29 16:15:50.744936 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Jan 29 16:15:50.744942 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Jan 29 16:15:50.744947 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Jan 29 16:15:50.744953 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Jan 29 16:15:50.744958 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Jan 29 16:15:50.744963 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Jan 29 16:15:50.744969 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Jan 29 16:15:50.744974 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Jan 29 16:15:50.744980 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Jan 29 16:15:50.744986 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Jan 29 16:15:50.744991 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Jan 29 16:15:50.744997 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Jan 29 16:15:50.745002 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Jan 29 16:15:50.745007 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Jan 29 16:15:50.745013 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Jan 29 16:15:50.745018 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Jan 29 16:15:50.745023 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Jan 29 16:15:50.745201 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Jan 29 16:15:50.745209 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Jan 29 16:15:50.745215 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Jan 29 16:15:50.745221 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Jan 29 16:15:50.745226 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Jan 29 16:15:50.745231 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Jan 29 16:15:50.745236 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Jan 29 16:15:50.745242 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Jan 29 16:15:50.745247 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Jan 29 16:15:50.745252 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Jan 29 16:15:50.745259 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Jan 29 16:15:50.745265 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Jan 29 16:15:50.745270 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Jan 29 16:15:50.745275 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Jan 29 16:15:50.745281 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Jan 29 16:15:50.745286 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:15:50.745292 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jan 29 16:15:50.745297 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:15:50.745303 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Jan 29 16:15:50.745308 kernel: TSC deadline timer available
Jan 29 16:15:50.745315 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Jan 29 16:15:50.745320 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Jan 29 16:15:50.745326 kernel: Booting paravirtualized kernel on VMware hypervisor
Jan 29 16:15:50.745331 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:15:50.745337 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Jan 29 16:15:50.745343 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 29 16:15:50.745349 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 29 16:15:50.745354 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Jan 29 16:15:50.745359 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Jan 29 16:15:50.745366 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Jan 29 16:15:50.745371 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Jan 29 16:15:50.745377 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Jan 29 16:15:50.745389 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Jan 29 16:15:50.745396 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Jan 29 16:15:50.745402 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Jan 29 16:15:50.745407 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Jan 29 16:15:50.745413 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Jan 29 16:15:50.745420 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Jan 29 16:15:50.745426 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Jan 29 16:15:50.745431 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Jan 29 16:15:50.745437 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Jan 29 16:15:50.745443 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Jan 29 16:15:50.745448 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Jan 29 16:15:50.745455 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:15:50.745461 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:15:50.745468 kernel: random: crng init done
Jan 29 16:15:50.745473 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jan 29 16:15:50.745479 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Jan 29 16:15:50.745485 kernel: printk: log_buf_len min size: 262144 bytes
Jan 29 16:15:50.745490 kernel: printk: log_buf_len: 1048576 bytes
Jan 29 16:15:50.745496 kernel: printk: early log buf free: 239648(91%)
Jan 29 16:15:50.745502 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:15:50.745508 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 16:15:50.745514 kernel: Fallback order for Node 0: 0
Jan 29 16:15:50.745521 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Jan 29 16:15:50.745527 kernel: Policy zone: DMA32
Jan 29 16:15:50.745534 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:15:50.745541 kernel: Memory: 1934264K/2096628K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 162104K reserved, 0K cma-reserved)
Jan 29 16:15:50.745548 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Jan 29 16:15:50.745554 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:15:50.745560 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:15:50.745566 kernel: Dynamic Preempt: voluntary
Jan 29 16:15:50.745572 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:15:50.745578 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:15:50.745584 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Jan 29 16:15:50.745590 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:15:50.745596 kernel: Rude variant of Tasks RCU enabled.
Jan 29 16:15:50.745602 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:15:50.745607 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:15:50.745614 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jan 29 16:15:50.745620 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jan 29 16:15:50.745626 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Jan 29 16:15:50.745632 kernel: Console: colour VGA+ 80x25
Jan 29 16:15:50.745638 kernel: printk: console [tty0] enabled
Jan 29 16:15:50.745643 kernel: printk: console [ttyS0] enabled
Jan 29 16:15:50.745649 kernel: ACPI: Core revision 20230628
Jan 29 16:15:50.745655 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jan 29 16:15:50.745661 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:15:50.745667 kernel: x2apic enabled
Jan 29 16:15:50.745674 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 16:15:50.745680 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 16:15:50.745686 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jan 29 16:15:50.745692 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jan 29 16:15:50.745697 kernel: Disabled fast string operations
Jan 29 16:15:50.745703 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 29 16:15:50.745709 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 29 16:15:50.745715 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:15:50.745721 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 29 16:15:50.745728 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 29 16:15:50.745733 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 29 16:15:50.745739 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:15:50.745745 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 29 16:15:50.745751 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 29 16:15:50.745757 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 16:15:50.745763 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 16:15:50.745768 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 16:15:50.745775 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 29 16:15:50.745781 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 29 16:15:50.745787 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 16:15:50.745793 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 16:15:50.745798 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 16:15:50.745804 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 16:15:50.745810 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 16:15:50.745816 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:15:50.745822 kernel: pid_max: default: 131072 minimum: 1024
Jan 29 16:15:50.745829 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:15:50.745834 kernel: landlock: Up and running.
Jan 29 16:15:50.745840 kernel: SELinux: Initializing.
Jan 29 16:15:50.745846 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:15:50.745852 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:15:50.745858 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jan 29 16:15:50.745864 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jan 29 16:15:50.745870 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jan 29 16:15:50.745876 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jan 29 16:15:50.745883 kernel: Performance Events: Skylake events, core PMU driver.
Jan 29 16:15:50.745889 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Jan 29 16:15:50.745895 kernel: core: CPUID marked event: 'instructions' unavailable
Jan 29 16:15:50.745901 kernel: core: CPUID marked event: 'bus cycles' unavailable
Jan 29 16:15:50.745906 kernel: core: CPUID marked event: 'cache references' unavailable
Jan 29 16:15:50.745912 kernel: core: CPUID marked event: 'cache misses' unavailable
Jan 29 16:15:50.745917 kernel: core: CPUID marked event: 'branch instructions' unavailable
Jan 29 16:15:50.745923 kernel: core: CPUID marked event: 'branch misses' unavailable
Jan 29 16:15:50.745929 kernel: ... version: 1
Jan 29 16:15:50.745937 kernel: ... bit width: 48
Jan 29 16:15:50.745942 kernel: ... generic registers: 4
Jan 29 16:15:50.745948 kernel: ... value mask: 0000ffffffffffff
Jan 29 16:15:50.745954 kernel: ...
max period: 000000007fffffff Jan 29 16:15:50.745960 kernel: ... fixed-purpose events: 0 Jan 29 16:15:50.745966 kernel: ... event mask: 000000000000000f Jan 29 16:15:50.745971 kernel: signal: max sigframe size: 1776 Jan 29 16:15:50.745977 kernel: rcu: Hierarchical SRCU implementation. Jan 29 16:15:50.745983 kernel: rcu: Max phase no-delay instances is 400. Jan 29 16:15:50.745990 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 16:15:50.745996 kernel: smp: Bringing up secondary CPUs ... Jan 29 16:15:50.746002 kernel: smpboot: x86: Booting SMP configuration: Jan 29 16:15:50.746007 kernel: .... node #0, CPUs: #1 Jan 29 16:15:50.746013 kernel: Disabled fast string operations Jan 29 16:15:50.746019 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jan 29 16:15:50.746032 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 29 16:15:50.746038 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 16:15:50.746044 kernel: smpboot: Max logical packages: 128 Jan 29 16:15:50.746050 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jan 29 16:15:50.746058 kernel: devtmpfs: initialized Jan 29 16:15:50.746063 kernel: x86/mm: Memory block size: 128MB Jan 29 16:15:50.746069 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jan 29 16:15:50.746075 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 16:15:50.746081 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jan 29 16:15:50.746087 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 16:15:50.746093 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 16:15:50.746099 kernel: audit: initializing netlink subsys (disabled) Jan 29 16:15:50.746105 kernel: audit: type=2000 audit(1738167349.067:1): state=initialized audit_enabled=0 res=1 Jan 29 16:15:50.746112 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 16:15:50.746118 
kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 16:15:50.746124 kernel: cpuidle: using governor menu Jan 29 16:15:50.746129 kernel: Simple Boot Flag at 0x36 set to 0x80 Jan 29 16:15:50.746135 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 16:15:50.746141 kernel: dca service started, version 1.12.1 Jan 29 16:15:50.746147 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jan 29 16:15:50.746153 kernel: PCI: Using configuration type 1 for base access Jan 29 16:15:50.746159 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 29 16:15:50.746166 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 16:15:50.746172 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 16:15:50.746177 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 16:15:50.746183 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 16:15:50.746189 kernel: ACPI: Added _OSI(Module Device) Jan 29 16:15:50.746195 kernel: ACPI: Added _OSI(Processor Device) Jan 29 16:15:50.746201 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 16:15:50.746207 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 16:15:50.746213 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 16:15:50.746220 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 29 16:15:50.746226 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 16:15:50.746232 kernel: ACPI: Interpreter enabled Jan 29 16:15:50.746238 kernel: ACPI: PM: (supports S0 S1 S5) Jan 29 16:15:50.746243 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 16:15:50.746249 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 16:15:50.746255 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 16:15:50.746261 kernel: ACPI: Enabled 4 
GPEs in block 00 to 0F Jan 29 16:15:50.746268 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jan 29 16:15:50.746348 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 16:15:50.746402 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jan 29 16:15:50.746453 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 29 16:15:50.746462 kernel: PCI host bridge to bus 0000:00 Jan 29 16:15:50.746516 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 16:15:50.746562 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jan 29 16:15:50.746610 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 16:15:50.746654 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 16:15:50.746698 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jan 29 16:15:50.746742 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jan 29 16:15:50.746801 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jan 29 16:15:50.746875 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jan 29 16:15:50.746937 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jan 29 16:15:50.746992 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jan 29 16:15:50.747085 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jan 29 16:15:50.747139 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 29 16:15:50.747190 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 29 16:15:50.747248 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 29 16:15:50.747301 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 29 16:15:50.747377 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jan 29 16:15:50.747442 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed 
by PIIX4 ACPI Jan 29 16:15:50.747493 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jan 29 16:15:50.747548 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jan 29 16:15:50.747600 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jan 29 16:15:50.747649 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jan 29 16:15:50.747706 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jan 29 16:15:50.747756 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jan 29 16:15:50.747806 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jan 29 16:15:50.747856 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jan 29 16:15:50.747905 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jan 29 16:15:50.747956 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 16:15:50.748010 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jan 29 16:15:50.748089 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.748141 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.748196 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.748248 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.748304 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.748355 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.748413 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.748466 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.748521 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.748572 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.748626 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.748678 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot 
D3cold Jan 29 16:15:50.748735 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.748787 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.748841 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.748894 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.748949 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.749001 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.749075 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.749128 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.749183 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.749233 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.749290 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.749342 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.749401 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.749452 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.749508 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.749558 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.749612 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.749664 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.749722 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.749775 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.749830 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.749881 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.750078 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.750134 kernel: pci 0000:00:17.1: 
PME# supported from D0 D3hot D3cold Jan 29 16:15:50.750193 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.750249 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.750308 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.750360 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.750447 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.750499 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.750553 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.750606 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.750661 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.750713 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.750767 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.750818 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.750872 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.750926 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.750979 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.751056 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.751116 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.751167 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.751225 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.751280 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.751334 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.751385 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.751439 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jan 29 
16:15:50.751491 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.751547 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.751602 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.751657 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jan 29 16:15:50.751708 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jan 29 16:15:50.751760 kernel: pci_bus 0000:01: extended config space not accessible Jan 29 16:15:50.751812 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 16:15:50.751864 kernel: pci_bus 0000:02: extended config space not accessible Jan 29 16:15:50.751873 kernel: acpiphp: Slot [32] registered Jan 29 16:15:50.751881 kernel: acpiphp: Slot [33] registered Jan 29 16:15:50.751887 kernel: acpiphp: Slot [34] registered Jan 29 16:15:50.751893 kernel: acpiphp: Slot [35] registered Jan 29 16:15:50.751898 kernel: acpiphp: Slot [36] registered Jan 29 16:15:50.751904 kernel: acpiphp: Slot [37] registered Jan 29 16:15:50.751909 kernel: acpiphp: Slot [38] registered Jan 29 16:15:50.751915 kernel: acpiphp: Slot [39] registered Jan 29 16:15:50.751921 kernel: acpiphp: Slot [40] registered Jan 29 16:15:50.751927 kernel: acpiphp: Slot [41] registered Jan 29 16:15:50.751934 kernel: acpiphp: Slot [42] registered Jan 29 16:15:50.751939 kernel: acpiphp: Slot [43] registered Jan 29 16:15:50.751945 kernel: acpiphp: Slot [44] registered Jan 29 16:15:50.751951 kernel: acpiphp: Slot [45] registered Jan 29 16:15:50.751957 kernel: acpiphp: Slot [46] registered Jan 29 16:15:50.751962 kernel: acpiphp: Slot [47] registered Jan 29 16:15:50.751968 kernel: acpiphp: Slot [48] registered Jan 29 16:15:50.751974 kernel: acpiphp: Slot [49] registered Jan 29 16:15:50.751979 kernel: acpiphp: Slot [50] registered Jan 29 16:15:50.751985 kernel: acpiphp: Slot [51] registered Jan 29 16:15:50.751992 kernel: acpiphp: Slot [52] registered Jan 29 16:15:50.751997 kernel: acpiphp: Slot [53] registered 
Jan 29 16:15:50.752003 kernel: acpiphp: Slot [54] registered Jan 29 16:15:50.752008 kernel: acpiphp: Slot [55] registered Jan 29 16:15:50.752014 kernel: acpiphp: Slot [56] registered Jan 29 16:15:50.752020 kernel: acpiphp: Slot [57] registered Jan 29 16:15:50.752076 kernel: acpiphp: Slot [58] registered Jan 29 16:15:50.752083 kernel: acpiphp: Slot [59] registered Jan 29 16:15:50.752088 kernel: acpiphp: Slot [60] registered Jan 29 16:15:50.752096 kernel: acpiphp: Slot [61] registered Jan 29 16:15:50.752102 kernel: acpiphp: Slot [62] registered Jan 29 16:15:50.752108 kernel: acpiphp: Slot [63] registered Jan 29 16:15:50.752165 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jan 29 16:15:50.752217 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 29 16:15:50.752267 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 29 16:15:50.752316 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 29 16:15:50.752365 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jan 29 16:15:50.752419 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jan 29 16:15:50.752468 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jan 29 16:15:50.752518 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jan 29 16:15:50.752567 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jan 29 16:15:50.752624 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jan 29 16:15:50.752678 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jan 29 16:15:50.752730 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jan 29 16:15:50.752784 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 29 16:15:50.752836 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 29 
16:15:50.752920 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Jan 29 16:15:50.753000 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 29 16:15:50.753073 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 29 16:15:50.753126 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 29 16:15:50.753179 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 29 16:15:50.753232 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 29 16:15:50.753288 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 29 16:15:50.753338 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 29 16:15:50.753391 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 29 16:15:50.753477 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 29 16:15:50.753527 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 29 16:15:50.753578 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 29 16:15:50.753629 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 29 16:15:50.753684 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 29 16:15:50.753791 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 29 16:15:50.753881 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 29 16:15:50.753941 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 29 16:15:50.753991 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 29 16:15:50.754162 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 29 16:15:50.754216 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 29 16:15:50.754288 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 29 16:15:50.754341 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 29 16:15:50.754392 kernel: pci 0000:00:15.6: bridge window [mem 
0xfbd00000-0xfbdfffff] Jan 29 16:15:50.754442 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 29 16:15:50.754494 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 29 16:15:50.754545 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 29 16:15:50.754599 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 29 16:15:50.754656 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jan 29 16:15:50.754709 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jan 29 16:15:50.754761 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jan 29 16:15:50.754812 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jan 29 16:15:50.754864 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jan 29 16:15:50.754915 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 29 16:15:50.754970 kernel: pci 0000:0b:00.0: supports D1 D2 Jan 29 16:15:50.755022 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 16:15:50.755082 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 29 16:15:50.755133 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 29 16:15:50.755184 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 29 16:15:50.755235 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 29 16:15:50.755287 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 29 16:15:50.755339 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 29 16:15:50.755392 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 29 16:15:50.755444 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 29 16:15:50.755496 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 29 16:15:50.755547 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 29 16:15:50.755597 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 29 16:15:50.755649 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 29 16:15:50.755702 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 29 16:15:50.755756 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 29 16:15:50.755807 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 29 16:15:50.755859 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 29 16:15:50.755910 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 29 16:15:50.755960 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 29 16:15:50.756011 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 29 16:15:50.756070 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 29 16:15:50.756121 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 29 16:15:50.756176 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 29 16:15:50.756227 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 29 16:15:50.756319 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 29 16:15:50.756370 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 29 16:15:50.756421 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 29 16:15:50.756471 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 29 16:15:50.756522 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 29 16:15:50.756573 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 29 16:15:50.756623 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 29 16:15:50.756676 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 29 16:15:50.756727 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 29 16:15:50.756778 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 29 16:15:50.756829 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 29 16:15:50.756879 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 29 16:15:50.756941 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 29 16:15:50.757010 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 29 16:15:50.757086 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 29 16:15:50.757139 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 29 16:15:50.757191 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 29 16:15:50.757248 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 29 16:15:50.757300 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 29 16:15:50.757351 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 29 16:15:50.757402 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 29 16:15:50.757453 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 29 16:15:50.757525 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 29 16:15:50.757587 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 29 16:15:50.757640 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 29 16:15:50.757693 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 29 16:15:50.757743 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 29 16:15:50.757794 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 29 16:15:50.757845 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 29 16:15:50.757896 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 29 16:15:50.757950 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 29 16:15:50.758002 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 29 16:15:50.758088 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 29 16:15:50.758141 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 29 16:15:50.758191 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 29 16:15:50.758243 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 29 16:15:50.758294 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 29 16:15:50.758344 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 29 16:15:50.758397 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 29 16:15:50.758449 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 29 16:15:50.758500 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 29 16:15:50.758551 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 29 16:15:50.758603 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 29 16:15:50.758655 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 29 16:15:50.758723 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 29 16:15:50.758793 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 29 
16:15:50.758844 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 29 16:15:50.758895 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 29 16:15:50.758963 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 29 16:15:50.759015 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 29 16:15:50.759614 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 29 16:15:50.759676 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 29 16:15:50.759731 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 29 16:15:50.759787 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 29 16:15:50.759842 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 29 16:15:50.759894 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 29 16:15:50.759946 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 29 16:15:50.759956 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jan 29 16:15:50.759962 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jan 29 16:15:50.759968 kernel: ACPI: PCI: Interrupt link LNKB disabled Jan 29 16:15:50.759974 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 16:15:50.759980 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jan 29 16:15:50.759988 kernel: iommu: Default domain type: Translated Jan 29 16:15:50.759994 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 16:15:50.760000 kernel: PCI: Using ACPI for IRQ routing Jan 29 16:15:50.760006 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 16:15:50.760012 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jan 29 16:15:50.760018 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jan 29 16:15:50.760188 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jan 29 16:15:50.760247 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Jan 29 16:15:50.760299 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 16:15:50.760311 kernel: vgaarb: loaded Jan 29 16:15:50.760318 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jan 29 16:15:50.760324 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jan 29 16:15:50.760330 kernel: clocksource: Switched to clocksource tsc-early Jan 29 16:15:50.760336 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 16:15:50.760342 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 16:15:50.760348 kernel: pnp: PnP ACPI init Jan 29 16:15:50.760403 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jan 29 16:15:50.760454 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jan 29 16:15:50.760502 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jan 29 16:15:50.760551 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jan 29 16:15:50.760601 kernel: pnp 00:06: [dma 2] Jan 29 16:15:50.760654 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jan 29 16:15:50.760702 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jan 29 16:15:50.760751 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jan 29 16:15:50.760760 kernel: pnp: PnP ACPI: found 8 devices Jan 29 16:15:50.760767 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 16:15:50.760773 kernel: NET: Registered PF_INET protocol family Jan 29 16:15:50.760779 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 16:15:50.760785 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 29 16:15:50.760791 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 16:15:50.760797 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 16:15:50.760803 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 16:15:50.760810 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 29 16:15:50.760816 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 16:15:50.760822 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 16:15:50.760828 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 16:15:50.760835 kernel: NET: Registered PF_XDP protocol family Jan 29 16:15:50.760889 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jan 29 16:15:50.760943 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 29 16:15:50.760997 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 29 16:15:50.761072 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 29 16:15:50.761128 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 29 16:15:50.761181 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 29 16:15:50.761235 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jan 29 16:15:50.761288 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 29 16:15:50.761343 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 29 16:15:50.761399 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 29 16:15:50.761451 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 29 16:15:50.761504 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 29 16:15:50.761556 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 29 
16:15:50.761608 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 29 16:15:50.761665 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 29 16:15:50.761719 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 29 16:15:50.761771 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 29 16:15:50.761823 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 29 16:15:50.761875 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 29 16:15:50.761929 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 29 16:15:50.761984 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 29 16:15:50.762103 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 29 16:15:50.762156 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jan 29 16:15:50.762207 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jan 29 16:15:50.762259 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jan 29 16:15:50.762312 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.762363 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.762418 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.762469 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.762521 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.762583 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.762637 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.762688 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 
29 16:15:50.762739 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.762789 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.762845 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.762896 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.762946 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.762998 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.763071 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.763125 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.763177 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.763229 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.763285 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.763337 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.763389 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.763440 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.763493 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.763545 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.763596 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.763647 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.763702 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.763753 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.763807 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.763858 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.763909 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.763961 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.764012 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.766758 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.766831 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.766886 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.766941 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.766995 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.767073 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.767129 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.767184 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.767237 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.767293 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.767346 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.767397 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.767449 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.767502 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.767553 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.767604 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.767656 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.767708 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.767782 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.767856 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Jan 29 16:15:50.767926 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.768007 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.768115 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.768184 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.768255 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.768322 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.768400 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.768475 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.768533 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.768588 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.768641 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.768695 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.768748 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.768802 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.768854 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.768908 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.768960 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.769013 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.769523 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.769582 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.769636 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.769690 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.769743 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.769797 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.769850 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.769903 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.769955 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.770014 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.770090 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.770147 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 29 16:15:50.770199 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 29 16:15:50.770253 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 16:15:50.770307 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jan 29 16:15:50.770359 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 29 16:15:50.770410 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 29 16:15:50.770462 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 29 16:15:50.770525 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jan 29 16:15:50.770598 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 29 16:15:50.770652 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 29 16:15:50.770704 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 29 16:15:50.770756 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jan 29 16:15:50.770810 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 29 16:15:50.770861 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 29 16:15:50.770912 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 29 16:15:50.770967 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 29 
16:15:50.771020 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 29 16:15:50.771146 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 29 16:15:50.771197 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 29 16:15:50.771249 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 29 16:15:50.771301 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 29 16:15:50.771352 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 29 16:15:50.771403 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 29 16:15:50.771454 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 29 16:15:50.771505 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 29 16:15:50.771560 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 29 16:15:50.771614 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 29 16:15:50.771665 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 29 16:15:50.771717 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 29 16:15:50.771782 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 29 16:15:50.771834 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 29 16:15:50.771888 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 29 16:15:50.771940 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 29 16:15:50.771991 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 29 16:15:50.772056 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 29 16:15:50.772115 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jan 29 16:15:50.772169 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 29 16:15:50.772221 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 29 16:15:50.772273 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Jan 29 16:15:50.772326 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jan 29 16:15:50.772383 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 29 16:15:50.772435 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 29 16:15:50.772488 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 29 16:15:50.772541 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 29 16:15:50.772595 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 29 16:15:50.772648 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 29 16:15:50.772700 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 29 16:15:50.772753 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 29 16:15:50.772804 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 29 16:15:50.772859 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 29 16:15:50.772912 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 29 16:15:50.772965 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 29 16:15:50.773018 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 29 16:15:50.773084 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 29 16:15:50.773138 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 29 16:15:50.773232 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 29 16:15:50.773496 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 29 16:15:50.773580 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 29 16:15:50.773670 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 29 16:15:50.773729 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 29 16:15:50.773783 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 29 16:15:50.773836 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Jan 29 16:15:50.773889 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 29 16:15:50.773942 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 29 16:15:50.773994 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 29 16:15:50.774055 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 29 16:15:50.774109 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 29 16:15:50.774162 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 29 16:15:50.774216 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 29 16:15:50.774291 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 29 16:15:50.774345 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 29 16:15:50.774399 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 29 16:15:50.774452 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 29 16:15:50.774504 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 29 16:15:50.774556 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 29 16:15:50.774609 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 29 16:15:50.774662 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 29 16:15:50.774714 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 29 16:15:50.774771 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 29 16:15:50.774823 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 29 16:15:50.774875 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 29 16:15:50.774930 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 29 16:15:50.774982 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 29 16:15:50.775041 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 29 
16:15:50.775096 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 29 16:15:50.775149 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 29 16:15:50.775201 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 29 16:15:50.775278 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 29 16:15:50.775333 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 29 16:15:50.775385 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 29 16:15:50.775439 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 29 16:15:50.775492 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 29 16:15:50.775543 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 29 16:15:50.775594 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 29 16:15:50.775647 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 29 16:15:50.775701 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 29 16:15:50.775752 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 29 16:15:50.775807 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 29 16:15:50.775862 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 29 16:15:50.775914 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 29 16:15:50.775966 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 29 16:15:50.776019 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 29 16:15:50.776094 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 29 16:15:50.776146 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 29 16:15:50.776201 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 29 16:15:50.776253 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 29 16:15:50.776309 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Jan 29 16:15:50.776362 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 29 16:15:50.776441 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 29 16:15:50.776508 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 29 16:15:50.776564 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 29 16:15:50.776619 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 29 16:15:50.776670 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 29 16:15:50.776724 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 29 16:15:50.776776 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 29 16:15:50.776828 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 29 16:15:50.776884 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jan 29 16:15:50.776932 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 29 16:15:50.776977 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 29 16:15:50.777484 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jan 29 16:15:50.777540 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jan 29 16:15:50.777594 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jan 29 16:15:50.777643 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jan 29 16:15:50.779207 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 29 16:15:50.779258 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jan 29 16:15:50.779306 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 29 16:15:50.779353 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 29 16:15:50.779400 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jan 29 16:15:50.779447 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jan 29 16:15:50.779500 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jan 29 16:15:50.779548 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jan 29 16:15:50.779599 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jan 29 16:15:50.779651 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jan 29 16:15:50.779700 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jan 29 16:15:50.779747 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jan 29 16:15:50.779803 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jan 29 16:15:50.779851 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jan 29 16:15:50.779901 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jan 29 16:15:50.779952 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jan 29 16:15:50.780000 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jan 29 16:15:50.780313 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jan 29 16:15:50.780366 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 29 16:15:50.780420 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jan 29 16:15:50.780468 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jan 29 16:15:50.780524 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jan 29 16:15:50.780574 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jan 29 16:15:50.780629 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jan 29 16:15:50.780686 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jan 29 16:15:50.780742 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jan 29 16:15:50.780794 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jan 29 16:15:50.780841 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jan 29 16:15:50.780893 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Jan 29 16:15:50.780942 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jan 29 16:15:50.780989 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jan 29 16:15:50.781101 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jan 29 16:15:50.781153 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jan 29 16:15:50.781203 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jan 29 16:15:50.781258 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jan 29 16:15:50.781306 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 29 16:15:50.781358 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jan 29 16:15:50.781407 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 29 16:15:50.781459 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jan 29 16:15:50.781510 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jan 29 16:15:50.781562 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jan 29 16:15:50.781611 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jan 29 16:15:50.781663 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jan 29 16:15:50.781711 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 29 16:15:50.781763 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jan 29 16:15:50.781811 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jan 29 16:15:50.781861 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 29 16:15:50.781913 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jan 29 16:15:50.781962 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jan 29 16:15:50.782009 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jan 29 16:15:50.783637 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Jan 29 16:15:50.783695 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jan 29 16:15:50.783751 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jan 29 16:15:50.783805 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jan 29 16:15:50.783853 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 29 16:15:50.783905 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jan 29 16:15:50.783954 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 29 16:15:50.784006 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jan 29 16:15:50.784071 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jan 29 16:15:50.784129 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jan 29 16:15:50.784185 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jan 29 16:15:50.784247 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jan 29 16:15:50.784296 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 29 16:15:50.784351 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 29 16:15:50.784402 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jan 29 16:15:50.784450 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jan 29 16:15:50.784503 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jan 29 16:15:50.784552 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jan 29 16:15:50.784600 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jan 29 16:15:50.784652 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jan 29 16:15:50.784701 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jan 29 16:15:50.784760 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jan 29 16:15:50.784809 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Jan 29 16:15:50.784862 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jan 29 16:15:50.784910 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jan 29 16:15:50.784962 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jan 29 16:15:50.785011 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jan 29 16:15:50.786246 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jan 29 16:15:50.786298 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jan 29 16:15:50.786351 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jan 29 16:15:50.786401 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 29 16:15:50.786460 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 16:15:50.786470 kernel: PCI: CLS 32 bytes, default 64 Jan 29 16:15:50.786477 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 16:15:50.786487 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 29 16:15:50.786493 kernel: clocksource: Switched to clocksource tsc Jan 29 16:15:50.786500 kernel: Initialise system trusted keyrings Jan 29 16:15:50.786506 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 16:15:50.786512 kernel: Key type asymmetric registered Jan 29 16:15:50.786518 kernel: Asymmetric key parser 'x509' registered Jan 29 16:15:50.786525 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 16:15:50.786531 kernel: io scheduler mq-deadline registered Jan 29 16:15:50.786537 kernel: io scheduler kyber registered Jan 29 16:15:50.786545 kernel: io scheduler bfq registered Jan 29 16:15:50.786602 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jan 29 16:15:50.786664 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.786720 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Jan 29 16:15:50.786773 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.786829 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Jan 29 16:15:50.786883 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.786937 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Jan 29 16:15:50.786994 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.787714 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Jan 29 16:15:50.787778 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.787834 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Jan 29 16:15:50.787889 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.787949 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Jan 29 16:15:50.788003 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.788214 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Jan 29 16:15:50.788277 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.788331 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Jan 29 16:15:50.788388 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.788441 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Jan 29 16:15:50.788494 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.788547 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Jan 29 16:15:50.788599 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.788652 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Jan 29 16:15:50.788706 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.788762 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Jan 29 16:15:50.788815 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.788869 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Jan 29 16:15:50.788922 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.788974 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Jan 29 16:15:50.789039 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.789098 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Jan 29 16:15:50.789151 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.789203 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Jan 29 16:15:50.789256 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.789310 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Jan 29 16:15:50.789362 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.789420 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Jan 29 16:15:50.789473 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.789526 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Jan 29 16:15:50.789578 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.789633 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Jan 29 16:15:50.789686 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.789743 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Jan 29 16:15:50.789796 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.789850 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Jan 29 16:15:50.789902 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.789955 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Jan 29 16:15:50.790012 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.790073 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Jan 29 16:15:50.790126 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.790178 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Jan 29 16:15:50.790231 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.790283 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Jan 29 16:15:50.790339 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.790394 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Jan 29 16:15:50.790446 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.790500 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Jan 29 16:15:50.790553 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.790607 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Jan 29 16:15:50.790662 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.790716 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Jan 29 16:15:50.790768 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.790824 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Jan 29 16:15:50.790877 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 29 16:15:50.790889 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 16:15:50.790896 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:15:50.790902 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 16:15:50.790909 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Jan 29 16:15:50.790915 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 16:15:50.790922 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 16:15:50.790975 kernel: rtc_cmos 00:01: registered as rtc0
Jan 29 16:15:50.791068 kernel: rtc_cmos 00:01: setting system clock to 2025-01-29T16:15:50 UTC (1738167350)
Jan 29 16:15:50.791124 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Jan 29 16:15:50.791134 kernel: intel_pstate: CPU model not supported
Jan 29 16:15:50.791140 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 16:15:50.791147 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:15:50.791153 kernel: Segment Routing with IPv6
Jan 29 16:15:50.791159 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:15:50.791166 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:15:50.791172 kernel: Key type dns_resolver registered
Jan 29 16:15:50.791179 kernel: IPI shorthand broadcast: enabled
Jan 29 16:15:50.791187 kernel: sched_clock: Marking stable (872003822, 220269296)->(1147868184, -55595066)
Jan 29 16:15:50.791193 kernel: registered taskstats version 1
Jan 29 16:15:50.791199 kernel: Loading compiled-in X.509 certificates
Jan 29 16:15:50.791206 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340'
Jan 29 16:15:50.791212 kernel: Key type .fscrypt registered
Jan 29 16:15:50.791218 kernel: Key type fscrypt-provisioning registered
Jan 29 16:15:50.791224 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:15:50.791237 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:15:50.791246 kernel: ima: No architecture policies found
Jan 29 16:15:50.791253 kernel: clk: Disabling unused clocks
Jan 29 16:15:50.791259 kernel: Freeing unused kernel image (initmem) memory: 43472K
Jan 29 16:15:50.791265 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 16:15:50.791272 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Jan 29 16:15:50.791278 kernel: Run /init as init process
Jan 29 16:15:50.791284 kernel: with arguments:
Jan 29 16:15:50.791290 kernel: /init
Jan 29 16:15:50.791297 kernel: with environment:
Jan 29 16:15:50.791304 kernel: HOME=/
Jan 29 16:15:50.791310 kernel: TERM=linux
Jan 29 16:15:50.791316 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:15:50.791323 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:15:50.791332 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:15:50.791339 systemd[1]: Detected virtualization vmware.
Jan 29 16:15:50.791346 systemd[1]: Detected architecture x86-64.
Jan 29 16:15:50.791352 systemd[1]: Running in initrd.
Jan 29 16:15:50.791359 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:15:50.791366 systemd[1]: Hostname set to .
Jan 29 16:15:50.791373 systemd[1]: Initializing machine ID from random generator.
Jan 29 16:15:50.791379 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:15:50.791386 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:15:50.791393 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:15:50.791401 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:15:50.791407 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:15:50.791416 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:15:50.791423 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:15:50.791431 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:15:50.791437 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:15:50.791444 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:15:50.791450 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:15:50.791456 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:15:50.791464 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:15:50.791471 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:15:50.791477 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:15:50.791484 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:15:50.791490 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:15:50.791497 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:15:50.791503 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:15:50.791510 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:15:50.791516 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:15:50.791524 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:15:50.791531 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:15:50.791537 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:15:50.791544 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:15:50.791550 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:15:50.791557 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:15:50.791563 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:15:50.791570 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:15:50.791577 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:15:50.791584 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:15:50.791590 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:15:50.791598 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:15:50.791604 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:15:50.791629 systemd-journald[217]: Collecting audit messages is disabled.
Jan 29 16:15:50.791647 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:15:50.791654 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:15:50.791662 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:15:50.791669 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:15:50.791676 kernel: Bridge firewalling registered
Jan 29 16:15:50.791682 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:15:50.791689 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:15:50.791695 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:15:50.791703 systemd-journald[217]: Journal started
Jan 29 16:15:50.791719 systemd-journald[217]: Runtime Journal (/run/log/journal/08a923fbc70a4b43903262f51abb0b7b) is 4.8M, max 38.6M, 33.8M free.
Jan 29 16:15:50.794272 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:15:50.756217 systemd-modules-load[218]: Inserted module 'overlay'
Jan 29 16:15:50.777316 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 29 16:15:50.796054 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:15:50.796400 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:15:50.797700 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:15:50.799071 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:15:50.800183 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:15:50.809065 dracut-cmdline[247]: dracut-dracut-053
Jan 29 16:15:50.808327 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:15:50.811831 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:15:50.813230 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:15:50.831439 systemd-resolved[260]: Positive Trust Anchors:
Jan 29 16:15:50.831692 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:15:50.831844 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:15:50.834189 systemd-resolved[260]: Defaulting to hostname 'linux'.
Jan 29 16:15:50.834956 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:15:50.835250 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:15:50.851049 kernel: SCSI subsystem initialized
Jan 29 16:15:50.857041 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:15:50.864046 kernel: iscsi: registered transport (tcp)
Jan 29 16:15:50.876212 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:15:50.876290 kernel: QLogic iSCSI HBA Driver
Jan 29 16:15:50.896505 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:15:50.900143 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:15:50.914089 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:15:50.914119 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:15:50.915226 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:15:50.948049 kernel: raid6: avx2x4 gen() 46883 MB/s
Jan 29 16:15:50.963064 kernel: raid6: avx2x2 gen() 53479 MB/s
Jan 29 16:15:50.980205 kernel: raid6: avx2x1 gen() 44750 MB/s
Jan 29 16:15:50.980223 kernel: raid6: using algorithm avx2x2 gen() 53479 MB/s
Jan 29 16:15:50.998206 kernel: raid6: .... xor() 31957 MB/s, rmw enabled
Jan 29 16:15:50.998235 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 16:15:51.011037 kernel: xor: automatically using best checksumming function avx
Jan 29 16:15:51.100043 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:15:51.105178 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:15:51.109121 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:15:51.117162 systemd-udevd[434]: Using default interface naming scheme 'v255'.
Jan 29 16:15:51.119959 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:15:51.125118 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:15:51.131813 dracut-pre-trigger[442]: rd.md=0: removing MD RAID activation
Jan 29 16:15:51.146296 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:15:51.150230 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:15:51.222278 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:15:51.227138 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:15:51.236157 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:15:51.237276 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:15:51.238166 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:15:51.238558 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:15:51.245127 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:15:51.253476 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:15:51.302747 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Jan 29 16:15:51.307447 kernel: vmw_pvscsi: using 64bit dma
Jan 29 16:15:51.307472 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Jan 29 16:15:51.309838 kernel: vmw_pvscsi: max_id: 16
Jan 29 16:15:51.311039 kernel: vmw_pvscsi: setting ring_pages to 8
Jan 29 16:15:51.316043 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Jan 29 16:15:51.330764 kernel: vmw_pvscsi: enabling reqCallThreshold
Jan 29 16:15:51.330775 kernel: vmw_pvscsi: driver-based request coalescing enabled
Jan 29 16:15:51.330783 kernel: vmw_pvscsi: using MSI-X
Jan 29 16:15:51.330790 kernel: libata version 3.00 loaded.
Jan 29 16:15:51.330797 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Jan 29 16:15:51.330880 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 16:15:51.330892 kernel: ata_piix 0000:00:07.1: version 2.13
Jan 29 16:15:51.346144 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Jan 29 16:15:51.346231 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Jan 29 16:15:51.346336 kernel: scsi host1: ata_piix
Jan 29 16:15:51.346445 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Jan 29 16:15:51.346555 kernel: scsi host2: ata_piix
Jan 29 16:15:51.346658 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Jan 29 16:15:51.346673 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Jan 29 16:15:51.346681 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Jan 29 16:15:51.346768 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 16:15:51.346777 kernel: AES CTR mode by8 optimization enabled
Jan 29 16:15:51.340567 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:15:51.340636 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:15:51.344043 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:15:51.344160 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:15:51.344257 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:15:51.344442 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:15:51.351599 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:15:51.362555 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:15:51.366162 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:15:51.378130 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:15:51.507046 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Jan 29 16:15:51.511067 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Jan 29 16:15:51.529094 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Jan 29 16:15:51.536747 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 29 16:15:51.536821 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Jan 29 16:15:51.536883 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Jan 29 16:15:51.536942 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Jan 29 16:15:51.537001 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Jan 29 16:15:51.544002 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 16:15:51.544012 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:15:51.544022 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 29 16:15:51.544098 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 29 16:15:51.567055 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (495)
Jan 29 16:15:51.578060 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (484)
Jan 29 16:15:51.581771 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Jan 29 16:15:51.587085 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Jan 29 16:15:51.592503 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Jan 29 16:15:51.596872 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Jan 29 16:15:51.597108 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Jan 29 16:15:51.601101 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:15:51.625043 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:15:52.653933 disk-uuid[595]: The operation has completed successfully.
Jan 29 16:15:52.654297 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:15:52.691579 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:15:52.691801 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:15:52.711240 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:15:52.712910 sh[611]: Success
Jan 29 16:15:52.721047 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 29 16:15:52.755439 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:15:52.757090 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:15:52.757363 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:15:52.771341 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3
Jan 29 16:15:52.771367 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:15:52.771376 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:15:52.773245 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:15:52.773260 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:15:52.817041 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 29 16:15:52.820190 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:15:52.831194 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Jan 29 16:15:52.832558 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:15:52.850700 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:15:52.850738 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:15:52.850753 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:15:52.855041 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:15:52.861043 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:15:52.862323 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:15:52.867141 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:15:52.873543 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:15:52.904617 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jan 29 16:15:52.909169 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:15:52.957231 ignition[671]: Ignition 2.20.0
Jan 29 16:15:52.957239 ignition[671]: Stage: fetch-offline
Jan 29 16:15:52.957266 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:15:52.957273 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 29 16:15:52.957323 ignition[671]: parsed url from cmdline: ""
Jan 29 16:15:52.957325 ignition[671]: no config URL provided
Jan 29 16:15:52.957328 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:15:52.957332 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:15:52.957693 ignition[671]: config successfully fetched
Jan 29 16:15:52.957709 ignition[671]: parsing config with SHA512: 23cd6c3b5de88f3e76d6a83a8b99b3c097bfb051df47bed654233cb02171abe16fef7b6699060a2e8850943c59719b677f0cac214a6b24e546c23783cc78d6ad
Jan 29 16:15:52.961154 unknown[671]: fetched base config from "system"
Jan 29 16:15:52.961283 unknown[671]: fetched user config from "vmware"
Jan 29 16:15:52.961619 ignition[671]: fetch-offline: fetch-offline passed
Jan 29 16:15:52.961768 ignition[671]: Ignition finished successfully
Jan 29 16:15:52.962401 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:15:52.965121 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:15:52.970279 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:15:52.983450 systemd-networkd[806]: lo: Link UP
Jan 29 16:15:52.983454 systemd-networkd[806]: lo: Gained carrier
Jan 29 16:15:52.984428 systemd-networkd[806]: Enumeration completed
Jan 29 16:15:52.984561 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:15:52.984690 systemd[1]: Reached target network.target - Network.
Jan 29 16:15:52.984743 systemd-networkd[806]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Jan 29 16:15:52.984769 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 16:15:52.988119 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:15:52.988826 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Jan 29 16:15:52.988927 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Jan 29 16:15:52.988597 systemd-networkd[806]: ens192: Link UP
Jan 29 16:15:52.988599 systemd-networkd[806]: ens192: Gained carrier
Jan 29 16:15:52.998650 ignition[810]: Ignition 2.20.0
Jan 29 16:15:52.998659 ignition[810]: Stage: kargs
Jan 29 16:15:52.998765 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:15:52.998771 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 29 16:15:52.999479 ignition[810]: kargs: kargs passed
Jan 29 16:15:52.999515 ignition[810]: Ignition finished successfully
Jan 29 16:15:53.000695 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:15:53.004301 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:15:53.010619 ignition[818]: Ignition 2.20.0
Jan 29 16:15:53.010627 ignition[818]: Stage: disks
Jan 29 16:15:53.010765 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:15:53.010772 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 29 16:15:53.011358 ignition[818]: disks: disks passed
Jan 29 16:15:53.011387 ignition[818]: Ignition finished successfully
Jan 29 16:15:53.012008 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:15:53.012511 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:15:53.012736 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:15:53.012983 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:15:53.013214 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:15:53.013433 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:15:53.021224 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:15:53.033659 systemd-fsck[827]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 16:15:53.034502 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:15:53.774167 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:15:53.831046 kernel: EXT4-fs (sda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none.
Jan 29 16:15:53.831238 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:15:53.831604 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:15:53.840134 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:15:53.841510 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:15:53.841796 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 16:15:53.841825 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:15:53.841840 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:15:53.845069 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:15:53.846007 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:15:53.849986 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (835)
Jan 29 16:15:53.850017 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:15:53.851648 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:15:53.851665 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:15:53.855036 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:15:53.855619 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:15:53.875355 initrd-setup-root[859]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:15:53.878942 initrd-setup-root[866]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:15:53.881187 initrd-setup-root[873]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:15:53.883342 initrd-setup-root[880]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:15:53.936663 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:15:53.941134 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:15:53.943631 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:15:53.946091 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:15:53.963786 ignition[947]: INFO : Ignition 2.20.0
Jan 29 16:15:53.963786 ignition[947]: INFO : Stage: mount
Jan 29 16:15:53.964341 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:15:53.964341 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 29 16:15:53.964893 ignition[947]: INFO : mount: mount passed
Jan 29 16:15:53.964893 ignition[947]: INFO : Ignition finished successfully
Jan 29 16:15:53.965075 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:15:53.970151 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:15:53.980553 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:15:54.308159 systemd-networkd[806]: ens192: Gained IPv6LL
Jan 29 16:15:54.770108 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:15:54.775216 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:15:54.848050 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (959)
Jan 29 16:15:54.863797 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:15:54.863830 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:15:54.863839 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:15:54.915051 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:15:54.922492 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:15:54.937043 ignition[975]: INFO : Ignition 2.20.0
Jan 29 16:15:54.937043 ignition[975]: INFO : Stage: files
Jan 29 16:15:54.937408 ignition[975]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:15:54.937408 ignition[975]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 29 16:15:54.937689 ignition[975]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 16:15:54.951009 ignition[975]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 16:15:54.951009 ignition[975]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 16:15:54.989691 ignition[975]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 16:15:54.990096 ignition[975]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 16:15:54.990408 unknown[975]: wrote ssh authorized keys file for user: core
Jan 29 16:15:54.990807 ignition[975]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 16:15:55.020341 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 16:15:55.020341 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 16:15:55.072183 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 16:15:55.238229 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 16:15:55.238229 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 16:15:55.238710 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 29 16:15:55.732080 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 16:15:55.795216 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 16:15:55.795216 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 16:15:55.795623 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 16:15:55.795623 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:15:55.795623 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:15:55.795623 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:15:55.795623 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:15:55.795623 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:15:55.795623 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:15:55.795623 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:15:55.796917 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:15:55.796917 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 16:15:55.796917 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 16:15:55.796917 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 16:15:55.796917 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 29 16:15:56.218339 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 16:15:56.463183 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 16:15:56.463183 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jan 29 16:15:56.463625 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jan 29 16:15:56.463625 ignition[975]: INFO : files: op(d): [started] processing unit "prepare-helm.service"
Jan 29 16:15:56.463902 ignition[975]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:15:56.463902 ignition[975]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:15:56.463902 ignition[975]: INFO : files: op(d): [finished] processing unit "prepare-helm.service"
Jan 29 16:15:56.463902 ignition[975]: INFO : files: op(f): [started] processing unit "coreos-metadata.service"
Jan 29 16:15:56.463902 ignition[975]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 16:15:56.463902 ignition[975]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 16:15:56.463902 ignition[975]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service"
Jan 29 16:15:56.463902 ignition[975]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 16:15:56.484326 ignition[975]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 16:15:56.486497 ignition[975]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 16:15:56.486497 ignition[975]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 16:15:56.486497 ignition[975]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 16:15:56.486497 ignition[975]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 16:15:56.486497 ignition[975]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:15:56.486497 ignition[975]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:15:56.486497 ignition[975]: INFO : files: files passed
Jan 29 16:15:56.486497 ignition[975]: INFO : Ignition finished successfully
Jan 29 16:15:56.487617 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 16:15:56.491126 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 16:15:56.492677 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 16:15:56.493220 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 16:15:56.493265 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 16:15:56.498432 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:15:56.498432 initrd-setup-root-after-ignition[1007]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:15:56.499773 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:15:56.500366 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:15:56.500707 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 16:15:56.504115 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 16:15:56.516322 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 16:15:56.516379 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 16:15:56.516657 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 16:15:56.516766 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 16:15:56.516960 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 16:15:56.517433 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 16:15:56.526040 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:15:56.529117 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 16:15:56.534562 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:15:56.534821 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:15:56.534977 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 16:15:56.535117 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 16:15:56.535183 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:15:56.535389 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 16:15:56.535612 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 16:15:56.535785 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 16:15:56.535974 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:15:56.536339 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 16:15:56.536534 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 16:15:56.536720 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:15:56.536935 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 16:15:56.537143 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 16:15:56.537332 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 16:15:56.537501 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 16:15:56.537563 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:15:56.537832 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:15:56.537980 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:15:56.538178 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 16:15:56.538222 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:15:56.538366 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 16:15:56.538429 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:15:56.538660 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 16:15:56.538721 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:15:56.538969 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 16:15:56.539112 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 16:15:56.542047 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:15:56.542209 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 16:15:56.542406 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 16:15:56.542584 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 16:15:56.542654 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:15:56.542867 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 16:15:56.542911 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:15:56.543156 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 16:15:56.543220 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:15:56.543466 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 16:15:56.543523 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 16:15:56.551200 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 16:15:56.553161 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 16:15:56.553271 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 16:15:56.553366 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:15:56.553648 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 16:15:56.553745 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:15:56.558084 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 16:15:56.558142 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 16:15:56.564335 ignition[1031]: INFO : Ignition 2.20.0
Jan 29 16:15:56.564335 ignition[1031]: INFO : Stage: umount
Jan 29 16:15:56.564335 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:15:56.564335 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 29 16:15:56.564335 ignition[1031]: INFO : umount: umount passed
Jan 29 16:15:56.564335 ignition[1031]: INFO : Ignition finished successfully
Jan 29 16:15:56.564668 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 16:15:56.564740 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 16:15:56.565447 systemd[1]: Stopped target network.target - Network.
Jan 29 16:15:56.566090 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 16:15:56.566131 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 16:15:56.566456 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 16:15:56.566482 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 16:15:56.566701 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 16:15:56.566723 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 16:15:56.566932 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 16:15:56.566955 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 16:15:56.567325 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 16:15:56.567566 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 16:15:56.568289 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 16:15:56.573183 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 16:15:56.573390 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 16:15:56.574900 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 29 16:15:56.575149 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 16:15:56.575199 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 16:15:56.576180 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 29 16:15:56.576688 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 16:15:56.576732 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:15:56.579123 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 16:15:56.579222 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 16:15:56.579249 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:15:56.579372 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jan 29 16:15:56.579394 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jan 29 16:15:56.579598 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:15:56.579619 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:15:56.579817 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 16:15:56.579839 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:15:56.580172 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 16:15:56.580192 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:15:56.581267 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:15:56.582531 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 16:15:56.582567 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:15:56.587287 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 16:15:56.587363 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 16:15:56.587635 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 16:15:56.587715 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:15:56.588339 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 16:15:56.588375 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:15:56.588738 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 16:15:56.588755 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:15:56.588962 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 16:15:56.588986 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:15:56.589266 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 16:15:56.589288 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:15:56.589568 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:15:56.589592 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:15:56.593241 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 16:15:56.593338 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 16:15:56.593365 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:15:56.593542 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 16:15:56.593564 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:15:56.593671 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 16:15:56.593693 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:15:56.593797 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:15:56.593817 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:15:56.595520 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 29 16:15:56.595555 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:15:56.596225 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 16:15:56.596451 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 16:15:56.671674 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 16:15:56.671737 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 16:15:56.672148 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 16:15:56.672262 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 16:15:56.672294 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 16:15:56.676134 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 16:15:56.686165 systemd[1]: Switching root.
Jan 29 16:15:56.713300 systemd-journald[217]: Journal stopped
Jan 29 16:15:57.960693 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jan 29 16:15:57.960720 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 16:15:57.960729 kernel: SELinux: policy capability open_perms=1
Jan 29 16:15:57.960735 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 16:15:57.960741 kernel: SELinux: policy capability always_check_network=0
Jan 29 16:15:57.960746 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 16:15:57.960754 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 16:15:57.960760 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 16:15:57.960766 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 16:15:57.960772 systemd[1]: Successfully loaded SELinux policy in 33.343ms.
Jan 29 16:15:57.960779 kernel: audit: type=1403 audit(1738167357.390:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 16:15:57.960785 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.048ms.
Jan 29 16:15:57.960793 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:15:57.960801 systemd[1]: Detected virtualization vmware.
Jan 29 16:15:57.960808 systemd[1]: Detected architecture x86-64.
Jan 29 16:15:57.960815 systemd[1]: Detected first boot.
Jan 29 16:15:57.960822 systemd[1]: Initializing machine ID from random generator.
Jan 29 16:15:57.960836 zram_generator::config[1076]: No configuration found.
Jan 29 16:15:57.960928 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Jan 29 16:15:57.960940 kernel: Guest personality initialized and is active
Jan 29 16:15:57.960946 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jan 29 16:15:57.960952 kernel: Initialized host personality
Jan 29 16:15:57.960958 kernel: NET: Registered PF_VSOCK protocol family
Jan 29 16:15:57.960965 systemd[1]: Populated /etc with preset unit settings.
Jan 29 16:15:57.960976 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 29 16:15:57.960983 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Jan 29 16:15:57.960990 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 29 16:15:57.960997 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 16:15:57.961005 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 16:15:57.961012 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:15:57.961020 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 16:15:57.966067 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 16:15:57.966082 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 16:15:57.966090 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 16:15:57.966097 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 16:15:57.966104 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 16:15:57.966110 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 16:15:57.966117 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 16:15:57.966127 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:15:57.966135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:15:57.966144 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 16:15:57.966151 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 16:15:57.966158 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 16:15:57.966165 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:15:57.966172 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 16:15:57.966179 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:15:57.966187 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 16:15:57.966194 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 16:15:57.966201 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:15:57.966208 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 16:15:57.966215 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:15:57.966222 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:15:57.966230 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:15:57.966254 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:15:57.966262 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 16:15:57.966269 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 16:15:57.966277 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 29 16:15:57.966284 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:15:57.966291 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:15:57.966314 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:15:57.966322 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 16:15:57.966329 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 16:15:57.966336 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 16:15:57.966343 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 16:15:57.966350 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:15:57.966357 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 16:15:57.966364 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 16:15:57.966372 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 16:15:57.966380 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 16:15:57.966387 systemd[1]: Reached target machines.target - Containers.
Jan 29 16:15:57.966394 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 16:15:57.966401 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Jan 29 16:15:57.966408 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:15:57.966415 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 16:15:57.966422 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:15:57.966430 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:15:57.966441 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:15:57.966453 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 16:15:57.966464 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:15:57.966476 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 16:15:57.966487 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 16:15:57.966498 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 16:15:57.966510 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 16:15:57.966518 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 16:15:57.966528 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:15:57.966536 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:15:57.966542 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:15:57.966549 kernel: fuse: init (API version 7.39)
Jan 29 16:15:57.966556 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 16:15:57.966563 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 16:15:57.966570 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 29 16:15:57.966577 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:15:57.966586 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 16:15:57.966594 systemd[1]: Stopped verity-setup.service.
Jan 29 16:15:57.966602 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:15:57.966609 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 16:15:57.966616 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 16:15:57.966622 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 16:15:57.966629 kernel: ACPI: bus type drm_connector registered
Jan 29 16:15:57.966636 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 16:15:57.966643 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 16:15:57.966669 systemd-journald[1166]: Collecting audit messages is disabled.
Jan 29 16:15:57.966686 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 16:15:57.966694 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:15:57.966701 kernel: loop: module loaded
Jan 29 16:15:57.966709 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 16:15:57.966716 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 16:15:57.966723 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:15:57.966730 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:15:57.966737 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:15:57.966744 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:15:57.966751 systemd-journald[1166]: Journal started
Jan 29 16:15:57.966767 systemd-journald[1166]: Runtime Journal (/run/log/journal/ebbf2bc0f55545479349858e91d14d49) is 4.8M, max 38.6M, 33.8M free.
Jan 29 16:15:57.971394 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:15:57.971421 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:15:57.971435 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 16:15:57.971445 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 16:15:57.784280 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 16:15:57.795550 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 29 16:15:57.795851 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 16:15:57.971945 jq[1146]: true
Jan 29 16:15:57.974257 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:15:57.973105 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:15:57.973209 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:15:57.973480 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:15:57.973729 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 16:15:57.974115 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 16:15:57.974863 jq[1184]: true
Jan 29 16:15:57.975384 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 29 16:15:57.990976 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 16:15:57.998152 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 16:15:58.004127 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 16:15:58.004361 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 16:15:58.004435 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:15:58.005306 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 29 16:15:58.012357 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 16:15:58.023844 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 16:15:58.024216 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:15:58.032864 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 16:15:58.035933 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 16:15:58.036376 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:15:58.050144 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 16:15:58.050986 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:15:58.051966 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:15:58.053447 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 16:15:58.056740 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:15:58.058580 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 16:15:58.058830 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 16:15:58.059010 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 16:15:58.059316 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 16:15:58.082399 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 16:15:58.082858 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 16:15:58.084325 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 29 16:15:58.088667 systemd-journald[1166]: Time spent on flushing to /var/log/journal/ebbf2bc0f55545479349858e91d14d49 is 62.902ms for 1858 entries.
Jan 29 16:15:58.088667 systemd-journald[1166]: System Journal (/var/log/journal/ebbf2bc0f55545479349858e91d14d49) is 8M, max 584.8M, 576.8M free.
Jan 29 16:15:58.160507 systemd-journald[1166]: Received client request to flush runtime journal.
Jan 29 16:15:58.160549 kernel: loop0: detected capacity change from 0 to 2960
Jan 29 16:15:58.160563 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 16:15:58.099979 ignition[1190]: Ignition 2.20.0
Jan 29 16:15:58.112528 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Jan 29 16:15:58.100430 ignition[1190]: deleting config from guestinfo properties
Jan 29 16:15:58.134419 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:15:58.109903 ignition[1190]: Successfully deleted config
Jan 29 16:15:58.146218 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 29 16:15:58.160702 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Jan 29 16:15:58.160710 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Jan 29 16:15:58.161624 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 16:15:58.171398 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:15:58.183200 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 16:15:58.184074 kernel: loop1: detected capacity change from 0 to 138176
Jan 29 16:15:58.184140 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:15:58.190103 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 16:15:58.198547 udevadm[1249]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 16:15:58.221603 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 16:15:58.226460 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:15:58.227039 kernel: loop2: detected capacity change from 0 to 210664
Jan 29 16:15:58.240095 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Jan 29 16:15:58.240274 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Jan 29 16:15:58.243223 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:15:58.300043 kernel: loop3: detected capacity change from 0 to 147912
Jan 29 16:15:58.355096 kernel: loop4: detected capacity change from 0 to 2960
Jan 29 16:15:58.373105 kernel: loop5: detected capacity change from 0 to 138176
Jan 29 16:15:58.411044 kernel: loop6: detected capacity change from 0 to 210664
Jan 29 16:15:58.481819 kernel: loop7: detected capacity change from 0 to 147912
Jan 29 16:15:58.534321 (sd-merge)[1258]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Jan 29 16:15:58.534933 (sd-merge)[1258]: Merged extensions into '/usr'.
Jan 29 16:15:58.539365 systemd[1]: Reload requested from client PID 1222 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 16:15:58.539517 systemd[1]: Reloading...
Jan 29 16:15:58.599074 zram_generator::config[1284]: No configuration found.
Jan 29 16:15:58.688571 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 29 16:15:58.709684 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:15:58.759208 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 16:15:58.759656 systemd[1]: Reloading finished in 219 ms.
Jan 29 16:15:58.772442 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 16:15:58.774199 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 16:15:58.777295 systemd[1]: Starting ensure-sysext.service...
Jan 29 16:15:58.779145 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:15:58.784197 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:15:58.794782 systemd[1]: Reload requested from client PID 1342 ('systemctl') (unit ensure-sysext.service)...
Jan 29 16:15:58.794800 systemd[1]: Reloading...
Jan 29 16:15:58.808545 systemd-udevd[1344]: Using default interface naming scheme 'v255'.
Jan 29 16:15:58.808738 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 16:15:58.808900 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 16:15:58.809429 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 16:15:58.809599 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Jan 29 16:15:58.809639 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Jan 29 16:15:58.811591 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:15:58.811599 systemd-tmpfiles[1343]: Skipping /boot
Jan 29 16:15:58.817762 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:15:58.817770 systemd-tmpfiles[1343]: Skipping /boot
Jan 29 16:15:58.863110 zram_generator::config[1374]: No configuration found.
Jan 29 16:15:58.881407 ldconfig[1216]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 16:15:58.952040 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 29 16:15:58.965046 kernel: ACPI: button: Power Button [PWRF]
Jan 29 16:15:58.968096 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1384)
Jan 29 16:15:58.989398 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 29 16:15:59.012596 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:15:59.078568 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Jan 29 16:15:59.078960 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 16:15:59.079079 systemd[1]: Reloading finished in 284 ms.
Jan 29 16:15:59.084043 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Jan 29 16:15:59.088974 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:15:59.090855 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 16:15:59.101110 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:15:59.117041 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Jan 29 16:15:59.119866 systemd[1]: Finished ensure-sysext.service.
Jan 29 16:15:59.135321 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:15:59.140239 (udev-worker)[1386]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Jan 29 16:15:59.142216 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:15:59.155053 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 16:15:59.156182 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 16:15:59.158195 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:15:59.160775 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:15:59.167157 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:15:59.170091 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:15:59.171065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:15:59.171738 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 16:15:59.171858 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:15:59.173877 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 16:15:59.175899 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:15:59.178760 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:15:59.182617 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 16:15:59.183525 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 16:15:59.183650 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:15:59.184167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:15:59.184301 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:15:59.184550 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:15:59.184651 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:15:59.184869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:15:59.184962 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:15:59.185799 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:15:59.185926 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:15:59.191703 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:15:59.191808 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:15:59.198311 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 16:15:59.200674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:15:59.206329 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 16:15:59.217237 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 16:15:59.217821 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 16:15:59.224758 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 16:15:59.235439 augenrules[1505]: No rules
Jan 29 16:15:59.235342 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:15:59.238259 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:15:59.241256 lvm[1497]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:15:59.245862 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 16:15:59.248164 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 16:15:59.253291 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 16:15:59.279844 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 16:15:59.284661 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 16:15:59.284933 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:15:59.289232 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 16:15:59.289570 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 16:15:59.290166 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:15:59.295168 lvm[1523]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:15:59.327091 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 16:15:59.339962 systemd-resolved[1479]: Positive Trust Anchors:
Jan 29 16:15:59.340168 systemd-resolved[1479]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:15:59.340226 systemd-resolved[1479]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:15:59.342478 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 16:15:59.342687 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 16:15:59.342839 systemd-networkd[1478]: lo: Link UP
Jan 29 16:15:59.342841 systemd-networkd[1478]: lo: Gained carrier
Jan 29 16:15:59.346890 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Jan 29 16:15:59.347124 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Jan 29 16:15:59.343771 systemd-networkd[1478]: Enumeration completed
Jan 29 16:15:59.343824 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:15:59.343989 systemd-networkd[1478]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Jan 29 16:15:59.346225 systemd-timesyncd[1481]: No network connectivity, watching for changes.
Jan 29 16:15:59.346577 systemd-networkd[1478]: ens192: Link UP
Jan 29 16:15:59.346695 systemd-networkd[1478]: ens192: Gained carrier
Jan 29 16:15:59.350415 systemd-resolved[1479]: Defaulting to hostname 'linux'.
Jan 29 16:15:59.350432 systemd-timesyncd[1481]: Network configuration changed, trying to establish connection.
Jan 29 16:15:59.350572 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 29 16:15:59.353510 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 16:15:59.369370 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:15:59.369816 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:15:59.370447 systemd[1]: Reached target network.target - Network.
Jan 29 16:15:59.370544 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:15:59.370788 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:15:59.370938 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 16:15:59.371068 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 16:15:59.371258 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 16:15:59.371522 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 16:15:59.371636 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 16:15:59.371751 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 16:15:59.371774 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:15:59.371983 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:15:59.373238 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 16:15:59.374449 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 16:15:59.375885 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 29 16:15:59.376207 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 29 16:15:59.376360 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 29 16:15:59.377773 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 16:15:59.378124 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 29 16:15:59.378706 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 16:15:59.378865 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:15:59.378955 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:15:59.379137 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:15:59.379158 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:15:59.379914 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 16:15:59.382239 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 16:15:59.385126 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 16:15:59.386119 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 16:15:59.386227 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 16:15:59.388150 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 16:15:59.391158 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 16:15:59.407120 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 16:15:59.409235 jq[1537]: false
Jan 29 16:15:59.411145 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 16:15:59.415147 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 16:15:59.415784 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 16:15:59.416580 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 16:15:59.418130 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 16:15:59.420097 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 16:15:59.422308 dbus-daemon[1536]: [system] SELinux support is enabled
Jan 29 16:15:59.422583 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Jan 29 16:15:59.423415 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 16:15:59.430302 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 29 16:15:59.435283 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 16:15:59.436332 jq[1552]: true
Jan 29 16:15:59.435430 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 16:15:59.435601 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 16:15:59.435717 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 16:15:59.439506 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 16:15:59.439632 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 16:15:59.447225 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 16:15:59.447254 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 16:15:59.447427 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 16:15:59.447439 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 16:15:59.450363 jq[1558]: true
Jan 29 16:15:59.455227 update_engine[1550]: I20250129 16:15:59.452649 1550 main.cc:92] Flatcar Update Engine starting
Jan 29 16:15:59.456173 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 16:15:59.457214 update_engine[1550]: I20250129 16:15:59.457187 1550 update_check_scheduler.cc:74] Next update check in 9m18s
Jan 29 16:15:59.462213 extend-filesystems[1538]: Found loop4
Jan 29 16:15:59.462213 extend-filesystems[1538]: Found loop5
Jan 29 16:15:59.462213 extend-filesystems[1538]: Found loop6
Jan 29 16:15:59.462213 extend-filesystems[1538]: Found loop7
Jan 29 16:15:59.462213 extend-filesystems[1538]: Found sda
Jan 29 16:15:59.462213 extend-filesystems[1538]: Found sda1
Jan 29 16:15:59.462213 extend-filesystems[1538]: Found sda2
Jan 29 16:15:59.462213 extend-filesystems[1538]: Found sda3
Jan 29 16:15:59.462213 extend-filesystems[1538]: Found usr
Jan 29 16:15:59.462213 extend-filesystems[1538]: Found sda4
Jan 29 16:15:59.462213 extend-filesystems[1538]: Found sda6
Jan 29 16:15:59.462213 extend-filesystems[1538]: Found sda7
Jan 29 16:15:59.462213 extend-filesystems[1538]: Found sda9
Jan 29 16:15:59.462213 extend-filesystems[1538]: Checking size of /dev/sda9
Jan 29 16:15:59.464129 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 16:15:59.465218 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Jan 29 16:15:59.471107 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Jan 29 16:15:59.474138 tar[1557]: linux-amd64/helm
Jan 29 16:15:59.476860 (ntainerd)[1573]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 16:15:59.496535 extend-filesystems[1538]: Old size kept for /dev/sda9
Jan 29 16:15:59.496535 extend-filesystems[1538]: Found sr0
Jan 29 16:15:59.495828 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 16:15:59.495963 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 16:15:59.528156 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Jan 29 16:15:59.547332 unknown[1572]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Jan 29 16:15:59.552538 unknown[1572]: Core dump limit set to -1
Jan 29 16:15:59.565722 bash[1598]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 16:15:59.566383 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 16:15:59.566909 systemd-logind[1549]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 29 16:15:59.566923 systemd-logind[1549]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 16:15:59.568800 systemd-logind[1549]: New seat seat0.
Jan 29 16:15:59.572146 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 16:15:59.574451 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 16:17:12.544152 systemd-timesyncd[1481]: Contacted time server 144.202.66.214:123 (2.flatcar.pool.ntp.org).
Jan 29 16:17:12.544188 systemd-timesyncd[1481]: Initial clock synchronization to Wed 2025-01-29 16:17:12.544059 UTC.
Jan 29 16:17:12.544846 systemd-resolved[1479]: Clock change detected. Flushing caches.
Jan 29 16:17:12.553572 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1380)
Jan 29 16:17:12.593256 sshd_keygen[1559]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 16:17:12.619263 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 16:17:12.632937 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 16:17:12.642981 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 16:17:12.654721 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 16:17:12.655957 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 16:17:12.662767 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 16:17:12.677728 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 16:17:12.684784 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 16:17:12.686622 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 29 16:17:12.687260 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 16:17:12.740283 containerd[1573]: time="2025-01-29T16:17:12.740242974Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 16:17:12.758466 containerd[1573]: time="2025-01-29T16:17:12.758439108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:17:12.759533 containerd[1573]: time="2025-01-29T16:17:12.759515795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:17:12.759609 containerd[1573]: time="2025-01-29T16:17:12.759600821Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 16:17:12.759645 containerd[1573]: time="2025-01-29T16:17:12.759637688Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 16:17:12.759771 containerd[1573]: time="2025-01-29T16:17:12.759762707Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 16:17:12.759811 containerd[1573]: time="2025-01-29T16:17:12.759804243Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 16:17:12.759876 containerd[1573]: time="2025-01-29T16:17:12.759867309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:17:12.759914 containerd[1573]: time="2025-01-29T16:17:12.759907081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:17:12.760063 containerd[1573]: time="2025-01-29T16:17:12.760053408Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:17:12.760103 containerd[1573]: time="2025-01-29T16:17:12.760095661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 16:17:12.760136 containerd[1573]: time="2025-01-29T16:17:12.760128606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:17:12.760476 containerd[1573]: time="2025-01-29T16:17:12.760157720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 16:17:12.760476 containerd[1573]: time="2025-01-29T16:17:12.760202866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:17:12.760476 containerd[1573]: time="2025-01-29T16:17:12.760317223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:17:12.760476 containerd[1573]: time="2025-01-29T16:17:12.760386336Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:17:12.760476 containerd[1573]: time="2025-01-29T16:17:12.760395062Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 16:17:12.760476 containerd[1573]: time="2025-01-29T16:17:12.760436404Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 16:17:12.760476 containerd[1573]: time="2025-01-29T16:17:12.760463403Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 16:17:12.779942 containerd[1573]: time="2025-01-29T16:17:12.779923730Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 16:17:12.780031 containerd[1573]: time="2025-01-29T16:17:12.780021843Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 16:17:12.780107 containerd[1573]: time="2025-01-29T16:17:12.780097701Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 16:17:12.780153 containerd[1573]: time="2025-01-29T16:17:12.780140772Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 16:17:12.780203 containerd[1573]: time="2025-01-29T16:17:12.780195320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 16:17:12.780324 containerd[1573]: time="2025-01-29T16:17:12.780315070Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 16:17:12.780526 containerd[1573]: time="2025-01-29T16:17:12.780516894Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 16:17:12.780637 containerd[1573]: time="2025-01-29T16:17:12.780619371Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 16:17:12.780674 containerd[1573]: time="2025-01-29T16:17:12.780667353Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 16:17:12.780719 containerd[1573]: time="2025-01-29T16:17:12.780711371Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 16:17:12.780782 containerd[1573]: time="2025-01-29T16:17:12.780756451Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 16:17:12.780782 containerd[1573]: time="2025-01-29T16:17:12.780768429Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..."
type=io.containerd.service.v1 Jan 29 16:17:12.780893 containerd[1573]: time="2025-01-29T16:17:12.780820574Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:17:12.780893 containerd[1573]: time="2025-01-29T16:17:12.780832489Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:17:12.780893 containerd[1573]: time="2025-01-29T16:17:12.780844032Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:17:12.780893 containerd[1573]: time="2025-01-29T16:17:12.780852560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:17:12.780893 containerd[1573]: time="2025-01-29T16:17:12.780859965Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:17:12.780893 containerd[1573]: time="2025-01-29T16:17:12.780865540Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:17:12.780893 containerd[1573]: time="2025-01-29T16:17:12.780879945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:17:12.781064 containerd[1573]: time="2025-01-29T16:17:12.780992799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:17:12.781064 containerd[1573]: time="2025-01-29T16:17:12.781004520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:17:12.781064 containerd[1573]: time="2025-01-29T16:17:12.781012525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 29 16:17:12.781064 containerd[1573]: time="2025-01-29T16:17:12.781019302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:17:12.781064 containerd[1573]: time="2025-01-29T16:17:12.781026798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:17:12.781064 containerd[1573]: time="2025-01-29T16:17:12.781033570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:17:12.781064 containerd[1573]: time="2025-01-29T16:17:12.781040219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:17:12.781064 containerd[1573]: time="2025-01-29T16:17:12.781047620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:17:12.781064 containerd[1573]: time="2025-01-29T16:17:12.781055336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:17:12.781269 containerd[1573]: time="2025-01-29T16:17:12.781199900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:17:12.781269 containerd[1573]: time="2025-01-29T16:17:12.781211654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:17:12.781269 containerd[1573]: time="2025-01-29T16:17:12.781221168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:17:12.781269 containerd[1573]: time="2025-01-29T16:17:12.781229202Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:17:12.781269 containerd[1573]: time="2025-01-29T16:17:12.781240370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 29 16:17:12.781269 containerd[1573]: time="2025-01-29T16:17:12.781248738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:17:12.781269 containerd[1573]: time="2025-01-29T16:17:12.781254384Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:17:12.781549 containerd[1573]: time="2025-01-29T16:17:12.781394245Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:17:12.781549 containerd[1573]: time="2025-01-29T16:17:12.781409823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:17:12.781549 containerd[1573]: time="2025-01-29T16:17:12.781416730Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:17:12.781549 containerd[1573]: time="2025-01-29T16:17:12.781423250Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:17:12.781549 containerd[1573]: time="2025-01-29T16:17:12.781428324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:17:12.781549 containerd[1573]: time="2025-01-29T16:17:12.781435270Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:17:12.781549 containerd[1573]: time="2025-01-29T16:17:12.781475571Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:17:12.781549 containerd[1573]: time="2025-01-29T16:17:12.781486329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 16:17:12.781910 containerd[1573]: time="2025-01-29T16:17:12.781796951Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:17:12.781910 containerd[1573]: time="2025-01-29T16:17:12.781836043Z" level=info msg="Connect containerd service" Jan 29 16:17:12.781910 containerd[1573]: time="2025-01-29T16:17:12.781858483Z" level=info msg="using legacy CRI server" Jan 29 16:17:12.781910 containerd[1573]: time="2025-01-29T16:17:12.781863348Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:17:12.782191 containerd[1573]: time="2025-01-29T16:17:12.782073216Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:17:12.783777 containerd[1573]: time="2025-01-29T16:17:12.783765027Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:17:12.783955 containerd[1573]: time="2025-01-29T16:17:12.783946428Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:17:12.784059 containerd[1573]: time="2025-01-29T16:17:12.784007409Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 29 16:17:12.784059 containerd[1573]: time="2025-01-29T16:17:12.784029696Z" level=info msg="Start subscribing containerd event" Jan 29 16:17:12.784111 containerd[1573]: time="2025-01-29T16:17:12.784050288Z" level=info msg="Start recovering state" Jan 29 16:17:12.784181 containerd[1573]: time="2025-01-29T16:17:12.784161583Z" level=info msg="Start event monitor" Jan 29 16:17:12.784244 containerd[1573]: time="2025-01-29T16:17:12.784210598Z" level=info msg="Start snapshots syncer" Jan 29 16:17:12.784244 containerd[1573]: time="2025-01-29T16:17:12.784218975Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:17:12.784244 containerd[1573]: time="2025-01-29T16:17:12.784223880Z" level=info msg="Start streaming server" Jan 29 16:17:12.784363 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:17:12.784963 containerd[1573]: time="2025-01-29T16:17:12.784848400Z" level=info msg="containerd successfully booted in 0.046365s" Jan 29 16:17:12.862769 tar[1557]: linux-amd64/LICENSE Jan 29 16:17:12.863585 tar[1557]: linux-amd64/README.md Jan 29 16:17:12.871369 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:17:13.639695 systemd-networkd[1478]: ens192: Gained IPv6LL Jan 29 16:17:13.641022 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:17:13.641890 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:17:13.647814 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jan 29 16:17:13.649712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:17:13.653790 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:17:13.673802 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:17:13.682970 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Jan 29 16:17:13.683100 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Jan 29 16:17:13.683697 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 16:17:14.409674 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:17:14.410085 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 16:17:14.411491 systemd[1]: Startup finished in 953ms (kernel) + 6.762s (initrd) + 4.122s (userspace) = 11.837s.
Jan 29 16:17:14.414334 (kubelet)[1714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:17:14.444227 login[1642]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 29 16:17:14.445989 login[1644]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 29 16:17:14.453046 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 16:17:14.458765 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 16:17:14.460509 systemd-logind[1549]: New session 1 of user core.
Jan 29 16:17:14.464756 systemd-logind[1549]: New session 2 of user core.
Jan 29 16:17:14.468408 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 16:17:14.475911 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 16:17:14.477793 (systemd)[1721]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 16:17:14.479696 systemd-logind[1549]: New session c1 of user core.
Jan 29 16:17:14.574428 systemd[1721]: Queued start job for default target default.target.
Jan 29 16:17:14.581645 systemd[1721]: Created slice app.slice - User Application Slice.
Jan 29 16:17:14.581671 systemd[1721]: Reached target paths.target - Paths.
Jan 29 16:17:14.581703 systemd[1721]: Reached target timers.target - Timers.
Jan 29 16:17:14.583604 systemd[1721]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 16:17:14.591372 systemd[1721]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 16:17:14.591412 systemd[1721]: Reached target sockets.target - Sockets.
Jan 29 16:17:14.591442 systemd[1721]: Reached target basic.target - Basic System.
Jan 29 16:17:14.591465 systemd[1721]: Reached target default.target - Main User Target.
Jan 29 16:17:14.591482 systemd[1721]: Startup finished in 107ms.
Jan 29 16:17:14.592050 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 16:17:14.593049 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 16:17:14.593787 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 16:17:14.980179 kubelet[1714]: E0129 16:17:14.980138 1714 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:17:14.981687 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:17:14.981778 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:17:14.982068 systemd[1]: kubelet.service: Consumed 599ms CPU time, 241M memory peak.
Jan 29 16:17:25.196018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:17:25.207737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:17:25.329941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:17:25.332659 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:17:25.391874 kubelet[1766]: E0129 16:17:25.391805 1766 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:17:25.394930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:17:25.395045 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:17:25.395335 systemd[1]: kubelet.service: Consumed 86ms CPU time, 97.6M memory peak.
Jan 29 16:17:35.446131 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 16:17:35.454710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:17:35.789611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:17:35.792752 (kubelet)[1782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:17:35.832171 kubelet[1782]: E0129 16:17:35.832147 1782 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:17:35.833412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:17:35.833490 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:17:35.833822 systemd[1]: kubelet.service: Consumed 89ms CPU time, 97.6M memory peak.
Jan 29 16:17:45.946064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 29 16:17:45.953696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:17:46.183987 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:17:46.194781 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:17:46.228554 kubelet[1798]: E0129 16:17:46.228484 1798 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:17:46.230058 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:17:46.230165 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:17:46.230358 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95.8M memory peak.
Jan 29 16:17:52.718950 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 16:17:52.726844 systemd[1]: Started sshd@0-139.178.70.99:22-139.178.89.65:43338.service - OpenSSH per-connection server daemon (139.178.89.65:43338).
Jan 29 16:17:52.770009 sshd[1807]: Accepted publickey for core from 139.178.89.65 port 43338 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:17:52.770836 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:17:52.773894 systemd-logind[1549]: New session 3 of user core.
Jan 29 16:17:52.782723 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 16:17:52.843784 systemd[1]: Started sshd@1-139.178.70.99:22-139.178.89.65:43340.service - OpenSSH per-connection server daemon (139.178.89.65:43340).
Jan 29 16:17:52.875974 sshd[1812]: Accepted publickey for core from 139.178.89.65 port 43340 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:17:52.876814 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:17:52.880004 systemd-logind[1549]: New session 4 of user core.
Jan 29 16:17:52.884717 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 16:17:52.932984 sshd[1814]: Connection closed by 139.178.89.65 port 43340
Jan 29 16:17:52.933827 sshd-session[1812]: pam_unix(sshd:session): session closed for user core
Jan 29 16:17:52.941969 systemd[1]: sshd@1-139.178.70.99:22-139.178.89.65:43340.service: Deactivated successfully.
Jan 29 16:17:52.943082 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 16:17:52.943537 systemd-logind[1549]: Session 4 logged out. Waiting for processes to exit.
Jan 29 16:17:52.944723 systemd[1]: Started sshd@2-139.178.70.99:22-139.178.89.65:43350.service - OpenSSH per-connection server daemon (139.178.89.65:43350).
Jan 29 16:17:52.945758 systemd-logind[1549]: Removed session 4.
Jan 29 16:17:52.978510 sshd[1819]: Accepted publickey for core from 139.178.89.65 port 43350 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:17:52.979564 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:17:52.982296 systemd-logind[1549]: New session 5 of user core.
Jan 29 16:17:52.993715 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 16:17:53.040069 sshd[1822]: Connection closed by 139.178.89.65 port 43350
Jan 29 16:17:53.040367 sshd-session[1819]: pam_unix(sshd:session): session closed for user core
Jan 29 16:17:53.051825 systemd[1]: sshd@2-139.178.70.99:22-139.178.89.65:43350.service: Deactivated successfully.
Jan 29 16:17:53.052710 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 16:17:53.053112 systemd-logind[1549]: Session 5 logged out. Waiting for processes to exit.
Jan 29 16:17:53.054489 systemd[1]: Started sshd@3-139.178.70.99:22-139.178.89.65:43364.service - OpenSSH per-connection server daemon (139.178.89.65:43364).
Jan 29 16:17:53.056017 systemd-logind[1549]: Removed session 5.
Jan 29 16:17:53.100131 sshd[1827]: Accepted publickey for core from 139.178.89.65 port 43364 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:17:53.100907 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:17:53.103364 systemd-logind[1549]: New session 6 of user core.
Jan 29 16:17:53.112686 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 16:17:53.162205 sshd[1830]: Connection closed by 139.178.89.65 port 43364
Jan 29 16:17:53.161636 sshd-session[1827]: pam_unix(sshd:session): session closed for user core
Jan 29 16:17:53.170886 systemd[1]: sshd@3-139.178.70.99:22-139.178.89.65:43364.service: Deactivated successfully.
Jan 29 16:17:53.171889 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 16:17:53.172292 systemd-logind[1549]: Session 6 logged out. Waiting for processes to exit.
Jan 29 16:17:53.175827 systemd[1]: Started sshd@4-139.178.70.99:22-139.178.89.65:43366.service - OpenSSH per-connection server daemon (139.178.89.65:43366).
Jan 29 16:17:53.176801 systemd-logind[1549]: Removed session 6.
Jan 29 16:17:53.208789 sshd[1835]: Accepted publickey for core from 139.178.89.65 port 43366 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:17:53.209738 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:17:53.213062 systemd-logind[1549]: New session 7 of user core.
Jan 29 16:17:53.222782 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 16:17:53.314175 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 16:17:53.314345 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:17:53.331988 sudo[1839]: pam_unix(sudo:session): session closed for user root
Jan 29 16:17:53.332773 sshd[1838]: Connection closed by 139.178.89.65 port 43366
Jan 29 16:17:53.333132 sshd-session[1835]: pam_unix(sshd:session): session closed for user core
Jan 29 16:17:53.341886 systemd[1]: sshd@4-139.178.70.99:22-139.178.89.65:43366.service: Deactivated successfully.
Jan 29 16:17:53.343303 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 16:17:53.343913 systemd-logind[1549]: Session 7 logged out. Waiting for processes to exit.
Jan 29 16:17:53.349866 systemd[1]: Started sshd@5-139.178.70.99:22-139.178.89.65:43378.service - OpenSSH per-connection server daemon (139.178.89.65:43378).
Jan 29 16:17:53.351984 systemd-logind[1549]: Removed session 7.
Jan 29 16:17:53.383343 sshd[1844]: Accepted publickey for core from 139.178.89.65 port 43378 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:17:53.384358 sshd-session[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:17:53.387802 systemd-logind[1549]: New session 8 of user core.
Jan 29 16:17:53.395900 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 16:17:53.443917 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 16:17:53.444366 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:17:53.446274 sudo[1849]: pam_unix(sudo:session): session closed for user root
Jan 29 16:17:53.449381 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 16:17:53.449536 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:17:53.458910 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:17:53.474282 augenrules[1871]: No rules
Jan 29 16:17:53.475010 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:17:53.475228 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:17:53.476735 sudo[1848]: pam_unix(sudo:session): session closed for user root
Jan 29 16:17:53.477448 sshd[1847]: Connection closed by 139.178.89.65 port 43378
Jan 29 16:17:53.477757 sshd-session[1844]: pam_unix(sshd:session): session closed for user core
Jan 29 16:17:53.487130 systemd[1]: sshd@5-139.178.70.99:22-139.178.89.65:43378.service: Deactivated successfully.
Jan 29 16:17:53.488089 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 16:17:53.488904 systemd-logind[1549]: Session 8 logged out. Waiting for processes to exit.
Jan 29 16:17:53.490117 systemd[1]: Started sshd@6-139.178.70.99:22-139.178.89.65:43382.service - OpenSSH per-connection server daemon (139.178.89.65:43382).
Jan 29 16:17:53.490596 systemd-logind[1549]: Removed session 8.
Jan 29 16:17:53.523385 sshd[1879]: Accepted publickey for core from 139.178.89.65 port 43382 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:17:53.524135 sshd-session[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:17:53.527680 systemd-logind[1549]: New session 9 of user core.
Jan 29 16:17:53.536699 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 16:17:53.583865 sudo[1883]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 16:17:53.584020 sudo[1883]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:17:54.018797 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 16:17:54.018872 (dockerd)[1901]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 16:17:54.320608 dockerd[1901]: time="2025-01-29T16:17:54.320504409Z" level=info msg="Starting up"
Jan 29 16:17:54.387978 dockerd[1901]: time="2025-01-29T16:17:54.387945681Z" level=info msg="Loading containers: start."
Jan 29 16:17:54.592627 kernel: Initializing XFRM netlink socket
Jan 29 16:17:54.805242 systemd-networkd[1478]: docker0: Link UP
Jan 29 16:17:54.850276 dockerd[1901]: time="2025-01-29T16:17:54.850255045Z" level=info msg="Loading containers: done."
Jan 29 16:17:54.859452 dockerd[1901]: time="2025-01-29T16:17:54.859220187Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 16:17:54.859452 dockerd[1901]: time="2025-01-29T16:17:54.859275920Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 29 16:17:54.859452 dockerd[1901]: time="2025-01-29T16:17:54.859328942Z" level=info msg="Daemon has completed initialization"
Jan 29 16:17:54.873374 dockerd[1901]: time="2025-01-29T16:17:54.873324923Z" level=info msg="API listen on /run/docker.sock"
Jan 29 16:17:54.873534 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 16:17:55.618628 containerd[1573]: time="2025-01-29T16:17:55.618328203Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 29 16:17:56.236951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210255904.mount: Deactivated successfully.
Jan 29 16:17:56.237687 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 29 16:17:56.241659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:17:56.519796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:17:56.522610 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:17:56.550790 kubelet[2108]: E0129 16:17:56.550766 2108 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:17:56.552254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:17:56.552340 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:17:56.552615 systemd[1]: kubelet.service: Consumed 86ms CPU time, 97.5M memory peak.
Jan 29 16:17:57.511770 containerd[1573]: time="2025-01-29T16:17:57.511221881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:17:57.512243 containerd[1573]: time="2025-01-29T16:17:57.512222618Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012"
Jan 29 16:17:57.512676 containerd[1573]: time="2025-01-29T16:17:57.512664374Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:17:57.515067 containerd[1573]: time="2025-01-29T16:17:57.515046055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:17:57.516625 containerd[1573]: time="2025-01-29T16:17:57.516609126Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.89772879s"
Jan 29 16:17:57.516660 containerd[1573]: time="2025-01-29T16:17:57.516629119Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\""
Jan 29 16:17:57.528571 containerd[1573]: time="2025-01-29T16:17:57.528492940Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 29 16:17:57.578029 update_engine[1550]: I20250129 16:17:57.577649 1550 update_attempter.cc:509] Updating boot flags...
Jan 29 16:17:57.603592 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2175)
Jan 29 16:17:57.644611 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2176)
Jan 29 16:17:59.356061 containerd[1573]: time="2025-01-29T16:17:59.356004228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:17:59.364368 containerd[1573]: time="2025-01-29T16:17:59.364321263Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745"
Jan 29 16:17:59.370235 containerd[1573]: time="2025-01-29T16:17:59.370196577Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:17:59.375911 containerd[1573]: time="2025-01-29T16:17:59.375870267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:17:59.376690 containerd[1573]: time="2025-01-29T16:17:59.376611332Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.848095947s"
Jan 29 16:17:59.376690 containerd[1573]: time="2025-01-29T16:17:59.376633166Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\""
Jan 29 16:17:59.391208 containerd[1573]: time="2025-01-29T16:17:59.391182749Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 29 16:18:00.516731 containerd[1573]: time="2025-01-29T16:18:00.516693914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:18:00.521819 containerd[1573]: time="2025-01-29T16:18:00.521782905Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064"
Jan 29 16:18:00.529249 containerd[1573]: time="2025-01-29T16:18:00.529210267Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:18:00.534483 containerd[1573]: time="2025-01-29T16:18:00.534445573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:18:00.535365 containerd[1573]: time="2025-01-29T16:18:00.535127072Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.143921336s"
Jan 29 16:18:00.535365 containerd[1573]: time="2025-01-29T16:18:00.535149128Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\""
Jan 29 16:18:00.552673 containerd[1573]: time="2025-01-29T16:18:00.552651542Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 29 16:18:01.443103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752007051.mount: Deactivated successfully.
Jan 29 16:18:01.728583 containerd[1573]: time="2025-01-29T16:18:01.728359086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:18:01.733588 containerd[1573]: time="2025-01-29T16:18:01.733543022Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337"
Jan 29 16:18:01.738788 containerd[1573]: time="2025-01-29T16:18:01.738752493Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:18:01.745117 containerd[1573]: time="2025-01-29T16:18:01.745088255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:18:01.745733 containerd[1573]: time="2025-01-29T16:18:01.745447794Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.192680813s"
Jan 29 16:18:01.745733 containerd[1573]: time="2025-01-29T16:18:01.745478622Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\""
Jan 29 16:18:01.762375 containerd[1573]: time="2025-01-29T16:18:01.762340909Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 16:18:02.305413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2462588102.mount: Deactivated successfully.
Jan 29 16:18:03.050045 containerd[1573]: time="2025-01-29T16:18:03.050008053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:18:03.052376 containerd[1573]: time="2025-01-29T16:18:03.052266047Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 29 16:18:03.064385 containerd[1573]: time="2025-01-29T16:18:03.064355362Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:18:03.071374 containerd[1573]: time="2025-01-29T16:18:03.071354203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:18:03.072180 containerd[1573]: time="2025-01-29T16:18:03.071950279Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.309523669s"
Jan 29 16:18:03.072180 containerd[1573]: time="2025-01-29T16:18:03.071968885Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 29 16:18:03.087151 containerd[1573]: time="2025-01-29T16:18:03.087090365Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 29 16:18:03.689724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3641859105.mount: Deactivated successfully.
Jan 29 16:18:03.691825 containerd[1573]: time="2025-01-29T16:18:03.691632375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:18:03.692011 containerd[1573]: time="2025-01-29T16:18:03.691986574Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jan 29 16:18:03.692572 containerd[1573]: time="2025-01-29T16:18:03.692200762Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:18:03.693435 containerd[1573]: time="2025-01-29T16:18:03.693413927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:18:03.694017 containerd[1573]: time="2025-01-29T16:18:03.693884644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 606.749928ms"
Jan 29 16:18:03.694017 containerd[1573]: time="2025-01-29T16:18:03.693901079Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 29 16:18:03.706997 containerd[1573]: time="2025-01-29T16:18:03.706932296Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 29 16:18:04.159536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount257935819.mount: Deactivated successfully.
Jan 29 16:18:06.695868 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 29 16:18:06.705659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:18:07.230622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:18:07.238732 (kubelet)[2321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:18:07.448524 kubelet[2321]: E0129 16:18:07.448299 2321 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:18:07.450913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:18:07.451006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:18:07.451407 systemd[1]: kubelet.service: Consumed 107ms CPU time, 95.2M memory peak.
Jan 29 16:18:08.019969 containerd[1573]: time="2025-01-29T16:18:08.019911619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:18:08.029867 containerd[1573]: time="2025-01-29T16:18:08.029811779Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Jan 29 16:18:08.043895 containerd[1573]: time="2025-01-29T16:18:08.043853536Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:18:08.052824 containerd[1573]: time="2025-01-29T16:18:08.052793967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:18:08.053768 containerd[1573]: time="2025-01-29T16:18:08.053624760Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.34667203s"
Jan 29 16:18:08.053768 containerd[1573]: time="2025-01-29T16:18:08.053649206Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jan 29 16:18:09.842175 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:18:09.842386 systemd[1]: kubelet.service: Consumed 107ms CPU time, 95.2M memory peak.
Jan 29 16:18:09.851707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:18:09.864356 systemd[1]: Reload requested from client PID 2393 ('systemctl') (unit session-9.scope)...
Jan 29 16:18:09.864458 systemd[1]: Reloading...
Jan 29 16:18:09.937579 zram_generator::config[2438]: No configuration found.
Jan 29 16:18:09.990892 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 29 16:18:10.010145 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:18:10.073867 systemd[1]: Reloading finished in 209 ms.
Jan 29 16:18:10.156170 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 29 16:18:10.156248 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 29 16:18:10.156452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:18:10.164026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:18:10.511067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:18:10.525772 (kubelet)[2506]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:18:10.563803 kubelet[2506]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:18:10.563803 kubelet[2506]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:18:10.563803 kubelet[2506]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:18:10.569021 kubelet[2506]: I0129 16:18:10.568993 2506 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:18:10.818059 kubelet[2506]: I0129 16:18:10.817985 2506 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 16:18:10.818059 kubelet[2506]: I0129 16:18:10.818007 2506 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:18:10.818335 kubelet[2506]: I0129 16:18:10.818167 2506 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 16:18:10.859399 kubelet[2506]: I0129 16:18:10.859371 2506 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:18:10.862037 kubelet[2506]: E0129 16:18:10.862017 2506 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.99:6443: connect: connection refused
Jan 29 16:18:10.871933 kubelet[2506]: I0129 16:18:10.871731 2506 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 16:18:10.872779 kubelet[2506]: I0129 16:18:10.872745 2506 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:18:10.874027 kubelet[2506]: I0129 16:18:10.872780 2506 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 16:18:10.874429 kubelet[2506]: I0129 16:18:10.874412 2506 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:18:10.874429 kubelet[2506]: I0129 16:18:10.874429 2506 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 16:18:10.874516 kubelet[2506]: I0129 16:18:10.874503 2506 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:18:10.875703 kubelet[2506]: W0129 16:18:10.875665 2506 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jan 29 16:18:10.875755 kubelet[2506]: E0129 16:18:10.875707 2506 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jan 29 16:18:10.876551 kubelet[2506]: I0129 16:18:10.876535 2506 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 16:18:10.876608 kubelet[2506]: I0129 16:18:10.876551 2506 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:18:10.877197 kubelet[2506]: I0129 16:18:10.877033 2506 kubelet.go:312] "Adding apiserver pod source"
Jan 29 16:18:10.877197 kubelet[2506]: I0129 16:18:10.877057 2506 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:18:10.880398 kubelet[2506]: W0129 16:18:10.880353 2506 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jan 29 16:18:10.880398 kubelet[2506]: E0129 16:18:10.880385 2506 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jan 29 16:18:10.880940 kubelet[2506]: I0129 16:18:10.880726 2506 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:18:10.882349 kubelet[2506]: I0129 16:18:10.881898 2506 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:18:10.882349 kubelet[2506]: W0129 16:18:10.881945 2506 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 16:18:10.882349 kubelet[2506]: I0129 16:18:10.882315 2506 server.go:1264] "Started kubelet"
Jan 29 16:18:10.884739 kubelet[2506]: I0129 16:18:10.884714 2506 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:18:10.886596 kubelet[2506]: I0129 16:18:10.886565 2506 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 16:18:10.890585 kubelet[2506]: I0129 16:18:10.890514 2506 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:18:10.890727 kubelet[2506]: I0129 16:18:10.890711 2506 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:18:10.890895 kubelet[2506]: E0129 16:18:10.890817 2506 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.99:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f361fae2bf233 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:18:10.882302515 +0000 UTC m=+0.354482464,LastTimestamp:2025-01-29 16:18:10.882302515 +0000 UTC m=+0.354482464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 29 16:18:10.892357 kubelet[2506]: I0129 16:18:10.892169 2506 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:18:10.897287 kubelet[2506]: E0129 16:18:10.897271 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:18:10.897375 kubelet[2506]: I0129 16:18:10.897295 2506 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 16:18:10.897375 kubelet[2506]: I0129 16:18:10.897345 2506 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 16:18:10.897375 kubelet[2506]: I0129 16:18:10.897369 2506 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:18:10.898291 kubelet[2506]: W0129 16:18:10.897620 2506 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jan 29 16:18:10.898291 kubelet[2506]: E0129 16:18:10.897656 2506 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jan 29 16:18:10.898291 kubelet[2506]: E0129 16:18:10.897766 2506 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="200ms"
Jan 29 16:18:10.898503 kubelet[2506]: E0129 16:18:10.898487 2506 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 16:18:10.898649 kubelet[2506]: I0129 16:18:10.898637 2506 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:18:10.898725 kubelet[2506]: I0129 16:18:10.898711 2506 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:18:10.899408 kubelet[2506]: I0129 16:18:10.899337 2506 factory.go:221] Registration of the containerd container factory successfully
Jan 29 16:18:10.909541 kubelet[2506]: I0129 16:18:10.909512 2506 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 16:18:10.910570 kubelet[2506]: I0129 16:18:10.910307 2506 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 16:18:10.910570 kubelet[2506]: I0129 16:18:10.910327 2506 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 16:18:10.910570 kubelet[2506]: I0129 16:18:10.910339 2506 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 16:18:10.910570 kubelet[2506]: E0129 16:18:10.910362 2506 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 16:18:10.919720 kubelet[2506]: W0129 16:18:10.919645 2506 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jan 29 16:18:10.919720 kubelet[2506]: E0129 16:18:10.919687 2506 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jan 29 16:18:10.923825 kubelet[2506]: I0129 16:18:10.923807 2506 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 16:18:10.923825 kubelet[2506]: I0129 16:18:10.923818 2506 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 16:18:10.923825 kubelet[2506]: I0129 16:18:10.923828 2506 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:18:10.930958 kubelet[2506]: I0129 16:18:10.930938 2506 policy_none.go:49] "None policy: Start"
Jan 29 16:18:10.931460 kubelet[2506]: I0129 16:18:10.931440 2506 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 16:18:10.931460 kubelet[2506]: I0129 16:18:10.931455 2506 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:18:10.973648 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 16:18:10.982966 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 16:18:10.985828 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 16:18:10.995094 kubelet[2506]: I0129 16:18:10.995078 2506 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:18:10.995576 kubelet[2506]: I0129 16:18:10.995269 2506 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:18:10.995576 kubelet[2506]: I0129 16:18:10.995344 2506 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:18:10.996285 kubelet[2506]: E0129 16:18:10.996273 2506 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 29 16:18:10.998074 kubelet[2506]: I0129 16:18:10.998062 2506 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 16:18:10.998369 kubelet[2506]: E0129 16:18:10.998354 2506 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost"
Jan 29 16:18:11.010783 kubelet[2506]: I0129 16:18:11.010764 2506 topology_manager.go:215] "Topology Admit Handler" podUID="3c741630038eabec7021a49e217952a1" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 29 16:18:11.011677 kubelet[2506]: I0129 16:18:11.011653 2506 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 29 16:18:11.013383 kubelet[2506]: I0129 16:18:11.012631 2506 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 29 16:18:11.019543 systemd[1]: Created slice kubepods-burstable-pod3c741630038eabec7021a49e217952a1.slice - libcontainer container kubepods-burstable-pod3c741630038eabec7021a49e217952a1.slice.
Jan 29 16:18:11.044634 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice.
Jan 29 16:18:11.048213 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice.
Jan 29 16:18:11.098920 kubelet[2506]: E0129 16:18:11.098827 2506 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="400ms"
Jan 29 16:18:11.199601 kubelet[2506]: I0129 16:18:11.199380 2506 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c741630038eabec7021a49e217952a1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c741630038eabec7021a49e217952a1\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 16:18:11.199601 kubelet[2506]: I0129 16:18:11.199413 2506 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:18:11.199601 kubelet[2506]: I0129 16:18:11.199429 2506 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:18:11.199601 kubelet[2506]: I0129 16:18:11.199442 2506 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c741630038eabec7021a49e217952a1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c741630038eabec7021a49e217952a1\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 16:18:11.199601 kubelet[2506]: I0129 16:18:11.199457 2506 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c741630038eabec7021a49e217952a1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3c741630038eabec7021a49e217952a1\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 16:18:11.199802 kubelet[2506]: I0129 16:18:11.199469 2506 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:18:11.199802 kubelet[2506]: I0129 16:18:11.199480 2506 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:18:11.199802 kubelet[2506]: I0129 16:18:11.199492 2506 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:18:11.199802 kubelet[2506]: I0129 16:18:11.199503 2506 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost"
Jan 29 16:18:11.200343 kubelet[2506]: I0129 16:18:11.200198 2506 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 16:18:11.200503 kubelet[2506]: E0129 16:18:11.200486 2506 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost"
Jan 29 16:18:11.343642 containerd[1573]: time="2025-01-29T16:18:11.343503687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3c741630038eabec7021a49e217952a1,Namespace:kube-system,Attempt:0,}"
Jan 29 16:18:11.347370 containerd[1573]: time="2025-01-29T16:18:11.347343707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}"
Jan 29 16:18:11.349888 containerd[1573]: time="2025-01-29T16:18:11.349839495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}"
Jan 29 16:18:11.499360 kubelet[2506]: E0129 16:18:11.499330 2506 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="800ms"
Jan 29 16:18:11.601894 kubelet[2506]: I0129 16:18:11.601768 2506 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 16:18:11.602207 kubelet[2506]: E0129 16:18:11.601996 2506 kubelet_node_status.go:96]
"Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Jan 29 16:18:11.737987 kubelet[2506]: W0129 16:18:11.737920 2506 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Jan 29 16:18:11.737987 kubelet[2506]: E0129 16:18:11.737971 2506 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Jan 29 16:18:11.849847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1030209055.mount: Deactivated successfully. Jan 29 16:18:11.882353 containerd[1573]: time="2025-01-29T16:18:11.881684325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:18:11.894227 containerd[1573]: time="2025-01-29T16:18:11.894042389Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 16:18:11.894347 kubelet[2506]: W0129 16:18:11.894304 2506 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Jan 29 16:18:11.894347 kubelet[2506]: E0129 16:18:11.894346 2506 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Jan 29 16:18:11.917839 containerd[1573]: time="2025-01-29T16:18:11.917755305Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:18:11.924508 containerd[1573]: time="2025-01-29T16:18:11.924480457Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:18:11.931839 containerd[1573]: time="2025-01-29T16:18:11.931803957Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:18:11.940981 containerd[1573]: time="2025-01-29T16:18:11.939553495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:18:11.940981 containerd[1573]: time="2025-01-29T16:18:11.940652014Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 596.017622ms" Jan 29 16:18:11.944487 containerd[1573]: time="2025-01-29T16:18:11.944464940Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:18:11.948683 containerd[1573]: 
time="2025-01-29T16:18:11.948643609Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:18:11.949382 containerd[1573]: time="2025-01-29T16:18:11.949359418Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 599.475851ms" Jan 29 16:18:11.963076 containerd[1573]: time="2025-01-29T16:18:11.963035933Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 615.63648ms" Jan 29 16:18:12.010870 kubelet[2506]: W0129 16:18:12.010821 2506 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Jan 29 16:18:12.010870 kubelet[2506]: E0129 16:18:12.010869 2506 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Jan 29 16:18:12.280371 containerd[1573]: time="2025-01-29T16:18:12.279414798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:18:12.280640 containerd[1573]: time="2025-01-29T16:18:12.280416495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:18:12.280640 containerd[1573]: time="2025-01-29T16:18:12.280437764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:12.280640 containerd[1573]: time="2025-01-29T16:18:12.280532253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:12.288104 containerd[1573]: time="2025-01-29T16:18:12.286199400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:18:12.288104 containerd[1573]: time="2025-01-29T16:18:12.286491166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:18:12.288104 containerd[1573]: time="2025-01-29T16:18:12.286504821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:12.288104 containerd[1573]: time="2025-01-29T16:18:12.286550630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:12.294234 containerd[1573]: time="2025-01-29T16:18:12.293716958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:18:12.294234 containerd[1573]: time="2025-01-29T16:18:12.293798722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:18:12.294234 containerd[1573]: time="2025-01-29T16:18:12.293809249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:12.294234 containerd[1573]: time="2025-01-29T16:18:12.293857603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:12.300114 kubelet[2506]: E0129 16:18:12.300082 2506 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="1.6s" Jan 29 16:18:12.315718 systemd[1]: Started cri-containerd-714bcb5bf3bde3a3dabf1bc0b2e1c3351c117d027f663a7c7a73228a3d154858.scope - libcontainer container 714bcb5bf3bde3a3dabf1bc0b2e1c3351c117d027f663a7c7a73228a3d154858. Jan 29 16:18:12.319134 systemd[1]: Started cri-containerd-4dc025ee40f3e695b9a8b75ba673bc54cd5dad889277a846f8751030892a92de.scope - libcontainer container 4dc025ee40f3e695b9a8b75ba673bc54cd5dad889277a846f8751030892a92de. Jan 29 16:18:12.320404 systemd[1]: Started cri-containerd-51997155d8028047fd85ce97827ba2b2a15fd4b5c186b260f5b01d819df6a371.scope - libcontainer container 51997155d8028047fd85ce97827ba2b2a15fd4b5c186b260f5b01d819df6a371. 
Jan 29 16:18:12.356226 containerd[1573]: time="2025-01-29T16:18:12.356197160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3c741630038eabec7021a49e217952a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"714bcb5bf3bde3a3dabf1bc0b2e1c3351c117d027f663a7c7a73228a3d154858\"" Jan 29 16:18:12.378030 containerd[1573]: time="2025-01-29T16:18:12.377578627Z" level=info msg="CreateContainer within sandbox \"714bcb5bf3bde3a3dabf1bc0b2e1c3351c117d027f663a7c7a73228a3d154858\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:18:12.382542 containerd[1573]: time="2025-01-29T16:18:12.382526030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"51997155d8028047fd85ce97827ba2b2a15fd4b5c186b260f5b01d819df6a371\"" Jan 29 16:18:12.397036 containerd[1573]: time="2025-01-29T16:18:12.397019100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dc025ee40f3e695b9a8b75ba673bc54cd5dad889277a846f8751030892a92de\"" Jan 29 16:18:12.398491 containerd[1573]: time="2025-01-29T16:18:12.398470687Z" level=info msg="CreateContainer within sandbox \"4dc025ee40f3e695b9a8b75ba673bc54cd5dad889277a846f8751030892a92de\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:18:12.398642 containerd[1573]: time="2025-01-29T16:18:12.398627095Z" level=info msg="CreateContainer within sandbox \"51997155d8028047fd85ce97827ba2b2a15fd4b5c186b260f5b01d819df6a371\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:18:12.403312 kubelet[2506]: I0129 16:18:12.403254 2506 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 16:18:12.403435 kubelet[2506]: E0129 16:18:12.403420 2506 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Jan 29 16:18:12.433984 kubelet[2506]: W0129 16:18:12.433940 2506 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Jan 29 16:18:12.433984 kubelet[2506]: E0129 16:18:12.433985 2506 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Jan 29 16:18:12.527853 containerd[1573]: time="2025-01-29T16:18:12.527746438Z" level=info msg="CreateContainer within sandbox \"714bcb5bf3bde3a3dabf1bc0b2e1c3351c117d027f663a7c7a73228a3d154858\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1ec8634ce1431f95c8a068b3eba98a81d70a16f827dd63efb7e59728fceb1d91\"" Jan 29 16:18:12.528289 containerd[1573]: time="2025-01-29T16:18:12.528252085Z" level=info msg="StartContainer for \"1ec8634ce1431f95c8a068b3eba98a81d70a16f827dd63efb7e59728fceb1d91\"" Jan 29 16:18:12.531595 containerd[1573]: time="2025-01-29T16:18:12.531031094Z" level=info msg="CreateContainer within sandbox \"51997155d8028047fd85ce97827ba2b2a15fd4b5c186b260f5b01d819df6a371\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b13e59cf533c6d312400bf98f7f7207c664d954664050b679513bb1c00a3307b\"" Jan 29 16:18:12.532287 containerd[1573]: time="2025-01-29T16:18:12.532272201Z" level=info msg="CreateContainer within sandbox \"4dc025ee40f3e695b9a8b75ba673bc54cd5dad889277a846f8751030892a92de\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns 
container id \"d972283b2dfbca1fef373325918f87192230668ce94efebd30abeac6ed369b83\"" Jan 29 16:18:12.533064 containerd[1573]: time="2025-01-29T16:18:12.533052751Z" level=info msg="StartContainer for \"b13e59cf533c6d312400bf98f7f7207c664d954664050b679513bb1c00a3307b\"" Jan 29 16:18:12.538128 containerd[1573]: time="2025-01-29T16:18:12.538062439Z" level=info msg="StartContainer for \"d972283b2dfbca1fef373325918f87192230668ce94efebd30abeac6ed369b83\"" Jan 29 16:18:12.552985 systemd[1]: Started cri-containerd-1ec8634ce1431f95c8a068b3eba98a81d70a16f827dd63efb7e59728fceb1d91.scope - libcontainer container 1ec8634ce1431f95c8a068b3eba98a81d70a16f827dd63efb7e59728fceb1d91. Jan 29 16:18:12.563763 systemd[1]: Started cri-containerd-b13e59cf533c6d312400bf98f7f7207c664d954664050b679513bb1c00a3307b.scope - libcontainer container b13e59cf533c6d312400bf98f7f7207c664d954664050b679513bb1c00a3307b. Jan 29 16:18:12.573754 systemd[1]: Started cri-containerd-d972283b2dfbca1fef373325918f87192230668ce94efebd30abeac6ed369b83.scope - libcontainer container d972283b2dfbca1fef373325918f87192230668ce94efebd30abeac6ed369b83. 
Jan 29 16:18:12.642908 containerd[1573]: time="2025-01-29T16:18:12.642879247Z" level=info msg="StartContainer for \"b13e59cf533c6d312400bf98f7f7207c664d954664050b679513bb1c00a3307b\" returns successfully" Jan 29 16:18:12.642994 containerd[1573]: time="2025-01-29T16:18:12.642981179Z" level=info msg="StartContainer for \"1ec8634ce1431f95c8a068b3eba98a81d70a16f827dd63efb7e59728fceb1d91\" returns successfully" Jan 29 16:18:12.643020 containerd[1573]: time="2025-01-29T16:18:12.643001603Z" level=info msg="StartContainer for \"d972283b2dfbca1fef373325918f87192230668ce94efebd30abeac6ed369b83\" returns successfully" Jan 29 16:18:12.941639 kubelet[2506]: E0129 16:18:12.941551 2506 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.99:6443: connect: connection refused Jan 29 16:18:13.640672 kubelet[2506]: W0129 16:18:13.640637 2506 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Jan 29 16:18:13.640672 kubelet[2506]: E0129 16:18:13.640674 2506 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Jan 29 16:18:14.004481 kubelet[2506]: I0129 16:18:14.004458 2506 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 16:18:14.810742 kubelet[2506]: E0129 16:18:14.810718 2506 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" 
not found" node="localhost" Jan 29 16:18:14.958727 kubelet[2506]: I0129 16:18:14.958636 2506 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 16:18:14.967952 kubelet[2506]: E0129 16:18:14.967914 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:15.068818 kubelet[2506]: E0129 16:18:15.068615 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:15.169860 kubelet[2506]: E0129 16:18:15.169298 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:15.269884 kubelet[2506]: E0129 16:18:15.269835 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:15.370660 kubelet[2506]: E0129 16:18:15.370552 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:15.471422 kubelet[2506]: E0129 16:18:15.471381 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:15.572065 kubelet[2506]: E0129 16:18:15.572016 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:15.672651 kubelet[2506]: E0129 16:18:15.672514 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:15.773140 kubelet[2506]: E0129 16:18:15.773107 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:15.873857 kubelet[2506]: E0129 16:18:15.873827 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:15.973963 kubelet[2506]: E0129 16:18:15.973889 2506 kubelet_node_status.go:462] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:16.074517 kubelet[2506]: E0129 16:18:16.074494 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:16.175106 kubelet[2506]: E0129 16:18:16.175080 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:16.275613 kubelet[2506]: E0129 16:18:16.275513 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:16.376318 kubelet[2506]: E0129 16:18:16.376268 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:16.476690 kubelet[2506]: E0129 16:18:16.476662 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:16.577129 kubelet[2506]: E0129 16:18:16.577064 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:16.677663 kubelet[2506]: E0129 16:18:16.677610 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:16.778269 kubelet[2506]: E0129 16:18:16.778237 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:16.879270 kubelet[2506]: E0129 16:18:16.879192 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:16.979521 kubelet[2506]: E0129 16:18:16.979496 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:17.039336 systemd[1]: Reload requested from client PID 2776 ('systemctl') (unit session-9.scope)... Jan 29 16:18:17.039350 systemd[1]: Reloading... 
Jan 29 16:18:17.080119 kubelet[2506]: E0129 16:18:17.080097 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:17.120574 zram_generator::config[2824]: No configuration found. Jan 29 16:18:17.181164 kubelet[2506]: E0129 16:18:17.180909 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:17.189868 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 29 16:18:17.207936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:18:17.281817 kubelet[2506]: E0129 16:18:17.281796 2506 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:18:17.282745 systemd[1]: Reloading finished in 243 ms. Jan 29 16:18:17.302775 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:18:17.314394 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:18:17.314603 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:18:17.314646 systemd[1]: kubelet.service: Consumed 487ms CPU time, 112.4M memory peak. Jan 29 16:18:17.324746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:18:17.919276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:18:17.924122 (kubelet)[2887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:18:18.141699 kubelet[2887]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:18:18.142406 kubelet[2887]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:18:18.142406 kubelet[2887]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:18:18.142406 kubelet[2887]: I0129 16:18:18.141960 2887 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:18:18.144476 kubelet[2887]: I0129 16:18:18.144465 2887 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 16:18:18.144527 kubelet[2887]: I0129 16:18:18.144516 2887 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:18:18.144711 kubelet[2887]: I0129 16:18:18.144703 2887 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 16:18:18.145449 kubelet[2887]: I0129 16:18:18.145441 2887 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 16:18:18.178886 kubelet[2887]: I0129 16:18:18.178818 2887 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:18:18.204277 kubelet[2887]: I0129 16:18:18.204257 2887 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:18:18.204642 kubelet[2887]: I0129 16:18:18.204612 2887 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:18:18.204851 kubelet[2887]: I0129 16:18:18.204698 2887 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 16:18:18.205087 kubelet[2887]: I0129 16:18:18.204955 2887 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 
16:18:18.205087 kubelet[2887]: I0129 16:18:18.204969 2887 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 16:18:18.227728 kubelet[2887]: I0129 16:18:18.226743 2887 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:18:18.227728 kubelet[2887]: I0129 16:18:18.226857 2887 kubelet.go:400] "Attempting to sync node with API server" Jan 29 16:18:18.227728 kubelet[2887]: I0129 16:18:18.226869 2887 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:18:18.227728 kubelet[2887]: I0129 16:18:18.226889 2887 kubelet.go:312] "Adding apiserver pod source" Jan 29 16:18:18.227728 kubelet[2887]: I0129 16:18:18.226899 2887 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:18:18.232102 kubelet[2887]: I0129 16:18:18.228312 2887 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:18:18.232102 kubelet[2887]: I0129 16:18:18.228441 2887 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:18:18.232102 kubelet[2887]: I0129 16:18:18.228732 2887 server.go:1264] "Started kubelet" Jan 29 16:18:18.232102 kubelet[2887]: I0129 16:18:18.231201 2887 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:18:18.232102 kubelet[2887]: I0129 16:18:18.231353 2887 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:18:18.232102 kubelet[2887]: I0129 16:18:18.231372 2887 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:18:18.232287 kubelet[2887]: I0129 16:18:18.232236 2887 server.go:455] "Adding debug handlers to kubelet server" Jan 29 16:18:18.233383 kubelet[2887]: I0129 16:18:18.232805 2887 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:18:18.238048 kubelet[2887]: E0129 16:18:18.238028 2887 
kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:18:18.244825 kubelet[2887]: I0129 16:18:18.244743 2887 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 16:18:18.244908 kubelet[2887]: I0129 16:18:18.244858 2887 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:18:18.244982 kubelet[2887]: I0129 16:18:18.244970 2887 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:18:18.256134 kubelet[2887]: I0129 16:18:18.255872 2887 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:18:18.256134 kubelet[2887]: I0129 16:18:18.255929 2887 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:18:18.258254 kubelet[2887]: I0129 16:18:18.257870 2887 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:18:18.265296 kubelet[2887]: I0129 16:18:18.265261 2887 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:18:18.266139 kubelet[2887]: I0129 16:18:18.266124 2887 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:18:18.266190 kubelet[2887]: I0129 16:18:18.266142 2887 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:18:18.266190 kubelet[2887]: I0129 16:18:18.266157 2887 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 16:18:18.266190 kubelet[2887]: E0129 16:18:18.266183 2887 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:18:18.266849 sudo[2902]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 16:18:18.267325 sudo[2902]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 16:18:18.292740 kubelet[2887]: I0129 16:18:18.292711 2887 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:18:18.292740 kubelet[2887]: I0129 16:18:18.292722 2887 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:18:18.292740 kubelet[2887]: I0129 16:18:18.292733 2887 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:18:18.292869 kubelet[2887]: I0129 16:18:18.292830 2887 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 16:18:18.292869 kubelet[2887]: I0129 16:18:18.292839 2887 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 16:18:18.292869 kubelet[2887]: I0129 16:18:18.292855 2887 policy_none.go:49] "None policy: Start" Jan 29 16:18:18.293396 kubelet[2887]: I0129 16:18:18.293383 2887 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:18:18.293396 kubelet[2887]: I0129 16:18:18.293395 2887 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:18:18.293470 kubelet[2887]: I0129 16:18:18.293464 2887 state_mem.go:75] "Updated machine memory state" Jan 29 16:18:18.297951 kubelet[2887]: I0129 16:18:18.297342 2887 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not 
found" Jan 29 16:18:18.297951 kubelet[2887]: I0129 16:18:18.297439 2887 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:18:18.297951 kubelet[2887]: I0129 16:18:18.297507 2887 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:18:18.346882 kubelet[2887]: I0129 16:18:18.346865 2887 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 16:18:18.366641 kubelet[2887]: I0129 16:18:18.366614 2887 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 16:18:18.367176 kubelet[2887]: I0129 16:18:18.367165 2887 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 16:18:18.367344 kubelet[2887]: I0129 16:18:18.367333 2887 topology_manager.go:215] "Topology Admit Handler" podUID="3c741630038eabec7021a49e217952a1" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 16:18:18.391503 kubelet[2887]: I0129 16:18:18.391445 2887 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 29 16:18:18.391840 kubelet[2887]: I0129 16:18:18.391691 2887 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 16:18:18.546739 kubelet[2887]: I0129 16:18:18.546513 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c741630038eabec7021a49e217952a1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c741630038eabec7021a49e217952a1\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:18:18.546739 kubelet[2887]: I0129 16:18:18.546542 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:18:18.546739 kubelet[2887]: I0129 16:18:18.546567 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:18:18.546739 kubelet[2887]: I0129 16:18:18.546588 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:18:18.546739 kubelet[2887]: I0129 16:18:18.546605 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 16:18:18.547020 kubelet[2887]: I0129 16:18:18.546618 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:18:18.547020 kubelet[2887]: I0129 16:18:18.546630 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:18:18.547020 kubelet[2887]: I0129 16:18:18.546640 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c741630038eabec7021a49e217952a1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c741630038eabec7021a49e217952a1\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:18:18.547020 kubelet[2887]: I0129 16:18:18.546651 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c741630038eabec7021a49e217952a1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3c741630038eabec7021a49e217952a1\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:18:18.866915 sudo[2902]: pam_unix(sudo:session): session closed for user root Jan 29 16:18:19.231483 kubelet[2887]: I0129 16:18:19.231461 2887 apiserver.go:52] "Watching apiserver" Jan 29 16:18:19.245939 kubelet[2887]: I0129 16:18:19.245907 2887 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:18:19.294818 kubelet[2887]: E0129 16:18:19.294782 2887 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 16:18:19.296094 kubelet[2887]: E0129 16:18:19.296077 2887 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 29 16:18:19.311295 kubelet[2887]: I0129 16:18:19.311255 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.311242622 podStartE2EDuration="1.311242622s" podCreationTimestamp="2025-01-29 16:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:18:19.305912362 +0000 UTC m=+1.213988507" watchObservedRunningTime="2025-01-29 16:18:19.311242622 +0000 UTC m=+1.219318759" Jan 29 16:18:19.315672 kubelet[2887]: I0129 16:18:19.315636 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.315612066 podStartE2EDuration="1.315612066s" podCreationTimestamp="2025-01-29 16:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:18:19.311564736 +0000 UTC m=+1.219640871" watchObservedRunningTime="2025-01-29 16:18:19.315612066 +0000 UTC m=+1.223688209" Jan 29 16:18:19.322802 kubelet[2887]: I0129 16:18:19.322742 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.322730487 podStartE2EDuration="1.322730487s" podCreationTimestamp="2025-01-29 16:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:18:19.316016986 +0000 UTC m=+1.224093129" watchObservedRunningTime="2025-01-29 16:18:19.322730487 +0000 UTC m=+1.230806632" Jan 29 16:18:21.175380 sudo[1883]: pam_unix(sudo:session): session closed for user root Jan 29 16:18:21.176702 sshd[1882]: Connection closed by 139.178.89.65 port 43382 Jan 29 16:18:21.194180 sshd-session[1879]: pam_unix(sshd:session): session closed for user core Jan 29 16:18:21.196256 systemd[1]: sshd@6-139.178.70.99:22-139.178.89.65:43382.service: Deactivated successfully. 
Jan 29 16:18:21.198059 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:18:21.198268 systemd[1]: session-9.scope: Consumed 3.177s CPU time, 232.2M memory peak. Jan 29 16:18:21.200483 systemd-logind[1549]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:18:21.201222 systemd-logind[1549]: Removed session 9. Jan 29 16:18:32.838489 kubelet[2887]: I0129 16:18:32.837770 2887 topology_manager.go:215] "Topology Admit Handler" podUID="c3e89957-4c7c-42a7-bb3c-d9a5c4406129" podNamespace="kube-system" podName="kube-proxy-6d8z5" Jan 29 16:18:32.846441 systemd[1]: Created slice kubepods-besteffort-podc3e89957_4c7c_42a7_bb3c_d9a5c4406129.slice - libcontainer container kubepods-besteffort-podc3e89957_4c7c_42a7_bb3c_d9a5c4406129.slice. Jan 29 16:18:32.858741 kubelet[2887]: I0129 16:18:32.857589 2887 topology_manager.go:215] "Topology Admit Handler" podUID="39785176-f5d8-401f-82c9-a03a804d4538" podNamespace="kube-system" podName="cilium-9fwlj" Jan 29 16:18:32.862355 systemd[1]: Created slice kubepods-burstable-pod39785176_f5d8_401f_82c9_a03a804d4538.slice - libcontainer container kubepods-burstable-pod39785176_f5d8_401f_82c9_a03a804d4538.slice. Jan 29 16:18:32.866603 kubelet[2887]: I0129 16:18:32.866587 2887 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 16:18:32.866927 containerd[1573]: time="2025-01-29T16:18:32.866905187Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 16:18:32.867226 kubelet[2887]: I0129 16:18:32.867214 2887 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 16:18:32.896017 kubelet[2887]: I0129 16:18:32.895641 2887 topology_manager.go:215] "Topology Admit Handler" podUID="152c5eba-02f2-4052-8131-dc96254cb7fe" podNamespace="kube-system" podName="cilium-operator-599987898-dhpcc" Jan 29 16:18:32.901327 systemd[1]: Created slice kubepods-besteffort-pod152c5eba_02f2_4052_8131_dc96254cb7fe.slice - libcontainer container kubepods-besteffort-pod152c5eba_02f2_4052_8131_dc96254cb7fe.slice. Jan 29 16:18:33.032321 kubelet[2887]: I0129 16:18:33.032289 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/152c5eba-02f2-4052-8131-dc96254cb7fe-cilium-config-path\") pod \"cilium-operator-599987898-dhpcc\" (UID: \"152c5eba-02f2-4052-8131-dc96254cb7fe\") " pod="kube-system/cilium-operator-599987898-dhpcc" Jan 29 16:18:33.032321 kubelet[2887]: I0129 16:18:33.032323 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5c2c\" (UniqueName: \"kubernetes.io/projected/152c5eba-02f2-4052-8131-dc96254cb7fe-kube-api-access-r5c2c\") pod \"cilium-operator-599987898-dhpcc\" (UID: \"152c5eba-02f2-4052-8131-dc96254cb7fe\") " pod="kube-system/cilium-operator-599987898-dhpcc" Jan 29 16:18:33.032436 kubelet[2887]: I0129 16:18:33.032339 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-etc-cni-netd\") pod \"cilium-9fwlj\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " pod="kube-system/cilium-9fwlj" Jan 29 16:18:33.032436 kubelet[2887]: I0129 16:18:33.032369 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/c3e89957-4c7c-42a7-bb3c-d9a5c4406129-xtables-lock\") pod \"kube-proxy-6d8z5\" (UID: \"c3e89957-4c7c-42a7-bb3c-d9a5c4406129\") " pod="kube-system/kube-proxy-6d8z5" Jan 29 16:18:33.032436 kubelet[2887]: I0129 16:18:33.032378 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3e89957-4c7c-42a7-bb3c-d9a5c4406129-lib-modules\") pod \"kube-proxy-6d8z5\" (UID: \"c3e89957-4c7c-42a7-bb3c-d9a5c4406129\") " pod="kube-system/kube-proxy-6d8z5" Jan 29 16:18:33.032436 kubelet[2887]: I0129 16:18:33.032387 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39785176-f5d8-401f-82c9-a03a804d4538-clustermesh-secrets\") pod \"cilium-9fwlj\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " pod="kube-system/cilium-9fwlj" Jan 29 16:18:33.032436 kubelet[2887]: I0129 16:18:33.032396 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-hostproc\") pod \"cilium-9fwlj\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " pod="kube-system/cilium-9fwlj" Jan 29 16:18:33.032436 kubelet[2887]: I0129 16:18:33.032403 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-cni-path\") pod \"cilium-9fwlj\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " pod="kube-system/cilium-9fwlj" Jan 29 16:18:33.032578 kubelet[2887]: I0129 16:18:33.032412 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-host-proc-sys-kernel\") pod \"cilium-9fwlj\" (UID: 
\"39785176-f5d8-401f-82c9-a03a804d4538\") " pod="kube-system/cilium-9fwlj" Jan 29 16:18:33.032578 kubelet[2887]: I0129 16:18:33.032421 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qslsv\" (UniqueName: \"kubernetes.io/projected/39785176-f5d8-401f-82c9-a03a804d4538-kube-api-access-qslsv\") pod \"cilium-9fwlj\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " pod="kube-system/cilium-9fwlj" Jan 29 16:18:33.032578 kubelet[2887]: I0129 16:18:33.032430 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-cilium-run\") pod \"cilium-9fwlj\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " pod="kube-system/cilium-9fwlj" Jan 29 16:18:33.032578 kubelet[2887]: I0129 16:18:33.032439 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-host-proc-sys-net\") pod \"cilium-9fwlj\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " pod="kube-system/cilium-9fwlj" Jan 29 16:18:33.032578 kubelet[2887]: I0129 16:18:33.032455 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2wz2\" (UniqueName: \"kubernetes.io/projected/c3e89957-4c7c-42a7-bb3c-d9a5c4406129-kube-api-access-p2wz2\") pod \"kube-proxy-6d8z5\" (UID: \"c3e89957-4c7c-42a7-bb3c-d9a5c4406129\") " pod="kube-system/kube-proxy-6d8z5" Jan 29 16:18:33.032674 kubelet[2887]: I0129 16:18:33.032464 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-bpf-maps\") pod \"cilium-9fwlj\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " pod="kube-system/cilium-9fwlj" Jan 29 16:18:33.032674 
kubelet[2887]: I0129 16:18:33.032473 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39785176-f5d8-401f-82c9-a03a804d4538-cilium-config-path\") pod \"cilium-9fwlj\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " pod="kube-system/cilium-9fwlj" Jan 29 16:18:33.032674 kubelet[2887]: I0129 16:18:33.032485 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-cilium-cgroup\") pod \"cilium-9fwlj\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " pod="kube-system/cilium-9fwlj" Jan 29 16:18:33.032674 kubelet[2887]: I0129 16:18:33.032498 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-lib-modules\") pod \"cilium-9fwlj\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " pod="kube-system/cilium-9fwlj" Jan 29 16:18:33.032674 kubelet[2887]: I0129 16:18:33.032510 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-xtables-lock\") pod \"cilium-9fwlj\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " pod="kube-system/cilium-9fwlj" Jan 29 16:18:33.032674 kubelet[2887]: I0129 16:18:33.032519 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39785176-f5d8-401f-82c9-a03a804d4538-hubble-tls\") pod \"cilium-9fwlj\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " pod="kube-system/cilium-9fwlj" Jan 29 16:18:33.032774 kubelet[2887]: I0129 16:18:33.032527 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c3e89957-4c7c-42a7-bb3c-d9a5c4406129-kube-proxy\") pod \"kube-proxy-6d8z5\" (UID: \"c3e89957-4c7c-42a7-bb3c-d9a5c4406129\") " pod="kube-system/kube-proxy-6d8z5" Jan 29 16:18:33.155821 containerd[1573]: time="2025-01-29T16:18:33.154259381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6d8z5,Uid:c3e89957-4c7c-42a7-bb3c-d9a5c4406129,Namespace:kube-system,Attempt:0,}" Jan 29 16:18:33.167380 containerd[1573]: time="2025-01-29T16:18:33.167162244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9fwlj,Uid:39785176-f5d8-401f-82c9-a03a804d4538,Namespace:kube-system,Attempt:0,}" Jan 29 16:18:33.176975 containerd[1573]: time="2025-01-29T16:18:33.176762798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:18:33.177362 containerd[1573]: time="2025-01-29T16:18:33.176970808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:18:33.177362 containerd[1573]: time="2025-01-29T16:18:33.176982209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:33.177362 containerd[1573]: time="2025-01-29T16:18:33.177139758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:33.188732 systemd[1]: Started cri-containerd-ba4ca83ecb1bd038c65f088eb75e9d1b604e25a9f85a017606d721fc57afabfc.scope - libcontainer container ba4ca83ecb1bd038c65f088eb75e9d1b604e25a9f85a017606d721fc57afabfc. Jan 29 16:18:33.192237 containerd[1573]: time="2025-01-29T16:18:33.192109526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:18:33.192237 containerd[1573]: time="2025-01-29T16:18:33.192142430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:18:33.192437 containerd[1573]: time="2025-01-29T16:18:33.192149639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:33.192437 containerd[1573]: time="2025-01-29T16:18:33.192287667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:33.204535 containerd[1573]: time="2025-01-29T16:18:33.204465928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-dhpcc,Uid:152c5eba-02f2-4052-8131-dc96254cb7fe,Namespace:kube-system,Attempt:0,}" Jan 29 16:18:33.207716 systemd[1]: Started cri-containerd-427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537.scope - libcontainer container 427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537. Jan 29 16:18:33.214257 containerd[1573]: time="2025-01-29T16:18:33.214208470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6d8z5,Uid:c3e89957-4c7c-42a7-bb3c-d9a5c4406129,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba4ca83ecb1bd038c65f088eb75e9d1b604e25a9f85a017606d721fc57afabfc\"" Jan 29 16:18:33.224224 containerd[1573]: time="2025-01-29T16:18:33.224204640Z" level=info msg="CreateContainer within sandbox \"ba4ca83ecb1bd038c65f088eb75e9d1b604e25a9f85a017606d721fc57afabfc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:18:33.239403 containerd[1573]: time="2025-01-29T16:18:33.239202376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:18:33.239403 containerd[1573]: time="2025-01-29T16:18:33.239233317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:18:33.239403 containerd[1573]: time="2025-01-29T16:18:33.239240507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:33.239403 containerd[1573]: time="2025-01-29T16:18:33.239280934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:33.240835 containerd[1573]: time="2025-01-29T16:18:33.240705573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9fwlj,Uid:39785176-f5d8-401f-82c9-a03a804d4538,Namespace:kube-system,Attempt:0,} returns sandbox id \"427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537\"" Jan 29 16:18:33.241314 containerd[1573]: time="2025-01-29T16:18:33.241206965Z" level=info msg="CreateContainer within sandbox \"ba4ca83ecb1bd038c65f088eb75e9d1b604e25a9f85a017606d721fc57afabfc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b486394ebce4b070fe080ee7271b2349023348dce1fb5f872fdc4431d9143440\"" Jan 29 16:18:33.243474 containerd[1573]: time="2025-01-29T16:18:33.242672465Z" level=info msg="StartContainer for \"b486394ebce4b070fe080ee7271b2349023348dce1fb5f872fdc4431d9143440\"" Jan 29 16:18:33.246044 containerd[1573]: time="2025-01-29T16:18:33.245860046Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 16:18:33.256835 systemd[1]: Started cri-containerd-531c6d63f8ec98e0919ce2813cf695cd4ef74cae59bcc5eadf27b2382028d3ac.scope - libcontainer container 531c6d63f8ec98e0919ce2813cf695cd4ef74cae59bcc5eadf27b2382028d3ac. 
Jan 29 16:18:33.275680 systemd[1]: Started cri-containerd-b486394ebce4b070fe080ee7271b2349023348dce1fb5f872fdc4431d9143440.scope - libcontainer container b486394ebce4b070fe080ee7271b2349023348dce1fb5f872fdc4431d9143440. Jan 29 16:18:33.298353 containerd[1573]: time="2025-01-29T16:18:33.298328708Z" level=info msg="StartContainer for \"b486394ebce4b070fe080ee7271b2349023348dce1fb5f872fdc4431d9143440\" returns successfully" Jan 29 16:18:33.323632 containerd[1573]: time="2025-01-29T16:18:33.323582176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-dhpcc,Uid:152c5eba-02f2-4052-8131-dc96254cb7fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"531c6d63f8ec98e0919ce2813cf695cd4ef74cae59bcc5eadf27b2382028d3ac\"" Jan 29 16:18:34.309064 kubelet[2887]: I0129 16:18:34.309027 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6d8z5" podStartSLOduration=2.309014805 podStartE2EDuration="2.309014805s" podCreationTimestamp="2025-01-29 16:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:18:34.308930718 +0000 UTC m=+16.217006857" watchObservedRunningTime="2025-01-29 16:18:34.309014805 +0000 UTC m=+16.217090943" Jan 29 16:18:37.519974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2502485887.mount: Deactivated successfully. 
Jan 29 16:18:39.989019 containerd[1573]: time="2025-01-29T16:18:39.988900608Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:39.991342 containerd[1573]: time="2025-01-29T16:18:39.991298747Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 16:18:39.996971 containerd[1573]: time="2025-01-29T16:18:39.996913189Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:39.998442 containerd[1573]: time="2025-01-29T16:18:39.998315799Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.752431435s" Jan 29 16:18:39.998442 containerd[1573]: time="2025-01-29T16:18:39.998352203Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 16:18:40.001470 containerd[1573]: time="2025-01-29T16:18:39.999899001Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 16:18:40.004133 containerd[1573]: time="2025-01-29T16:18:40.004102519Z" level=info msg="CreateContainer within sandbox \"427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:18:40.067386 containerd[1573]: time="2025-01-29T16:18:40.067357392Z" level=info msg="CreateContainer within sandbox \"427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42\"" Jan 29 16:18:40.067907 containerd[1573]: time="2025-01-29T16:18:40.067851390Z" level=info msg="StartContainer for \"41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42\"" Jan 29 16:18:40.201644 systemd[1]: Started cri-containerd-41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42.scope - libcontainer container 41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42. Jan 29 16:18:40.219441 containerd[1573]: time="2025-01-29T16:18:40.219417372Z" level=info msg="StartContainer for \"41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42\" returns successfully" Jan 29 16:18:40.227655 systemd[1]: cri-containerd-41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42.scope: Deactivated successfully. Jan 29 16:18:40.258104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42-rootfs.mount: Deactivated successfully. 
Jan 29 16:18:40.396439 containerd[1573]: time="2025-01-29T16:18:40.385002819Z" level=info msg="shim disconnected" id=41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42 namespace=k8s.io Jan 29 16:18:40.396439 containerd[1573]: time="2025-01-29T16:18:40.396367743Z" level=warning msg="cleaning up after shim disconnected" id=41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42 namespace=k8s.io Jan 29 16:18:40.396439 containerd[1573]: time="2025-01-29T16:18:40.396381877Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:18:41.407402 containerd[1573]: time="2025-01-29T16:18:41.407330891Z" level=info msg="CreateContainer within sandbox \"427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:18:41.465565 containerd[1573]: time="2025-01-29T16:18:41.465526152Z" level=info msg="CreateContainer within sandbox \"427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613\"" Jan 29 16:18:41.470490 containerd[1573]: time="2025-01-29T16:18:41.466101186Z" level=info msg="StartContainer for \"324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613\"" Jan 29 16:18:41.489758 systemd[1]: Started cri-containerd-324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613.scope - libcontainer container 324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613. Jan 29 16:18:41.514146 containerd[1573]: time="2025-01-29T16:18:41.514114064Z" level=info msg="StartContainer for \"324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613\" returns successfully" Jan 29 16:18:41.523275 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:18:41.523456 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 16:18:41.523813 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:18:41.526769 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:18:41.527972 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:18:41.528269 systemd[1]: cri-containerd-324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613.scope: Deactivated successfully. Jan 29 16:18:41.552438 containerd[1573]: time="2025-01-29T16:18:41.552366529Z" level=info msg="shim disconnected" id=324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613 namespace=k8s.io Jan 29 16:18:41.552838 containerd[1573]: time="2025-01-29T16:18:41.552421793Z" level=warning msg="cleaning up after shim disconnected" id=324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613 namespace=k8s.io Jan 29 16:18:41.552838 containerd[1573]: time="2025-01-29T16:18:41.552685885Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:18:41.585735 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:18:42.411153 containerd[1573]: time="2025-01-29T16:18:42.410799903Z" level=info msg="CreateContainer within sandbox \"427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:18:42.447361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613-rootfs.mount: Deactivated successfully. 
Jan 29 16:18:42.460509 containerd[1573]: time="2025-01-29T16:18:42.460417338Z" level=info msg="CreateContainer within sandbox \"427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00\"" Jan 29 16:18:42.461553 containerd[1573]: time="2025-01-29T16:18:42.461525900Z" level=info msg="StartContainer for \"67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00\"" Jan 29 16:18:42.489769 systemd[1]: Started cri-containerd-67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00.scope - libcontainer container 67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00. Jan 29 16:18:42.512616 containerd[1573]: time="2025-01-29T16:18:42.512522003Z" level=info msg="StartContainer for \"67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00\" returns successfully" Jan 29 16:18:42.513689 systemd[1]: cri-containerd-67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00.scope: Deactivated successfully. 
Jan 29 16:18:42.547463 containerd[1573]: time="2025-01-29T16:18:42.547421277Z" level=info msg="shim disconnected" id=67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00 namespace=k8s.io Jan 29 16:18:42.547724 containerd[1573]: time="2025-01-29T16:18:42.547616714Z" level=warning msg="cleaning up after shim disconnected" id=67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00 namespace=k8s.io Jan 29 16:18:42.547724 containerd[1573]: time="2025-01-29T16:18:42.547627829Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:18:43.414750 containerd[1573]: time="2025-01-29T16:18:43.414641009Z" level=info msg="CreateContainer within sandbox \"427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:18:43.442550 containerd[1573]: time="2025-01-29T16:18:43.442348219Z" level=info msg="CreateContainer within sandbox \"427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe\"" Jan 29 16:18:43.444139 containerd[1573]: time="2025-01-29T16:18:43.443833441Z" level=info msg="StartContainer for \"f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe\"" Jan 29 16:18:43.447638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00-rootfs.mount: Deactivated successfully. Jan 29 16:18:43.481755 systemd[1]: Started cri-containerd-f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe.scope - libcontainer container f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe. Jan 29 16:18:43.504639 systemd[1]: cri-containerd-f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe.scope: Deactivated successfully. 
Jan 29 16:18:43.511460 containerd[1573]: time="2025-01-29T16:18:43.511421326Z" level=info msg="StartContainer for \"f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe\" returns successfully" Jan 29 16:18:43.527189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe-rootfs.mount: Deactivated successfully. Jan 29 16:18:43.572254 containerd[1573]: time="2025-01-29T16:18:43.572205343Z" level=info msg="shim disconnected" id=f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe namespace=k8s.io Jan 29 16:18:43.572254 containerd[1573]: time="2025-01-29T16:18:43.572244560Z" level=warning msg="cleaning up after shim disconnected" id=f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe namespace=k8s.io Jan 29 16:18:43.572254 containerd[1573]: time="2025-01-29T16:18:43.572250120Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:18:44.428109 containerd[1573]: time="2025-01-29T16:18:44.428082376Z" level=info msg="CreateContainer within sandbox \"427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:18:44.606911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2062594150.mount: Deactivated successfully. 
Jan 29 16:18:44.677077 containerd[1573]: time="2025-01-29T16:18:44.677040559Z" level=info msg="CreateContainer within sandbox \"427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91\"" Jan 29 16:18:44.678466 containerd[1573]: time="2025-01-29T16:18:44.677669854Z" level=info msg="StartContainer for \"9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91\"" Jan 29 16:18:44.700999 systemd[1]: Started cri-containerd-9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91.scope - libcontainer container 9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91. Jan 29 16:18:44.741255 containerd[1573]: time="2025-01-29T16:18:44.741216528Z" level=info msg="StartContainer for \"9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91\" returns successfully" Jan 29 16:18:45.106173 kubelet[2887]: I0129 16:18:45.105950 2887 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 16:18:45.576358 kubelet[2887]: I0129 16:18:45.572315 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9fwlj" podStartSLOduration=6.816713899 podStartE2EDuration="13.572023729s" podCreationTimestamp="2025-01-29 16:18:32 +0000 UTC" firstStartedPulling="2025-01-29 16:18:33.244192334 +0000 UTC m=+15.152268468" lastFinishedPulling="2025-01-29 16:18:39.999502164 +0000 UTC m=+21.907578298" observedRunningTime="2025-01-29 16:18:45.565252526 +0000 UTC m=+27.473328670" watchObservedRunningTime="2025-01-29 16:18:45.572023729 +0000 UTC m=+27.480099866" Jan 29 16:18:45.578236 containerd[1573]: time="2025-01-29T16:18:45.577370290Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 
16:18:45.578672 containerd[1573]: time="2025-01-29T16:18:45.578639317Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 16:18:45.578987 kubelet[2887]: I0129 16:18:45.578936 2887 topology_manager.go:215] "Topology Admit Handler" podUID="0db96dd7-2616-4644-b102-1b1bbc27bd77" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7x5fs" Jan 29 16:18:45.579501 containerd[1573]: time="2025-01-29T16:18:45.579362005Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:45.580994 containerd[1573]: time="2025-01-29T16:18:45.580970157Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.581049082s" Jan 29 16:18:45.581037 containerd[1573]: time="2025-01-29T16:18:45.580998216Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 16:18:45.594807 kubelet[2887]: I0129 16:18:45.594703 2887 topology_manager.go:215] "Topology Admit Handler" podUID="87bfee5a-f1be-4ae7-9ee5-39ea9c39f969" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ggdwt" Jan 29 16:18:45.604964 containerd[1573]: time="2025-01-29T16:18:45.604866339Z" level=info msg="CreateContainer within sandbox \"531c6d63f8ec98e0919ce2813cf695cd4ef74cae59bcc5eadf27b2382028d3ac\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 16:18:45.614950 systemd[1]: Created slice kubepods-burstable-pod87bfee5a_f1be_4ae7_9ee5_39ea9c39f969.slice - libcontainer container kubepods-burstable-pod87bfee5a_f1be_4ae7_9ee5_39ea9c39f969.slice. Jan 29 16:18:45.630836 containerd[1573]: time="2025-01-29T16:18:45.630810622Z" level=info msg="CreateContainer within sandbox \"531c6d63f8ec98e0919ce2813cf695cd4ef74cae59bcc5eadf27b2382028d3ac\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec\"" Jan 29 16:18:45.631634 containerd[1573]: time="2025-01-29T16:18:45.631104678Z" level=info msg="StartContainer for \"f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec\"" Jan 29 16:18:45.635053 systemd[1]: Created slice kubepods-burstable-pod0db96dd7_2616_4644_b102_1b1bbc27bd77.slice - libcontainer container kubepods-burstable-pod0db96dd7_2616_4644_b102_1b1bbc27bd77.slice. Jan 29 16:18:45.653682 systemd[1]: Started cri-containerd-f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec.scope - libcontainer container f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec. 
Jan 29 16:18:45.675013 containerd[1573]: time="2025-01-29T16:18:45.674927025Z" level=info msg="StartContainer for \"f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec\" returns successfully" Jan 29 16:18:45.687455 kubelet[2887]: I0129 16:18:45.686995 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0db96dd7-2616-4644-b102-1b1bbc27bd77-config-volume\") pod \"coredns-7db6d8ff4d-7x5fs\" (UID: \"0db96dd7-2616-4644-b102-1b1bbc27bd77\") " pod="kube-system/coredns-7db6d8ff4d-7x5fs" Jan 29 16:18:45.687455 kubelet[2887]: I0129 16:18:45.687023 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87bfee5a-f1be-4ae7-9ee5-39ea9c39f969-config-volume\") pod \"coredns-7db6d8ff4d-ggdwt\" (UID: \"87bfee5a-f1be-4ae7-9ee5-39ea9c39f969\") " pod="kube-system/coredns-7db6d8ff4d-ggdwt" Jan 29 16:18:45.687455 kubelet[2887]: I0129 16:18:45.687039 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsw2z\" (UniqueName: \"kubernetes.io/projected/87bfee5a-f1be-4ae7-9ee5-39ea9c39f969-kube-api-access-lsw2z\") pod \"coredns-7db6d8ff4d-ggdwt\" (UID: \"87bfee5a-f1be-4ae7-9ee5-39ea9c39f969\") " pod="kube-system/coredns-7db6d8ff4d-ggdwt" Jan 29 16:18:45.687455 kubelet[2887]: I0129 16:18:45.687051 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvwkx\" (UniqueName: \"kubernetes.io/projected/0db96dd7-2616-4644-b102-1b1bbc27bd77-kube-api-access-vvwkx\") pod \"coredns-7db6d8ff4d-7x5fs\" (UID: \"0db96dd7-2616-4644-b102-1b1bbc27bd77\") " pod="kube-system/coredns-7db6d8ff4d-7x5fs" Jan 29 16:18:45.934922 containerd[1573]: time="2025-01-29T16:18:45.934825750Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-ggdwt,Uid:87bfee5a-f1be-4ae7-9ee5-39ea9c39f969,Namespace:kube-system,Attempt:0,}" Jan 29 16:18:45.941389 containerd[1573]: time="2025-01-29T16:18:45.941348420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7x5fs,Uid:0db96dd7-2616-4644-b102-1b1bbc27bd77,Namespace:kube-system,Attempt:0,}" Jan 29 16:18:46.488254 kubelet[2887]: I0129 16:18:46.487920 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-dhpcc" podStartSLOduration=2.224699636 podStartE2EDuration="14.487909047s" podCreationTimestamp="2025-01-29 16:18:32 +0000 UTC" firstStartedPulling="2025-01-29 16:18:33.324741642 +0000 UTC m=+15.232817782" lastFinishedPulling="2025-01-29 16:18:45.587951059 +0000 UTC m=+27.496027193" observedRunningTime="2025-01-29 16:18:46.487566044 +0000 UTC m=+28.395642178" watchObservedRunningTime="2025-01-29 16:18:46.487909047 +0000 UTC m=+28.395985185" Jan 29 16:18:49.731445 systemd-networkd[1478]: cilium_host: Link UP Jan 29 16:18:49.732079 systemd-networkd[1478]: cilium_net: Link UP Jan 29 16:18:49.732513 systemd-networkd[1478]: cilium_net: Gained carrier Jan 29 16:18:49.733063 systemd-networkd[1478]: cilium_host: Gained carrier Jan 29 16:18:49.913338 systemd-networkd[1478]: cilium_vxlan: Link UP Jan 29 16:18:49.913342 systemd-networkd[1478]: cilium_vxlan: Gained carrier Jan 29 16:18:50.344666 systemd-networkd[1478]: cilium_host: Gained IPv6LL Jan 29 16:18:50.407684 systemd-networkd[1478]: cilium_net: Gained IPv6LL Jan 29 16:18:50.636573 kernel: NET: Registered PF_ALG protocol family Jan 29 16:18:50.920831 systemd-networkd[1478]: cilium_vxlan: Gained IPv6LL Jan 29 16:18:51.055063 systemd-networkd[1478]: lxc_health: Link UP Jan 29 16:18:51.061468 systemd-networkd[1478]: lxc_health: Gained carrier Jan 29 16:18:51.547966 systemd-networkd[1478]: lxc9dc66e83ff8d: Link UP Jan 29 16:18:51.550573 kernel: eth0: renamed from tmpf2d78 Jan 29 
16:18:51.555040 systemd-networkd[1478]: lxc669551e6591f: Link UP Jan 29 16:18:51.561335 systemd-networkd[1478]: lxc9dc66e83ff8d: Gained carrier Jan 29 16:18:51.564612 kernel: eth0: renamed from tmp1dfba Jan 29 16:18:51.568702 systemd-networkd[1478]: lxc669551e6591f: Gained carrier Jan 29 16:18:52.327897 systemd-networkd[1478]: lxc_health: Gained IPv6LL Jan 29 16:18:52.967663 systemd-networkd[1478]: lxc669551e6591f: Gained IPv6LL Jan 29 16:18:53.225661 systemd-networkd[1478]: lxc9dc66e83ff8d: Gained IPv6LL Jan 29 16:18:54.218656 containerd[1573]: time="2025-01-29T16:18:54.215482132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:18:54.218656 containerd[1573]: time="2025-01-29T16:18:54.215805247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:18:54.218656 containerd[1573]: time="2025-01-29T16:18:54.215833006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:54.218656 containerd[1573]: time="2025-01-29T16:18:54.215964174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:54.236300 containerd[1573]: time="2025-01-29T16:18:54.235145124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:18:54.236300 containerd[1573]: time="2025-01-29T16:18:54.235189873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:18:54.236300 containerd[1573]: time="2025-01-29T16:18:54.235200280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:54.236300 containerd[1573]: time="2025-01-29T16:18:54.235251311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:54.256664 systemd[1]: Started cri-containerd-1dfba6e9fcaa553564da49af501662e9c377200c0b873df97535bf09685fb32a.scope - libcontainer container 1dfba6e9fcaa553564da49af501662e9c377200c0b873df97535bf09685fb32a. Jan 29 16:18:54.257914 systemd[1]: Started cri-containerd-f2d78dd833b55c38c3aad231af591e31f7f167f9947f2c74c43037d3a239f8a1.scope - libcontainer container f2d78dd833b55c38c3aad231af591e31f7f167f9947f2c74c43037d3a239f8a1. Jan 29 16:18:54.269961 systemd-resolved[1479]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:18:54.276550 systemd-resolved[1479]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:18:54.307436 containerd[1573]: time="2025-01-29T16:18:54.306671282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ggdwt,Uid:87bfee5a-f1be-4ae7-9ee5-39ea9c39f969,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2d78dd833b55c38c3aad231af591e31f7f167f9947f2c74c43037d3a239f8a1\"" Jan 29 16:18:54.319832 containerd[1573]: time="2025-01-29T16:18:54.319809939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7x5fs,Uid:0db96dd7-2616-4644-b102-1b1bbc27bd77,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dfba6e9fcaa553564da49af501662e9c377200c0b873df97535bf09685fb32a\"" Jan 29 16:18:54.379732 containerd[1573]: time="2025-01-29T16:18:54.379700432Z" level=info msg="CreateContainer within sandbox \"1dfba6e9fcaa553564da49af501662e9c377200c0b873df97535bf09685fb32a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:18:54.380271 containerd[1573]: time="2025-01-29T16:18:54.380257534Z" level=info 
msg="CreateContainer within sandbox \"f2d78dd833b55c38c3aad231af591e31f7f167f9947f2c74c43037d3a239f8a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:18:54.559310 containerd[1573]: time="2025-01-29T16:18:54.559238013Z" level=info msg="CreateContainer within sandbox \"f2d78dd833b55c38c3aad231af591e31f7f167f9947f2c74c43037d3a239f8a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"62cfb0390d4ebeafd136e0cb0a24656fa1531b537aa09e7fee82177826b1a7ca\"" Jan 29 16:18:54.560401 containerd[1573]: time="2025-01-29T16:18:54.559860746Z" level=info msg="StartContainer for \"62cfb0390d4ebeafd136e0cb0a24656fa1531b537aa09e7fee82177826b1a7ca\"" Jan 29 16:18:54.578699 systemd[1]: Started cri-containerd-62cfb0390d4ebeafd136e0cb0a24656fa1531b537aa09e7fee82177826b1a7ca.scope - libcontainer container 62cfb0390d4ebeafd136e0cb0a24656fa1531b537aa09e7fee82177826b1a7ca. Jan 29 16:18:54.601145 containerd[1573]: time="2025-01-29T16:18:54.601123005Z" level=info msg="CreateContainer within sandbox \"1dfba6e9fcaa553564da49af501662e9c377200c0b873df97535bf09685fb32a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f63e41caa7e49cf592e75689f1d4084fd9397d3b441e8a11ad92a7b5b7192227\"" Jan 29 16:18:54.602384 containerd[1573]: time="2025-01-29T16:18:54.602362912Z" level=info msg="StartContainer for \"f63e41caa7e49cf592e75689f1d4084fd9397d3b441e8a11ad92a7b5b7192227\"" Jan 29 16:18:54.622645 systemd[1]: Started cri-containerd-f63e41caa7e49cf592e75689f1d4084fd9397d3b441e8a11ad92a7b5b7192227.scope - libcontainer container f63e41caa7e49cf592e75689f1d4084fd9397d3b441e8a11ad92a7b5b7192227. 
Jan 29 16:18:54.721165 containerd[1573]: time="2025-01-29T16:18:54.721126896Z" level=info msg="StartContainer for \"f63e41caa7e49cf592e75689f1d4084fd9397d3b441e8a11ad92a7b5b7192227\" returns successfully" Jan 29 16:18:54.721306 containerd[1573]: time="2025-01-29T16:18:54.721129952Z" level=info msg="StartContainer for \"62cfb0390d4ebeafd136e0cb0a24656fa1531b537aa09e7fee82177826b1a7ca\" returns successfully" Jan 29 16:18:54.775687 kubelet[2887]: I0129 16:18:54.775534 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7x5fs" podStartSLOduration=22.775518428 podStartE2EDuration="22.775518428s" podCreationTimestamp="2025-01-29 16:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:18:54.775483919 +0000 UTC m=+36.683560062" watchObservedRunningTime="2025-01-29 16:18:54.775518428 +0000 UTC m=+36.683594572" Jan 29 16:18:54.786828 kubelet[2887]: I0129 16:18:54.786770 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ggdwt" podStartSLOduration=22.786549386 podStartE2EDuration="22.786549386s" podCreationTimestamp="2025-01-29 16:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:18:54.78608267 +0000 UTC m=+36.694158815" watchObservedRunningTime="2025-01-29 16:18:54.786549386 +0000 UTC m=+36.694625528" Jan 29 16:18:55.224214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3452100352.mount: Deactivated successfully. Jan 29 16:18:57.602750 systemd[1]: Started sshd@7-139.178.70.99:22-178.160.211.111:44322.service - OpenSSH per-connection server daemon (178.160.211.111:44322). Jan 29 16:18:57.779326 systemd[1]: Started sshd@8-139.178.70.99:22-178.160.211.111:44332.service - OpenSSH per-connection server daemon (178.160.211.111:44332). 
Jan 29 16:18:57.844638 sshd[4277]: Connection closed by 178.160.211.111 port 44322 Jan 29 16:18:57.845892 systemd[1]: sshd@7-139.178.70.99:22-178.160.211.111:44322.service: Deactivated successfully. Jan 29 16:18:58.849151 sshd[4279]: Connection closed by authenticating user root 178.160.211.111 port 44332 [preauth] Jan 29 16:18:58.850356 systemd[1]: sshd@8-139.178.70.99:22-178.160.211.111:44332.service: Deactivated successfully. Jan 29 16:18:59.061414 systemd[1]: Started sshd@9-139.178.70.99:22-178.160.211.111:36538.service - OpenSSH per-connection server daemon (178.160.211.111:36538). Jan 29 16:18:59.908424 sshd[4287]: Invalid user kafka from 178.160.211.111 port 36538 Jan 29 16:19:00.109446 sshd[4287]: Connection closed by invalid user kafka 178.160.211.111 port 36538 [preauth] Jan 29 16:19:00.110195 systemd[1]: sshd@9-139.178.70.99:22-178.160.211.111:36538.service: Deactivated successfully. Jan 29 16:19:00.321143 systemd[1]: Started sshd@10-139.178.70.99:22-178.160.211.111:36554.service - OpenSSH per-connection server daemon (178.160.211.111:36554). Jan 29 16:19:01.142643 sshd[4292]: Invalid user api from 178.160.211.111 port 36554 Jan 29 16:19:01.347878 sshd[4292]: Connection closed by invalid user api 178.160.211.111 port 36554 [preauth] Jan 29 16:19:01.348986 systemd[1]: sshd@10-139.178.70.99:22-178.160.211.111:36554.service: Deactivated successfully. Jan 29 16:19:01.559323 systemd[1]: Started sshd@11-139.178.70.99:22-178.160.211.111:36570.service - OpenSSH per-connection server daemon (178.160.211.111:36570). Jan 29 16:19:02.383513 sshd[4297]: Invalid user oracle from 178.160.211.111 port 36570 Jan 29 16:19:02.585440 sshd[4297]: Connection closed by invalid user oracle 178.160.211.111 port 36570 [preauth] Jan 29 16:19:02.587127 systemd[1]: sshd@11-139.178.70.99:22-178.160.211.111:36570.service: Deactivated successfully. 
Jan 29 16:19:02.799759 systemd[1]: Started sshd@12-139.178.70.99:22-178.160.211.111:36578.service - OpenSSH per-connection server daemon (178.160.211.111:36578). Jan 29 16:19:03.616885 sshd[4302]: Invalid user debian from 178.160.211.111 port 36578 Jan 29 16:19:03.818093 sshd[4302]: Connection closed by invalid user debian 178.160.211.111 port 36578 [preauth] Jan 29 16:19:03.819495 systemd[1]: sshd@12-139.178.70.99:22-178.160.211.111:36578.service: Deactivated successfully. Jan 29 16:19:04.029433 systemd[1]: Started sshd@13-139.178.70.99:22-178.160.211.111:36588.service - OpenSSH per-connection server daemon (178.160.211.111:36588). Jan 29 16:19:04.850667 sshd[4309]: Invalid user qemu from 178.160.211.111 port 36588 Jan 29 16:19:05.052150 sshd[4309]: Connection closed by invalid user qemu 178.160.211.111 port 36588 [preauth] Jan 29 16:19:05.053682 systemd[1]: sshd@13-139.178.70.99:22-178.160.211.111:36588.service: Deactivated successfully. Jan 29 16:19:05.276903 systemd[1]: Started sshd@14-139.178.70.99:22-178.160.211.111:36596.service - OpenSSH per-connection server daemon (178.160.211.111:36596). Jan 29 16:19:06.094488 sshd[4314]: Invalid user minecraft from 178.160.211.111 port 36596 Jan 29 16:19:06.295955 sshd[4314]: Connection closed by invalid user minecraft 178.160.211.111 port 36596 [preauth] Jan 29 16:19:06.297471 systemd[1]: sshd@14-139.178.70.99:22-178.160.211.111:36596.service: Deactivated successfully. Jan 29 16:19:06.506823 systemd[1]: Started sshd@15-139.178.70.99:22-178.160.211.111:36604.service - OpenSSH per-connection server daemon (178.160.211.111:36604). Jan 29 16:19:07.328206 sshd[4319]: Invalid user vyos from 178.160.211.111 port 36604 Jan 29 16:19:07.529328 sshd[4319]: Connection closed by invalid user vyos 178.160.211.111 port 36604 [preauth] Jan 29 16:19:07.530780 systemd[1]: sshd@15-139.178.70.99:22-178.160.211.111:36604.service: Deactivated successfully. 
Jan 29 16:19:07.740440 systemd[1]: Started sshd@16-139.178.70.99:22-178.160.211.111:36616.service - OpenSSH per-connection server daemon (178.160.211.111:36616). Jan 29 16:19:08.559753 sshd[4324]: Invalid user vyos from 178.160.211.111 port 36616 Jan 29 16:19:08.760641 sshd[4324]: Connection closed by invalid user vyos 178.160.211.111 port 36616 [preauth] Jan 29 16:19:08.762005 systemd[1]: sshd@16-139.178.70.99:22-178.160.211.111:36616.service: Deactivated successfully. Jan 29 16:19:08.979738 systemd[1]: Started sshd@17-139.178.70.99:22-178.160.211.111:49032.service - OpenSSH per-connection server daemon (178.160.211.111:49032). Jan 29 16:19:09.799893 sshd[4329]: Invalid user mcserver from 178.160.211.111 port 49032 Jan 29 16:19:10.001134 sshd[4329]: Connection closed by invalid user mcserver 178.160.211.111 port 49032 [preauth] Jan 29 16:19:10.002341 systemd[1]: sshd@17-139.178.70.99:22-178.160.211.111:49032.service: Deactivated successfully. Jan 29 16:19:10.209321 systemd[1]: Started sshd@18-139.178.70.99:22-178.160.211.111:49040.service - OpenSSH per-connection server daemon (178.160.211.111:49040). Jan 29 16:19:11.029424 sshd[4334]: Invalid user vyos from 178.160.211.111 port 49040 Jan 29 16:19:11.230174 sshd[4334]: Connection closed by invalid user vyos 178.160.211.111 port 49040 [preauth] Jan 29 16:19:11.231021 systemd[1]: sshd@18-139.178.70.99:22-178.160.211.111:49040.service: Deactivated successfully. Jan 29 16:19:11.442416 systemd[1]: Started sshd@19-139.178.70.99:22-178.160.211.111:49042.service - OpenSSH per-connection server daemon (178.160.211.111:49042). Jan 29 16:19:12.263739 sshd[4339]: Invalid user nagios from 178.160.211.111 port 49042 Jan 29 16:19:12.465275 sshd[4339]: Connection closed by invalid user nagios 178.160.211.111 port 49042 [preauth] Jan 29 16:19:12.466689 systemd[1]: sshd@19-139.178.70.99:22-178.160.211.111:49042.service: Deactivated successfully. 
Jan 29 16:19:12.674389 systemd[1]: Started sshd@20-139.178.70.99:22-178.160.211.111:49058.service - OpenSSH per-connection server daemon (178.160.211.111:49058). Jan 29 16:19:13.495178 sshd[4344]: Invalid user mcserver from 178.160.211.111 port 49058 Jan 29 16:19:13.695607 sshd[4344]: Connection closed by invalid user mcserver 178.160.211.111 port 49058 [preauth] Jan 29 16:19:13.696905 systemd[1]: sshd@20-139.178.70.99:22-178.160.211.111:49058.service: Deactivated successfully. Jan 29 16:19:13.903944 systemd[1]: Started sshd@21-139.178.70.99:22-178.160.211.111:49060.service - OpenSSH per-connection server daemon (178.160.211.111:49060). Jan 29 16:19:14.720342 sshd[4349]: Invalid user admin from 178.160.211.111 port 49060 Jan 29 16:19:14.919654 sshd[4349]: Connection closed by invalid user admin 178.160.211.111 port 49060 [preauth] Jan 29 16:19:14.920729 systemd[1]: sshd@21-139.178.70.99:22-178.160.211.111:49060.service: Deactivated successfully. Jan 29 16:19:15.130242 systemd[1]: Started sshd@22-139.178.70.99:22-178.160.211.111:49068.service - OpenSSH per-connection server daemon (178.160.211.111:49068). Jan 29 16:19:15.949859 sshd[4354]: Invalid user test from 178.160.211.111 port 49068 Jan 29 16:19:16.151072 sshd[4354]: Connection closed by invalid user test 178.160.211.111 port 49068 [preauth] Jan 29 16:19:16.152460 systemd[1]: sshd@22-139.178.70.99:22-178.160.211.111:49068.service: Deactivated successfully. Jan 29 16:19:16.362762 systemd[1]: Started sshd@23-139.178.70.99:22-178.160.211.111:49084.service - OpenSSH per-connection server daemon (178.160.211.111:49084). Jan 29 16:19:17.185431 sshd[4359]: Invalid user user from 178.160.211.111 port 49084 Jan 29 16:19:17.386605 sshd[4359]: Connection closed by invalid user user 178.160.211.111 port 49084 [preauth] Jan 29 16:19:17.387941 systemd[1]: sshd@23-139.178.70.99:22-178.160.211.111:49084.service: Deactivated successfully. 
Jan 29 16:19:17.597982 systemd[1]: Started sshd@24-139.178.70.99:22-178.160.211.111:49098.service - OpenSSH per-connection server daemon (178.160.211.111:49098). Jan 29 16:19:18.618870 sshd[4364]: Connection closed by authenticating user root 178.160.211.111 port 49098 [preauth] Jan 29 16:19:18.620274 systemd[1]: sshd@24-139.178.70.99:22-178.160.211.111:49098.service: Deactivated successfully. Jan 29 16:19:18.829913 systemd[1]: Started sshd@25-139.178.70.99:22-178.160.211.111:33434.service - OpenSSH per-connection server daemon (178.160.211.111:33434). Jan 29 16:19:19.650064 sshd[4371]: Invalid user admin from 178.160.211.111 port 33434 Jan 29 16:19:19.850756 sshd[4371]: Connection closed by invalid user admin 178.160.211.111 port 33434 [preauth] Jan 29 16:19:19.852160 systemd[1]: sshd@25-139.178.70.99:22-178.160.211.111:33434.service: Deactivated successfully. Jan 29 16:19:20.060710 systemd[1]: Started sshd@26-139.178.70.99:22-178.160.211.111:33440.service - OpenSSH per-connection server daemon (178.160.211.111:33440). Jan 29 16:19:20.881211 sshd[4376]: Invalid user admin from 178.160.211.111 port 33440 Jan 29 16:19:21.083658 sshd[4376]: Connection closed by invalid user admin 178.160.211.111 port 33440 [preauth] Jan 29 16:19:21.084833 systemd[1]: sshd@26-139.178.70.99:22-178.160.211.111:33440.service: Deactivated successfully. Jan 29 16:19:21.291636 systemd[1]: Started sshd@27-139.178.70.99:22-178.160.211.111:33452.service - OpenSSH per-connection server daemon (178.160.211.111:33452). Jan 29 16:19:22.107893 sshd[4381]: Invalid user api from 178.160.211.111 port 33452 Jan 29 16:19:22.308550 sshd[4381]: Connection closed by invalid user api 178.160.211.111 port 33452 [preauth] Jan 29 16:19:22.310011 systemd[1]: sshd@27-139.178.70.99:22-178.160.211.111:33452.service: Deactivated successfully. Jan 29 16:19:22.520108 systemd[1]: Started sshd@28-139.178.70.99:22-178.160.211.111:33454.service - OpenSSH per-connection server daemon (178.160.211.111:33454). 
Jan 29 16:19:23.341765 sshd[4386]: Invalid user cluster from 178.160.211.111 port 33454 Jan 29 16:19:23.543031 sshd[4386]: Connection closed by invalid user cluster 178.160.211.111 port 33454 [preauth] Jan 29 16:19:23.544430 systemd[1]: sshd@28-139.178.70.99:22-178.160.211.111:33454.service: Deactivated successfully. Jan 29 16:19:23.755742 systemd[1]: Started sshd@29-139.178.70.99:22-178.160.211.111:33458.service - OpenSSH per-connection server daemon (178.160.211.111:33458). Jan 29 16:19:24.570594 sshd[4391]: Invalid user admin from 178.160.211.111 port 33458 Jan 29 16:19:24.771626 sshd[4391]: Connection closed by invalid user admin 178.160.211.111 port 33458 [preauth] Jan 29 16:19:24.773033 systemd[1]: sshd@29-139.178.70.99:22-178.160.211.111:33458.service: Deactivated successfully. Jan 29 16:19:24.979390 systemd[1]: Started sshd@30-139.178.70.99:22-178.160.211.111:33468.service - OpenSSH per-connection server daemon (178.160.211.111:33468). Jan 29 16:19:26.013333 sshd[4396]: Connection closed by authenticating user root 178.160.211.111 port 33468 [preauth] Jan 29 16:19:26.014235 systemd[1]: sshd@30-139.178.70.99:22-178.160.211.111:33468.service: Deactivated successfully. Jan 29 16:19:26.221450 systemd[1]: Started sshd@31-139.178.70.99:22-178.160.211.111:33476.service - OpenSSH per-connection server daemon (178.160.211.111:33476). Jan 29 16:19:27.045612 sshd[4401]: Invalid user esuser from 178.160.211.111 port 33476 Jan 29 16:19:27.247251 sshd[4401]: Connection closed by invalid user esuser 178.160.211.111 port 33476 [preauth] Jan 29 16:19:27.248541 systemd[1]: sshd@31-139.178.70.99:22-178.160.211.111:33476.service: Deactivated successfully. Jan 29 16:19:27.455772 systemd[1]: Started sshd@32-139.178.70.99:22-178.160.211.111:33486.service - OpenSSH per-connection server daemon (178.160.211.111:33486). 
Jan 29 16:19:28.475571 sshd[4406]: Connection closed by authenticating user root 178.160.211.111 port 33486 [preauth]
Jan 29 16:19:28.477267 systemd[1]: sshd@32-139.178.70.99:22-178.160.211.111:33486.service: Deactivated successfully.
Jan 29 16:19:28.681160 systemd[1]: Started sshd@33-139.178.70.99:22-178.160.211.111:39836.service - OpenSSH per-connection server daemon (178.160.211.111:39836).
Jan 29 16:19:29.571248 sshd[4411]: Invalid user deploy from 178.160.211.111 port 39836
Jan 29 16:19:29.770273 sshd[4411]: Connection closed by invalid user deploy 178.160.211.111 port 39836 [preauth]
Jan 29 16:19:29.771353 systemd[1]: sshd@33-139.178.70.99:22-178.160.211.111:39836.service: Deactivated successfully.
Jan 29 16:19:29.981316 systemd[1]: Started sshd@34-139.178.70.99:22-178.160.211.111:39846.service - OpenSSH per-connection server daemon (178.160.211.111:39846).
Jan 29 16:19:30.797828 sshd[4416]: Invalid user ansible from 178.160.211.111 port 39846
Jan 29 16:19:30.999229 sshd[4416]: Connection closed by invalid user ansible 178.160.211.111 port 39846 [preauth]
Jan 29 16:19:31.000742 systemd[1]: sshd@34-139.178.70.99:22-178.160.211.111:39846.service: Deactivated successfully.
Jan 29 16:19:31.207026 systemd[1]: Started sshd@35-139.178.70.99:22-178.160.211.111:39852.service - OpenSSH per-connection server daemon (178.160.211.111:39852).
Jan 29 16:19:32.023735 sshd[4421]: Invalid user usr from 178.160.211.111 port 39852
Jan 29 16:19:32.224022 sshd[4421]: Connection closed by invalid user usr 178.160.211.111 port 39852 [preauth]
Jan 29 16:19:32.225128 systemd[1]: sshd@35-139.178.70.99:22-178.160.211.111:39852.service: Deactivated successfully.
Jan 29 16:19:32.431936 systemd[1]: Started sshd@36-139.178.70.99:22-178.160.211.111:39854.service - OpenSSH per-connection server daemon (178.160.211.111:39854).
Jan 29 16:19:33.255085 sshd[4426]: Invalid user oracle from 178.160.211.111 port 39854
Jan 29 16:19:33.455542 sshd[4426]: Connection closed by invalid user oracle 178.160.211.111 port 39854 [preauth]
Jan 29 16:19:33.456699 systemd[1]: sshd@36-139.178.70.99:22-178.160.211.111:39854.service: Deactivated successfully.
Jan 29 16:19:33.673773 systemd[1]: Started sshd@37-139.178.70.99:22-178.160.211.111:39862.service - OpenSSH per-connection server daemon (178.160.211.111:39862).
Jan 29 16:19:34.494153 sshd[4432]: Invalid user kafka from 178.160.211.111 port 39862
Jan 29 16:19:34.695328 sshd[4432]: Connection closed by invalid user kafka 178.160.211.111 port 39862 [preauth]
Jan 29 16:19:34.696721 systemd[1]: sshd@37-139.178.70.99:22-178.160.211.111:39862.service: Deactivated successfully.
Jan 29 16:19:34.904869 systemd[1]: Started sshd@38-139.178.70.99:22-178.160.211.111:39864.service - OpenSSH per-connection server daemon (178.160.211.111:39864).
Jan 29 16:19:35.926102 sshd[4439]: Connection closed by authenticating user root 178.160.211.111 port 39864 [preauth]
Jan 29 16:19:35.927793 systemd[1]: sshd@38-139.178.70.99:22-178.160.211.111:39864.service: Deactivated successfully.
Jan 29 16:19:36.135432 systemd[1]: Started sshd@39-139.178.70.99:22-178.160.211.111:39876.service - OpenSSH per-connection server daemon (178.160.211.111:39876).
Jan 29 16:19:36.976750 sshd[4444]: Invalid user vyos from 178.160.211.111 port 39876
Jan 29 16:19:37.177359 sshd[4444]: Connection closed by invalid user vyos 178.160.211.111 port 39876 [preauth]
Jan 29 16:19:37.178855 systemd[1]: sshd@39-139.178.70.99:22-178.160.211.111:39876.service: Deactivated successfully.
Jan 29 16:19:37.386708 systemd[1]: Started sshd@40-139.178.70.99:22-178.160.211.111:39880.service - OpenSSH per-connection server daemon (178.160.211.111:39880).
Jan 29 16:19:38.210110 sshd[4449]: Invalid user kubelet from 178.160.211.111 port 39880
Jan 29 16:19:38.411582 sshd[4449]: Connection closed by invalid user kubelet 178.160.211.111 port 39880 [preauth]
Jan 29 16:19:38.412438 systemd[1]: sshd@40-139.178.70.99:22-178.160.211.111:39880.service: Deactivated successfully.
Jan 29 16:19:38.621388 systemd[1]: Started sshd@41-139.178.70.99:22-178.160.211.111:60612.service - OpenSSH per-connection server daemon (178.160.211.111:60612).
Jan 29 16:19:39.481551 sshd[4454]: Invalid user olm from 178.160.211.111 port 60612
Jan 29 16:19:39.682441 sshd[4454]: Connection closed by invalid user olm 178.160.211.111 port 60612 [preauth]
Jan 29 16:19:39.683888 systemd[1]: sshd@41-139.178.70.99:22-178.160.211.111:60612.service: Deactivated successfully.
Jan 29 16:19:39.893491 systemd[1]: Started sshd@42-139.178.70.99:22-178.160.211.111:60626.service - OpenSSH per-connection server daemon (178.160.211.111:60626).
Jan 29 16:19:40.711866 sshd[4459]: Invalid user kafka from 178.160.211.111 port 60626
Jan 29 16:19:40.913337 sshd[4459]: Connection closed by invalid user kafka 178.160.211.111 port 60626 [preauth]
Jan 29 16:19:40.914505 systemd[1]: sshd@42-139.178.70.99:22-178.160.211.111:60626.service: Deactivated successfully.
Jan 29 16:19:41.124391 systemd[1]: Started sshd@43-139.178.70.99:22-178.160.211.111:60634.service - OpenSSH per-connection server daemon (178.160.211.111:60634).
Jan 29 16:19:41.972337 sshd[4464]: Invalid user apiserver from 178.160.211.111 port 60634
Jan 29 16:19:42.173617 sshd[4464]: Connection closed by invalid user apiserver 178.160.211.111 port 60634 [preauth]
Jan 29 16:19:42.175154 systemd[1]: sshd@43-139.178.70.99:22-178.160.211.111:60634.service: Deactivated successfully.
Jan 29 16:19:42.383411 systemd[1]: Started sshd@44-139.178.70.99:22-178.160.211.111:60648.service - OpenSSH per-connection server daemon (178.160.211.111:60648).
Jan 29 16:19:42.471380 systemd[1]: Started sshd@45-139.178.70.99:22-139.178.89.65:54422.service - OpenSSH per-connection server daemon (139.178.89.65:54422).
Jan 29 16:19:42.507249 sshd[4472]: Accepted publickey for core from 139.178.89.65 port 54422 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:19:42.510114 sshd-session[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:19:42.513325 systemd-logind[1549]: New session 10 of user core.
Jan 29 16:19:42.519666 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 16:19:43.220857 sshd[4469]: Invalid user admin from 178.160.211.111 port 60648
Jan 29 16:19:43.296535 sshd[4474]: Connection closed by 139.178.89.65 port 54422
Jan 29 16:19:43.296756 sshd-session[4472]: pam_unix(sshd:session): session closed for user core
Jan 29 16:19:43.299365 systemd-logind[1549]: Session 10 logged out. Waiting for processes to exit.
Jan 29 16:19:43.299804 systemd[1]: sshd@45-139.178.70.99:22-139.178.89.65:54422.service: Deactivated successfully.
Jan 29 16:19:43.301081 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 16:19:43.301742 systemd-logind[1549]: Removed session 10.
Jan 29 16:19:43.421589 sshd[4469]: Connection closed by invalid user admin 178.160.211.111 port 60648 [preauth]
Jan 29 16:19:43.422635 systemd[1]: sshd@44-139.178.70.99:22-178.160.211.111:60648.service: Deactivated successfully.
Jan 29 16:19:43.637351 systemd[1]: Started sshd@46-139.178.70.99:22-178.160.211.111:60652.service - OpenSSH per-connection server daemon (178.160.211.111:60652).
Jan 29 16:19:44.561240 sshd[4489]: Invalid user kvm from 178.160.211.111 port 60652
Jan 29 16:19:44.762480 sshd[4489]: Connection closed by invalid user kvm 178.160.211.111 port 60652 [preauth]
Jan 29 16:19:44.763648 systemd[1]: sshd@46-139.178.70.99:22-178.160.211.111:60652.service: Deactivated successfully.
Jan 29 16:19:44.971452 systemd[1]: Started sshd@47-139.178.70.99:22-178.160.211.111:60666.service - OpenSSH per-connection server daemon (178.160.211.111:60666).
Jan 29 16:19:45.789992 sshd[4494]: Invalid user odoo from 178.160.211.111 port 60666
Jan 29 16:19:45.991625 sshd[4494]: Connection closed by invalid user odoo 178.160.211.111 port 60666 [preauth]
Jan 29 16:19:45.992836 systemd[1]: sshd@47-139.178.70.99:22-178.160.211.111:60666.service: Deactivated successfully.
Jan 29 16:19:46.205884 systemd[1]: Started sshd@48-139.178.70.99:22-178.160.211.111:60682.service - OpenSSH per-connection server daemon (178.160.211.111:60682).
Jan 29 16:19:47.036179 sshd[4499]: Invalid user debian from 178.160.211.111 port 60682
Jan 29 16:19:47.237988 sshd[4499]: Connection closed by invalid user debian 178.160.211.111 port 60682 [preauth]
Jan 29 16:19:47.239406 systemd[1]: sshd@48-139.178.70.99:22-178.160.211.111:60682.service: Deactivated successfully.
Jan 29 16:19:47.455029 systemd[1]: Started sshd@49-139.178.70.99:22-178.160.211.111:60694.service - OpenSSH per-connection server daemon (178.160.211.111:60694).
Jan 29 16:19:48.276839 sshd[4504]: Invalid user oracle from 178.160.211.111 port 60694
Jan 29 16:19:48.306736 systemd[1]: Started sshd@50-139.178.70.99:22-139.178.89.65:54430.service - OpenSSH per-connection server daemon (139.178.89.65:54430).
Jan 29 16:19:48.349591 sshd[4507]: Accepted publickey for core from 139.178.89.65 port 54430 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:19:48.350618 sshd-session[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:19:48.353326 systemd-logind[1549]: New session 11 of user core.
Jan 29 16:19:48.358743 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 16:19:48.479206 sshd[4504]: Connection closed by invalid user oracle 178.160.211.111 port 60694 [preauth]
Jan 29 16:19:48.479996 systemd[1]: sshd@49-139.178.70.99:22-178.160.211.111:60694.service: Deactivated successfully.
Jan 29 16:19:48.721672 sshd[4509]: Connection closed by 139.178.89.65 port 54430
Jan 29 16:19:48.722466 sshd-session[4507]: pam_unix(sshd:session): session closed for user core
Jan 29 16:19:48.727964 systemd[1]: sshd@50-139.178.70.99:22-139.178.89.65:54430.service: Deactivated successfully.
Jan 29 16:19:48.729463 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 16:19:48.730132 systemd-logind[1549]: Session 11 logged out. Waiting for processes to exit.
Jan 29 16:19:48.731169 systemd-logind[1549]: Removed session 11.
Jan 29 16:19:53.731861 systemd[1]: Started sshd@51-139.178.70.99:22-139.178.89.65:56242.service - OpenSSH per-connection server daemon (139.178.89.65:56242).
Jan 29 16:19:53.772119 sshd[4524]: Accepted publickey for core from 139.178.89.65 port 56242 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:19:53.772869 sshd-session[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:19:53.776133 systemd-logind[1549]: New session 12 of user core.
Jan 29 16:19:53.785643 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 16:19:53.874208 sshd[4526]: Connection closed by 139.178.89.65 port 56242
Jan 29 16:19:53.874942 sshd-session[4524]: pam_unix(sshd:session): session closed for user core
Jan 29 16:19:53.876890 systemd[1]: sshd@51-139.178.70.99:22-139.178.89.65:56242.service: Deactivated successfully.
Jan 29 16:19:53.878021 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 16:19:53.878469 systemd-logind[1549]: Session 12 logged out. Waiting for processes to exit.
Jan 29 16:19:53.879160 systemd-logind[1549]: Removed session 12.
Jan 29 16:19:58.883837 systemd[1]: Started sshd@52-139.178.70.99:22-139.178.89.65:56246.service - OpenSSH per-connection server daemon (139.178.89.65:56246).
Jan 29 16:19:58.933217 sshd[4538]: Accepted publickey for core from 139.178.89.65 port 56246 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:19:58.934166 sshd-session[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:19:58.937223 systemd-logind[1549]: New session 13 of user core.
Jan 29 16:19:58.944718 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 16:19:59.069897 sshd[4540]: Connection closed by 139.178.89.65 port 56246
Jan 29 16:19:59.070474 sshd-session[4538]: pam_unix(sshd:session): session closed for user core
Jan 29 16:19:59.077995 systemd[1]: sshd@52-139.178.70.99:22-139.178.89.65:56246.service: Deactivated successfully.
Jan 29 16:19:59.079078 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 16:19:59.079973 systemd-logind[1549]: Session 13 logged out. Waiting for processes to exit.
Jan 29 16:19:59.083771 systemd[1]: Started sshd@53-139.178.70.99:22-139.178.89.65:56258.service - OpenSSH per-connection server daemon (139.178.89.65:56258).
Jan 29 16:19:59.084855 systemd-logind[1549]: Removed session 13.
Jan 29 16:19:59.123020 sshd[4551]: Accepted publickey for core from 139.178.89.65 port 56258 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:19:59.124334 sshd-session[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:19:59.127605 systemd-logind[1549]: New session 14 of user core.
Jan 29 16:19:59.135703 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 16:19:59.316267 sshd[4554]: Connection closed by 139.178.89.65 port 56258
Jan 29 16:19:59.317238 sshd-session[4551]: pam_unix(sshd:session): session closed for user core
Jan 29 16:19:59.328009 systemd[1]: sshd@53-139.178.70.99:22-139.178.89.65:56258.service: Deactivated successfully.
Jan 29 16:19:59.329490 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 16:19:59.330283 systemd-logind[1549]: Session 14 logged out. Waiting for processes to exit.
Jan 29 16:19:59.337635 systemd[1]: Started sshd@54-139.178.70.99:22-139.178.89.65:56268.service - OpenSSH per-connection server daemon (139.178.89.65:56268).
Jan 29 16:19:59.339256 systemd-logind[1549]: Removed session 14.
Jan 29 16:19:59.374750 sshd[4562]: Accepted publickey for core from 139.178.89.65 port 56268 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:19:59.375738 sshd-session[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:19:59.378819 systemd-logind[1549]: New session 15 of user core.
Jan 29 16:19:59.387789 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 16:19:59.546584 sshd[4565]: Connection closed by 139.178.89.65 port 56268
Jan 29 16:19:59.548108 sshd-session[4562]: pam_unix(sshd:session): session closed for user core
Jan 29 16:19:59.555071 systemd[1]: sshd@54-139.178.70.99:22-139.178.89.65:56268.service: Deactivated successfully.
Jan 29 16:19:59.556847 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 16:19:59.558003 systemd-logind[1549]: Session 15 logged out. Waiting for processes to exit.
Jan 29 16:19:59.558917 systemd-logind[1549]: Removed session 15.
Jan 29 16:20:04.561757 systemd[1]: Started sshd@55-139.178.70.99:22-139.178.89.65:42894.service - OpenSSH per-connection server daemon (139.178.89.65:42894).
Jan 29 16:20:04.753949 sshd[4579]: Accepted publickey for core from 139.178.89.65 port 42894 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:20:04.755046 sshd-session[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:04.764945 systemd-logind[1549]: New session 16 of user core.
Jan 29 16:20:04.772670 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 16:20:04.919444 sshd[4581]: Connection closed by 139.178.89.65 port 42894
Jan 29 16:20:04.919781 sshd-session[4579]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:04.928722 systemd[1]: sshd@55-139.178.70.99:22-139.178.89.65:42894.service: Deactivated successfully.
Jan 29 16:20:04.929976 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 16:20:04.931080 systemd-logind[1549]: Session 16 logged out. Waiting for processes to exit.
Jan 29 16:20:04.931903 systemd-logind[1549]: Removed session 16.
Jan 29 16:20:09.929632 systemd[1]: Started sshd@56-139.178.70.99:22-139.178.89.65:42898.service - OpenSSH per-connection server daemon (139.178.89.65:42898).
Jan 29 16:20:09.972547 sshd[4592]: Accepted publickey for core from 139.178.89.65 port 42898 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:20:09.973276 sshd-session[4592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:09.976899 systemd-logind[1549]: New session 17 of user core.
Jan 29 16:20:09.982639 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 16:20:10.085714 sshd[4594]: Connection closed by 139.178.89.65 port 42898
Jan 29 16:20:10.086184 sshd-session[4592]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:10.094914 systemd[1]: sshd@56-139.178.70.99:22-139.178.89.65:42898.service: Deactivated successfully.
Jan 29 16:20:10.096153 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 16:20:10.096725 systemd-logind[1549]: Session 17 logged out. Waiting for processes to exit.
Jan 29 16:20:10.101921 systemd[1]: Started sshd@57-139.178.70.99:22-139.178.89.65:42906.service - OpenSSH per-connection server daemon (139.178.89.65:42906).
Jan 29 16:20:10.104417 systemd-logind[1549]: Removed session 17.
Jan 29 16:20:10.134307 sshd[4604]: Accepted publickey for core from 139.178.89.65 port 42906 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:20:10.135254 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:10.138613 systemd-logind[1549]: New session 18 of user core.
Jan 29 16:20:10.150683 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 16:20:10.462158 sshd[4607]: Connection closed by 139.178.89.65 port 42906
Jan 29 16:20:10.463028 sshd-session[4604]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:10.470438 systemd[1]: sshd@57-139.178.70.99:22-139.178.89.65:42906.service: Deactivated successfully.
Jan 29 16:20:10.472127 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 16:20:10.472710 systemd-logind[1549]: Session 18 logged out. Waiting for processes to exit.
Jan 29 16:20:10.476793 systemd[1]: Started sshd@58-139.178.70.99:22-139.178.89.65:42920.service - OpenSSH per-connection server daemon (139.178.89.65:42920).
Jan 29 16:20:10.477894 systemd-logind[1549]: Removed session 18.
Jan 29 16:20:10.521760 sshd[4616]: Accepted publickey for core from 139.178.89.65 port 42920 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:20:10.522572 sshd-session[4616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:10.525589 systemd-logind[1549]: New session 19 of user core.
Jan 29 16:20:10.528651 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 16:20:16.693019 sshd[4619]: Connection closed by 139.178.89.65 port 42920
Jan 29 16:20:16.692910 sshd-session[4616]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:16.705197 systemd[1]: Started sshd@59-139.178.70.99:22-139.178.89.65:55256.service - OpenSSH per-connection server daemon (139.178.89.65:55256).
Jan 29 16:20:16.706924 systemd[1]: sshd@58-139.178.70.99:22-139.178.89.65:42920.service: Deactivated successfully.
Jan 29 16:20:16.713782 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 16:20:16.715850 systemd-logind[1549]: Session 19 logged out. Waiting for processes to exit.
Jan 29 16:20:16.720222 systemd-logind[1549]: Removed session 19.
Jan 29 16:20:16.751697 sshd[4637]: Accepted publickey for core from 139.178.89.65 port 55256 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:20:16.752634 sshd-session[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:16.755883 systemd-logind[1549]: New session 20 of user core.
Jan 29 16:20:16.759649 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 16:20:16.965819 sshd[4643]: Connection closed by 139.178.89.65 port 55256
Jan 29 16:20:16.966678 sshd-session[4637]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:16.974813 systemd[1]: sshd@59-139.178.70.99:22-139.178.89.65:55256.service: Deactivated successfully.
Jan 29 16:20:16.976886 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 16:20:16.978339 systemd-logind[1549]: Session 20 logged out. Waiting for processes to exit.
Jan 29 16:20:16.982764 systemd[1]: Started sshd@60-139.178.70.99:22-139.178.89.65:55270.service - OpenSSH per-connection server daemon (139.178.89.65:55270).
Jan 29 16:20:16.983621 systemd-logind[1549]: Removed session 20.
Jan 29 16:20:17.015660 sshd[4651]: Accepted publickey for core from 139.178.89.65 port 55270 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:20:17.016493 sshd-session[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:17.019707 systemd-logind[1549]: New session 21 of user core.
Jan 29 16:20:17.025724 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 16:20:17.113216 sshd[4654]: Connection closed by 139.178.89.65 port 55270
Jan 29 16:20:17.113946 sshd-session[4651]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:17.116385 systemd[1]: sshd@60-139.178.70.99:22-139.178.89.65:55270.service: Deactivated successfully.
Jan 29 16:20:17.117839 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 16:20:17.118719 systemd-logind[1549]: Session 21 logged out. Waiting for processes to exit.
Jan 29 16:20:17.119323 systemd-logind[1549]: Removed session 21.
Jan 29 16:20:22.124454 systemd[1]: Started sshd@61-139.178.70.99:22-139.178.89.65:38098.service - OpenSSH per-connection server daemon (139.178.89.65:38098).
Jan 29 16:20:22.162584 sshd[4671]: Accepted publickey for core from 139.178.89.65 port 38098 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:20:22.163583 sshd-session[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:22.166039 systemd-logind[1549]: New session 22 of user core.
Jan 29 16:20:22.170648 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 16:20:22.260547 sshd[4673]: Connection closed by 139.178.89.65 port 38098
Jan 29 16:20:22.261290 sshd-session[4671]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:22.262937 systemd-logind[1549]: Session 22 logged out. Waiting for processes to exit.
Jan 29 16:20:22.263049 systemd[1]: sshd@61-139.178.70.99:22-139.178.89.65:38098.service: Deactivated successfully.
Jan 29 16:20:22.264266 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 16:20:22.265173 systemd-logind[1549]: Removed session 22.
Jan 29 16:20:27.270636 systemd[1]: Started sshd@62-139.178.70.99:22-139.178.89.65:38112.service - OpenSSH per-connection server daemon (139.178.89.65:38112).
Jan 29 16:20:27.305580 sshd[4685]: Accepted publickey for core from 139.178.89.65 port 38112 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:20:27.306310 sshd-session[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:27.309659 systemd-logind[1549]: New session 23 of user core.
Jan 29 16:20:27.318643 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 16:20:27.407819 sshd[4687]: Connection closed by 139.178.89.65 port 38112
Jan 29 16:20:27.408698 sshd-session[4685]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:27.410681 systemd-logind[1549]: Session 23 logged out. Waiting for processes to exit.
Jan 29 16:20:27.410781 systemd[1]: sshd@62-139.178.70.99:22-139.178.89.65:38112.service: Deactivated successfully.
Jan 29 16:20:27.411930 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 16:20:27.412511 systemd-logind[1549]: Removed session 23.
Jan 29 16:20:32.418418 systemd[1]: Started sshd@63-139.178.70.99:22-139.178.89.65:39662.service - OpenSSH per-connection server daemon (139.178.89.65:39662).
Jan 29 16:20:32.453236 sshd[4699]: Accepted publickey for core from 139.178.89.65 port 39662 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:20:32.453923 sshd-session[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:32.456500 systemd-logind[1549]: New session 24 of user core.
Jan 29 16:20:32.459668 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 16:20:32.551209 sshd[4701]: Connection closed by 139.178.89.65 port 39662
Jan 29 16:20:32.551811 sshd-session[4699]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:32.559071 systemd[1]: sshd@63-139.178.70.99:22-139.178.89.65:39662.service: Deactivated successfully.
Jan 29 16:20:32.560199 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 16:20:32.561093 systemd-logind[1549]: Session 24 logged out. Waiting for processes to exit.
Jan 29 16:20:32.565735 systemd[1]: Started sshd@64-139.178.70.99:22-139.178.89.65:39674.service - OpenSSH per-connection server daemon (139.178.89.65:39674).
Jan 29 16:20:32.566781 systemd-logind[1549]: Removed session 24.
Jan 29 16:20:32.597262 sshd[4711]: Accepted publickey for core from 139.178.89.65 port 39674 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:20:32.598092 sshd-session[4711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:32.601064 systemd-logind[1549]: New session 25 of user core.
Jan 29 16:20:32.604641 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 16:20:33.970507 containerd[1573]: time="2025-01-29T16:20:33.970459508Z" level=info msg="StopContainer for \"f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec\" with timeout 30 (s)"
Jan 29 16:20:33.980107 containerd[1573]: time="2025-01-29T16:20:33.980029730Z" level=info msg="Stop container \"f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec\" with signal terminated"
Jan 29 16:20:34.005686 systemd[1]: cri-containerd-f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec.scope: Deactivated successfully.
Jan 29 16:20:34.016641 containerd[1573]: time="2025-01-29T16:20:34.016578202Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 16:20:34.023080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec-rootfs.mount: Deactivated successfully.
Jan 29 16:20:34.023666 containerd[1573]: time="2025-01-29T16:20:34.023083076Z" level=info msg="shim disconnected" id=f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec namespace=k8s.io
Jan 29 16:20:34.023666 containerd[1573]: time="2025-01-29T16:20:34.023588528Z" level=warning msg="cleaning up after shim disconnected" id=f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec namespace=k8s.io
Jan 29 16:20:34.023666 containerd[1573]: time="2025-01-29T16:20:34.023598294Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:20:34.025365 containerd[1573]: time="2025-01-29T16:20:34.025348046Z" level=info msg="StopContainer for \"9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91\" with timeout 2 (s)"
Jan 29 16:20:34.025636 containerd[1573]: time="2025-01-29T16:20:34.025586255Z" level=info msg="Stop container \"9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91\" with signal terminated"
Jan 29 16:20:34.031371 systemd-networkd[1478]: lxc_health: Link DOWN
Jan 29 16:20:34.031819 systemd-networkd[1478]: lxc_health: Lost carrier
Jan 29 16:20:34.045769 containerd[1573]: time="2025-01-29T16:20:34.045746690Z" level=info msg="StopContainer for \"f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec\" returns successfully"
Jan 29 16:20:34.046319 containerd[1573]: time="2025-01-29T16:20:34.046298964Z" level=info msg="StopPodSandbox for \"531c6d63f8ec98e0919ce2813cf695cd4ef74cae59bcc5eadf27b2382028d3ac\""
Jan 29 16:20:34.049929 systemd[1]: cri-containerd-9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91.scope: Deactivated successfully.
Jan 29 16:20:34.050268 systemd[1]: cri-containerd-9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91.scope: Consumed 4.469s CPU time, 185.9M memory peak, 66.2M read from disk, 13.3M written to disk.
Jan 29 16:20:34.059783 containerd[1573]: time="2025-01-29T16:20:34.049688522Z" level=info msg="Container to stop \"f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:20:34.054673 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-531c6d63f8ec98e0919ce2813cf695cd4ef74cae59bcc5eadf27b2382028d3ac-shm.mount: Deactivated successfully.
Jan 29 16:20:34.061639 systemd[1]: cri-containerd-531c6d63f8ec98e0919ce2813cf695cd4ef74cae59bcc5eadf27b2382028d3ac.scope: Deactivated successfully.
Jan 29 16:20:34.068161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91-rootfs.mount: Deactivated successfully.
Jan 29 16:20:34.073897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-531c6d63f8ec98e0919ce2813cf695cd4ef74cae59bcc5eadf27b2382028d3ac-rootfs.mount: Deactivated successfully.
Jan 29 16:20:34.112825 containerd[1573]: time="2025-01-29T16:20:34.112617868Z" level=info msg="shim disconnected" id=531c6d63f8ec98e0919ce2813cf695cd4ef74cae59bcc5eadf27b2382028d3ac namespace=k8s.io
Jan 29 16:20:34.112946 containerd[1573]: time="2025-01-29T16:20:34.112826701Z" level=warning msg="cleaning up after shim disconnected" id=531c6d63f8ec98e0919ce2813cf695cd4ef74cae59bcc5eadf27b2382028d3ac namespace=k8s.io
Jan 29 16:20:34.112946 containerd[1573]: time="2025-01-29T16:20:34.112839167Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:20:34.113641 containerd[1573]: time="2025-01-29T16:20:34.113597866Z" level=info msg="shim disconnected" id=9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91 namespace=k8s.io
Jan 29 16:20:34.113727 containerd[1573]: time="2025-01-29T16:20:34.113711529Z" level=warning msg="cleaning up after shim disconnected" id=9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91 namespace=k8s.io
Jan 29 16:20:34.113855 containerd[1573]: time="2025-01-29T16:20:34.113772650Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:20:34.124611 containerd[1573]: time="2025-01-29T16:20:34.124582089Z" level=info msg="TearDown network for sandbox \"531c6d63f8ec98e0919ce2813cf695cd4ef74cae59bcc5eadf27b2382028d3ac\" successfully"
Jan 29 16:20:34.124611 containerd[1573]: time="2025-01-29T16:20:34.124604156Z" level=info msg="StopPodSandbox for \"531c6d63f8ec98e0919ce2813cf695cd4ef74cae59bcc5eadf27b2382028d3ac\" returns successfully"
Jan 29 16:20:34.133426 containerd[1573]: time="2025-01-29T16:20:34.133349747Z" level=info msg="StopContainer for \"9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91\" returns successfully"
Jan 29 16:20:34.133746 containerd[1573]: time="2025-01-29T16:20:34.133599154Z" level=info msg="StopPodSandbox for \"427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537\""
Jan 29 16:20:34.133746 containerd[1573]: time="2025-01-29T16:20:34.133618068Z" level=info msg="Container to stop \"41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:20:34.133746 containerd[1573]: time="2025-01-29T16:20:34.133638893Z" level=info msg="Container to stop \"9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:20:34.133746 containerd[1573]: time="2025-01-29T16:20:34.133644082Z" level=info msg="Container to stop \"324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:20:34.133746 containerd[1573]: time="2025-01-29T16:20:34.133649578Z" level=info msg="Container to stop \"67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:20:34.133746 containerd[1573]: time="2025-01-29T16:20:34.133655880Z" level=info msg="Container to stop \"f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:20:34.138480 systemd[1]: cri-containerd-427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537.scope: Deactivated successfully.
Jan 29 16:20:34.199375 kubelet[2887]: I0129 16:20:34.199346 2887 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/152c5eba-02f2-4052-8131-dc96254cb7fe-cilium-config-path\") pod \"152c5eba-02f2-4052-8131-dc96254cb7fe\" (UID: \"152c5eba-02f2-4052-8131-dc96254cb7fe\") " Jan 29 16:20:34.199754 kubelet[2887]: I0129 16:20:34.199385 2887 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5c2c\" (UniqueName: \"kubernetes.io/projected/152c5eba-02f2-4052-8131-dc96254cb7fe-kube-api-access-r5c2c\") pod \"152c5eba-02f2-4052-8131-dc96254cb7fe\" (UID: \"152c5eba-02f2-4052-8131-dc96254cb7fe\") " Jan 29 16:20:34.207225 containerd[1573]: time="2025-01-29T16:20:34.207142341Z" level=info msg="shim disconnected" id=427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537 namespace=k8s.io Jan 29 16:20:34.207225 containerd[1573]: time="2025-01-29T16:20:34.207188470Z" level=warning msg="cleaning up after shim disconnected" id=427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537 namespace=k8s.io Jan 29 16:20:34.207225 containerd[1573]: time="2025-01-29T16:20:34.207195414Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:20:34.219367 containerd[1573]: time="2025-01-29T16:20:34.219333210Z" level=info msg="TearDown network for sandbox \"427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537\" successfully" Jan 29 16:20:34.219367 containerd[1573]: time="2025-01-29T16:20:34.219357312Z" level=info msg="StopPodSandbox for \"427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537\" returns successfully" Jan 29 16:20:34.265505 kubelet[2887]: I0129 16:20:34.242164 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/152c5eba-02f2-4052-8131-dc96254cb7fe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "152c5eba-02f2-4052-8131-dc96254cb7fe" 
(UID: "152c5eba-02f2-4052-8131-dc96254cb7fe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:20:34.266384 kubelet[2887]: I0129 16:20:34.246062 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/152c5eba-02f2-4052-8131-dc96254cb7fe-kube-api-access-r5c2c" (OuterVolumeSpecName: "kube-api-access-r5c2c") pod "152c5eba-02f2-4052-8131-dc96254cb7fe" (UID: "152c5eba-02f2-4052-8131-dc96254cb7fe"). InnerVolumeSpecName "kube-api-access-r5c2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:20:34.293366 systemd[1]: Removed slice kubepods-besteffort-pod152c5eba_02f2_4052_8131_dc96254cb7fe.slice - libcontainer container kubepods-besteffort-pod152c5eba_02f2_4052_8131_dc96254cb7fe.slice. Jan 29 16:20:34.300440 kubelet[2887]: I0129 16:20:34.300192 2887 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qslsv\" (UniqueName: \"kubernetes.io/projected/39785176-f5d8-401f-82c9-a03a804d4538-kube-api-access-qslsv\") pod \"39785176-f5d8-401f-82c9-a03a804d4538\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " Jan 29 16:20:34.300440 kubelet[2887]: I0129 16:20:34.300219 2887 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39785176-f5d8-401f-82c9-a03a804d4538-hubble-tls\") pod \"39785176-f5d8-401f-82c9-a03a804d4538\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " Jan 29 16:20:34.300440 kubelet[2887]: I0129 16:20:34.300236 2887 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39785176-f5d8-401f-82c9-a03a804d4538-clustermesh-secrets\") pod \"39785176-f5d8-401f-82c9-a03a804d4538\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " Jan 29 16:20:34.300440 kubelet[2887]: I0129 16:20:34.300248 2887 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-host-proc-sys-kernel\") pod \"39785176-f5d8-401f-82c9-a03a804d4538\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " Jan 29 16:20:34.300440 kubelet[2887]: I0129 16:20:34.300263 2887 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39785176-f5d8-401f-82c9-a03a804d4538-cilium-config-path\") pod \"39785176-f5d8-401f-82c9-a03a804d4538\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " Jan 29 16:20:34.300440 kubelet[2887]: I0129 16:20:34.300280 2887 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-bpf-maps\") pod \"39785176-f5d8-401f-82c9-a03a804d4538\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " Jan 29 16:20:34.300670 kubelet[2887]: I0129 16:20:34.300292 2887 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-etc-cni-netd\") pod \"39785176-f5d8-401f-82c9-a03a804d4538\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " Jan 29 16:20:34.300670 kubelet[2887]: I0129 16:20:34.300303 2887 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-host-proc-sys-net\") pod \"39785176-f5d8-401f-82c9-a03a804d4538\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " Jan 29 16:20:34.300670 kubelet[2887]: I0129 16:20:34.300313 2887 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-hostproc\") pod \"39785176-f5d8-401f-82c9-a03a804d4538\" (UID: 
\"39785176-f5d8-401f-82c9-a03a804d4538\") " Jan 29 16:20:34.300670 kubelet[2887]: I0129 16:20:34.300325 2887 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-cni-path\") pod \"39785176-f5d8-401f-82c9-a03a804d4538\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " Jan 29 16:20:34.300670 kubelet[2887]: I0129 16:20:34.300335 2887 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-cilium-cgroup\") pod \"39785176-f5d8-401f-82c9-a03a804d4538\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " Jan 29 16:20:34.300670 kubelet[2887]: I0129 16:20:34.300354 2887 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-cilium-run\") pod \"39785176-f5d8-401f-82c9-a03a804d4538\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " Jan 29 16:20:34.300832 kubelet[2887]: I0129 16:20:34.300364 2887 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-lib-modules\") pod \"39785176-f5d8-401f-82c9-a03a804d4538\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " Jan 29 16:20:34.300832 kubelet[2887]: I0129 16:20:34.300377 2887 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-xtables-lock\") pod \"39785176-f5d8-401f-82c9-a03a804d4538\" (UID: \"39785176-f5d8-401f-82c9-a03a804d4538\") " Jan 29 16:20:34.300832 kubelet[2887]: I0129 16:20:34.300402 2887 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/152c5eba-02f2-4052-8131-dc96254cb7fe-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.300832 kubelet[2887]: I0129 16:20:34.300411 2887 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-r5c2c\" (UniqueName: \"kubernetes.io/projected/152c5eba-02f2-4052-8131-dc96254cb7fe-kube-api-access-r5c2c\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.300832 kubelet[2887]: I0129 16:20:34.300447 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "39785176-f5d8-401f-82c9-a03a804d4538" (UID: "39785176-f5d8-401f-82c9-a03a804d4538"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:20:34.305099 kubelet[2887]: I0129 16:20:34.304688 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "39785176-f5d8-401f-82c9-a03a804d4538" (UID: "39785176-f5d8-401f-82c9-a03a804d4538"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:20:34.305099 kubelet[2887]: I0129 16:20:34.304888 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "39785176-f5d8-401f-82c9-a03a804d4538" (UID: "39785176-f5d8-401f-82c9-a03a804d4538"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:20:34.305099 kubelet[2887]: I0129 16:20:34.304911 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "39785176-f5d8-401f-82c9-a03a804d4538" (UID: "39785176-f5d8-401f-82c9-a03a804d4538"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:20:34.305099 kubelet[2887]: I0129 16:20:34.304934 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "39785176-f5d8-401f-82c9-a03a804d4538" (UID: "39785176-f5d8-401f-82c9-a03a804d4538"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:20:34.305099 kubelet[2887]: I0129 16:20:34.304948 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-hostproc" (OuterVolumeSpecName: "hostproc") pod "39785176-f5d8-401f-82c9-a03a804d4538" (UID: "39785176-f5d8-401f-82c9-a03a804d4538"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:20:34.305265 kubelet[2887]: I0129 16:20:34.304960 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-cni-path" (OuterVolumeSpecName: "cni-path") pod "39785176-f5d8-401f-82c9-a03a804d4538" (UID: "39785176-f5d8-401f-82c9-a03a804d4538"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:20:34.305265 kubelet[2887]: I0129 16:20:34.304974 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "39785176-f5d8-401f-82c9-a03a804d4538" (UID: "39785176-f5d8-401f-82c9-a03a804d4538"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:20:34.305265 kubelet[2887]: I0129 16:20:34.304988 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "39785176-f5d8-401f-82c9-a03a804d4538" (UID: "39785176-f5d8-401f-82c9-a03a804d4538"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:20:34.305265 kubelet[2887]: I0129 16:20:34.305003 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "39785176-f5d8-401f-82c9-a03a804d4538" (UID: "39785176-f5d8-401f-82c9-a03a804d4538"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:20:34.358273 kubelet[2887]: I0129 16:20:34.358235 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39785176-f5d8-401f-82c9-a03a804d4538-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "39785176-f5d8-401f-82c9-a03a804d4538" (UID: "39785176-f5d8-401f-82c9-a03a804d4538"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:20:34.359432 kubelet[2887]: I0129 16:20:34.359410 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39785176-f5d8-401f-82c9-a03a804d4538-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "39785176-f5d8-401f-82c9-a03a804d4538" (UID: "39785176-f5d8-401f-82c9-a03a804d4538"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:20:34.364821 kubelet[2887]: I0129 16:20:34.364787 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39785176-f5d8-401f-82c9-a03a804d4538-kube-api-access-qslsv" (OuterVolumeSpecName: "kube-api-access-qslsv") pod "39785176-f5d8-401f-82c9-a03a804d4538" (UID: "39785176-f5d8-401f-82c9-a03a804d4538"). InnerVolumeSpecName "kube-api-access-qslsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:20:34.364821 kubelet[2887]: I0129 16:20:34.364787 2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39785176-f5d8-401f-82c9-a03a804d4538-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "39785176-f5d8-401f-82c9-a03a804d4538" (UID: "39785176-f5d8-401f-82c9-a03a804d4538"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:20:34.401169 kubelet[2887]: I0129 16:20:34.401139 2887 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.401169 kubelet[2887]: I0129 16:20:34.401165 2887 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.401169 kubelet[2887]: I0129 16:20:34.401174 2887 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.401333 kubelet[2887]: I0129 16:20:34.401181 2887 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.401333 kubelet[2887]: I0129 16:20:34.401186 2887 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.401333 kubelet[2887]: I0129 16:20:34.401192 2887 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qslsv\" (UniqueName: \"kubernetes.io/projected/39785176-f5d8-401f-82c9-a03a804d4538-kube-api-access-qslsv\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.401333 kubelet[2887]: I0129 16:20:34.401199 2887 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39785176-f5d8-401f-82c9-a03a804d4538-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.401333 kubelet[2887]: I0129 16:20:34.401205 2887 
reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39785176-f5d8-401f-82c9-a03a804d4538-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.401333 kubelet[2887]: I0129 16:20:34.401211 2887 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.401333 kubelet[2887]: I0129 16:20:34.401217 2887 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39785176-f5d8-401f-82c9-a03a804d4538-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.401333 kubelet[2887]: I0129 16:20:34.401223 2887 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.401553 kubelet[2887]: I0129 16:20:34.401228 2887 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.401553 kubelet[2887]: I0129 16:20:34.401234 2887 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.401553 kubelet[2887]: I0129 16:20:34.401240 2887 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39785176-f5d8-401f-82c9-a03a804d4538-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 29 16:20:34.921107 kubelet[2887]: I0129 16:20:34.921085 2887 scope.go:117] "RemoveContainer" 
containerID="f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec" Jan 29 16:20:34.929521 containerd[1573]: time="2025-01-29T16:20:34.929373593Z" level=info msg="RemoveContainer for \"f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec\"" Jan 29 16:20:34.934162 systemd[1]: Removed slice kubepods-burstable-pod39785176_f5d8_401f_82c9_a03a804d4538.slice - libcontainer container kubepods-burstable-pod39785176_f5d8_401f_82c9_a03a804d4538.slice. Jan 29 16:20:34.934314 systemd[1]: kubepods-burstable-pod39785176_f5d8_401f_82c9_a03a804d4538.slice: Consumed 4.525s CPU time, 187.1M memory peak, 66.3M read from disk, 13.3M written to disk. Jan 29 16:20:34.934625 containerd[1573]: time="2025-01-29T16:20:34.934416129Z" level=info msg="RemoveContainer for \"f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec\" returns successfully" Jan 29 16:20:34.934752 kubelet[2887]: I0129 16:20:34.934580 2887 scope.go:117] "RemoveContainer" containerID="f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec" Jan 29 16:20:34.934780 containerd[1573]: time="2025-01-29T16:20:34.934695981Z" level=error msg="ContainerStatus for \"f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec\": not found" Jan 29 16:20:34.937702 kubelet[2887]: E0129 16:20:34.937685 2887 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec\": not found" containerID="f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec" Jan 29 16:20:34.939211 kubelet[2887]: I0129 16:20:34.938891 2887 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec"} err="failed to get container status \"f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"f870bdbf491a5e9caea5f705eac9acb08a933ba81dd1c43f9d50946f5cb542ec\": not found" Jan 29 16:20:34.939211 kubelet[2887]: I0129 16:20:34.938944 2887 scope.go:117] "RemoveContainer" containerID="9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91" Jan 29 16:20:34.940296 containerd[1573]: time="2025-01-29T16:20:34.940275574Z" level=info msg="RemoveContainer for \"9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91\"" Jan 29 16:20:34.941848 containerd[1573]: time="2025-01-29T16:20:34.941793940Z" level=info msg="RemoveContainer for \"9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91\" returns successfully" Jan 29 16:20:34.942014 kubelet[2887]: I0129 16:20:34.942000 2887 scope.go:117] "RemoveContainer" containerID="f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe" Jan 29 16:20:34.942461 containerd[1573]: time="2025-01-29T16:20:34.942450736Z" level=info msg="RemoveContainer for \"f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe\"" Jan 29 16:20:34.944767 containerd[1573]: time="2025-01-29T16:20:34.944747786Z" level=info msg="RemoveContainer for \"f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe\" returns successfully" Jan 29 16:20:34.945768 kubelet[2887]: I0129 16:20:34.945604 2887 scope.go:117] "RemoveContainer" containerID="67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00" Jan 29 16:20:34.946474 containerd[1573]: time="2025-01-29T16:20:34.946240318Z" level=info msg="RemoveContainer for \"67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00\"" Jan 29 16:20:34.947494 containerd[1573]: time="2025-01-29T16:20:34.947477753Z" level=info msg="RemoveContainer for 
\"67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00\" returns successfully" Jan 29 16:20:34.947619 kubelet[2887]: I0129 16:20:34.947611 2887 scope.go:117] "RemoveContainer" containerID="324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613" Jan 29 16:20:34.948269 containerd[1573]: time="2025-01-29T16:20:34.948230793Z" level=info msg="RemoveContainer for \"324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613\"" Jan 29 16:20:34.949518 containerd[1573]: time="2025-01-29T16:20:34.949496116Z" level=info msg="RemoveContainer for \"324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613\" returns successfully" Jan 29 16:20:34.949733 kubelet[2887]: I0129 16:20:34.949629 2887 scope.go:117] "RemoveContainer" containerID="41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42" Jan 29 16:20:34.950404 containerd[1573]: time="2025-01-29T16:20:34.950182973Z" level=info msg="RemoveContainer for \"41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42\"" Jan 29 16:20:34.951357 containerd[1573]: time="2025-01-29T16:20:34.951343945Z" level=info msg="RemoveContainer for \"41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42\" returns successfully" Jan 29 16:20:34.951570 kubelet[2887]: I0129 16:20:34.951463 2887 scope.go:117] "RemoveContainer" containerID="9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91" Jan 29 16:20:34.951712 containerd[1573]: time="2025-01-29T16:20:34.951657989Z" level=error msg="ContainerStatus for \"9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91\": not found" Jan 29 16:20:34.951853 kubelet[2887]: E0129 16:20:34.951798 2887 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91\": not found" containerID="9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91" Jan 29 16:20:34.951853 kubelet[2887]: I0129 16:20:34.951812 2887 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91"} err="failed to get container status \"9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91\": rpc error: code = NotFound desc = an error occurred when try to find container \"9726c292aa44fc1200fd26af6aa4a90db92b1e48d66ee57222d73e5113bbff91\": not found" Jan 29 16:20:34.951853 kubelet[2887]: I0129 16:20:34.951824 2887 scope.go:117] "RemoveContainer" containerID="f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe" Jan 29 16:20:34.952187 containerd[1573]: time="2025-01-29T16:20:34.951989219Z" level=error msg="ContainerStatus for \"f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe\": not found" Jan 29 16:20:34.952228 kubelet[2887]: E0129 16:20:34.952147 2887 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe\": not found" containerID="f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe" Jan 29 16:20:34.952416 kubelet[2887]: I0129 16:20:34.952262 2887 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe"} err="failed to get container status \"f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"f4464f278ef10baea3d4b60432ce79a0b9083a090f98828a26b28d3fec84a4fe\": not found" Jan 29 16:20:34.952416 kubelet[2887]: I0129 16:20:34.952274 2887 scope.go:117] "RemoveContainer" containerID="67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00" Jan 29 16:20:34.952472 containerd[1573]: time="2025-01-29T16:20:34.952371031Z" level=error msg="ContainerStatus for \"67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00\": not found" Jan 29 16:20:34.952616 kubelet[2887]: E0129 16:20:34.952515 2887 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00\": not found" containerID="67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00" Jan 29 16:20:34.952616 kubelet[2887]: I0129 16:20:34.952525 2887 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00"} err="failed to get container status \"67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00\": rpc error: code = NotFound desc = an error occurred when try to find container \"67fe565a0a7e111d7e5ee3797665057b9c2aca869e5cb46a58af7a9fa2fe1d00\": not found" Jan 29 16:20:34.952616 kubelet[2887]: I0129 16:20:34.952535 2887 scope.go:117] "RemoveContainer" containerID="324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613" Jan 29 16:20:34.952799 containerd[1573]: time="2025-01-29T16:20:34.952705157Z" level=error msg="ContainerStatus for \"324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613\": not found"
Jan 29 16:20:34.952955 kubelet[2887]: E0129 16:20:34.952866 2887 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613\": not found" containerID="324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613"
Jan 29 16:20:34.953090 kubelet[2887]: I0129 16:20:34.952880 2887 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613"} err="failed to get container status \"324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613\": rpc error: code = NotFound desc = an error occurred when try to find container \"324c8dd38ec7be8c66c9c73a770ee30254b21703e2fe589bdece315748b0f613\": not found"
Jan 29 16:20:34.953090 kubelet[2887]: I0129 16:20:34.952999 2887 scope.go:117] "RemoveContainer" containerID="41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42"
Jan 29 16:20:34.953295 containerd[1573]: time="2025-01-29T16:20:34.953206106Z" level=error msg="ContainerStatus for \"41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42\": not found"
Jan 29 16:20:34.953446 kubelet[2887]: E0129 16:20:34.953379 2887 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42\": not found" containerID="41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42"
Jan 29 16:20:34.953446 kubelet[2887]: I0129 16:20:34.953395 2887 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42"} err="failed to get container status \"41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42\": rpc error: code = NotFound desc = an error occurred when try to find container \"41f5b548e28725098abd3a6ddb5c3ddc4903b44c5dacbf49630a90808c919e42\": not found"
Jan 29 16:20:35.002853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537-rootfs.mount: Deactivated successfully.
Jan 29 16:20:35.002937 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-427322c5f57b6139bf36bc53bc76b7446b396d88a33f1279cf40b75e5435e537-shm.mount: Deactivated successfully.
Jan 29 16:20:35.002997 systemd[1]: var-lib-kubelet-pods-152c5eba\x2d02f2\x2d4052\x2d8131\x2ddc96254cb7fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr5c2c.mount: Deactivated successfully.
Jan 29 16:20:35.003059 systemd[1]: var-lib-kubelet-pods-39785176\x2df5d8\x2d401f\x2d82c9\x2da03a804d4538-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqslsv.mount: Deactivated successfully.
Jan 29 16:20:35.003112 systemd[1]: var-lib-kubelet-pods-39785176\x2df5d8\x2d401f\x2d82c9\x2da03a804d4538-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 29 16:20:35.003167 systemd[1]: var-lib-kubelet-pods-39785176\x2df5d8\x2d401f\x2d82c9\x2da03a804d4538-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 29 16:20:35.933250 sshd[4714]: Connection closed by 139.178.89.65 port 39674
Jan 29 16:20:35.933618 sshd-session[4711]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:35.939706 systemd[1]: sshd@64-139.178.70.99:22-139.178.89.65:39674.service: Deactivated successfully.
Jan 29 16:20:35.941076 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 16:20:35.942088 systemd-logind[1549]: Session 25 logged out. Waiting for processes to exit.
Jan 29 16:20:35.949126 systemd[1]: Started sshd@65-139.178.70.99:22-139.178.89.65:39684.service - OpenSSH per-connection server daemon (139.178.89.65:39684).
Jan 29 16:20:35.950651 systemd-logind[1549]: Removed session 25.
Jan 29 16:20:35.990089 sshd[4874]: Accepted publickey for core from 139.178.89.65 port 39684 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:20:35.990772 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:35.994280 systemd-logind[1549]: New session 26 of user core.
Jan 29 16:20:36.001642 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 16:20:36.267931 kubelet[2887]: I0129 16:20:36.267909 2887 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="152c5eba-02f2-4052-8131-dc96254cb7fe" path="/var/lib/kubelet/pods/152c5eba-02f2-4052-8131-dc96254cb7fe/volumes"
Jan 29 16:20:36.268413 kubelet[2887]: I0129 16:20:36.268399 2887 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39785176-f5d8-401f-82c9-a03a804d4538" path="/var/lib/kubelet/pods/39785176-f5d8-401f-82c9-a03a804d4538/volumes"
Jan 29 16:20:36.431576 sshd[4877]: Connection closed by 139.178.89.65 port 39684
Jan 29 16:20:36.431402 sshd-session[4874]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:36.442736 systemd[1]: sshd@65-139.178.70.99:22-139.178.89.65:39684.service: Deactivated successfully.
Jan 29 16:20:36.444887 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 16:20:36.446056 systemd-logind[1549]: Session 26 logged out. Waiting for processes to exit.
Jan 29 16:20:36.453667 systemd[1]: Started sshd@66-139.178.70.99:22-139.178.89.65:39690.service - OpenSSH per-connection server daemon (139.178.89.65:39690).
Jan 29 16:20:36.458031 systemd-logind[1549]: Removed session 26.
Jan 29 16:20:36.464607 kubelet[2887]: I0129 16:20:36.464307 2887 topology_manager.go:215] "Topology Admit Handler" podUID="33de9469-75f9-4c47-b1a5-0e7865244475" podNamespace="kube-system" podName="cilium-f56rd"
Jan 29 16:20:36.465394 kubelet[2887]: E0129 16:20:36.465375 2887 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="39785176-f5d8-401f-82c9-a03a804d4538" containerName="cilium-agent"
Jan 29 16:20:36.465394 kubelet[2887]: E0129 16:20:36.465390 2887 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="39785176-f5d8-401f-82c9-a03a804d4538" containerName="mount-cgroup"
Jan 29 16:20:36.465394 kubelet[2887]: E0129 16:20:36.465394 2887 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="39785176-f5d8-401f-82c9-a03a804d4538" containerName="apply-sysctl-overwrites"
Jan 29 16:20:36.465466 kubelet[2887]: E0129 16:20:36.465398 2887 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="39785176-f5d8-401f-82c9-a03a804d4538" containerName="clean-cilium-state"
Jan 29 16:20:36.465466 kubelet[2887]: E0129 16:20:36.465402 2887 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="39785176-f5d8-401f-82c9-a03a804d4538" containerName="mount-bpf-fs"
Jan 29 16:20:36.465466 kubelet[2887]: E0129 16:20:36.465405 2887 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="152c5eba-02f2-4052-8131-dc96254cb7fe" containerName="cilium-operator"
Jan 29 16:20:36.465466 kubelet[2887]: I0129 16:20:36.465426 2887 memory_manager.go:354] "RemoveStaleState removing state" podUID="39785176-f5d8-401f-82c9-a03a804d4538" containerName="cilium-agent"
Jan 29 16:20:36.465466 kubelet[2887]: I0129 16:20:36.465431 2887 memory_manager.go:354] "RemoveStaleState removing state" podUID="152c5eba-02f2-4052-8131-dc96254cb7fe" containerName="cilium-operator"
Jan 29 16:20:36.483001 systemd[1]: Created slice kubepods-burstable-pod33de9469_75f9_4c47_b1a5_0e7865244475.slice - libcontainer container kubepods-burstable-pod33de9469_75f9_4c47_b1a5_0e7865244475.slice.
Jan 29 16:20:36.502238 sshd[4887]: Accepted publickey for core from 139.178.89.65 port 39690 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:20:36.502772 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:36.507346 systemd-logind[1549]: New session 27 of user core.
Jan 29 16:20:36.510885 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 16:20:36.514100 kubelet[2887]: I0129 16:20:36.513873 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33de9469-75f9-4c47-b1a5-0e7865244475-lib-modules\") pod \"cilium-f56rd\" (UID: \"33de9469-75f9-4c47-b1a5-0e7865244475\") " pod="kube-system/cilium-f56rd"
Jan 29 16:20:36.514100 kubelet[2887]: I0129 16:20:36.513896 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/33de9469-75f9-4c47-b1a5-0e7865244475-hubble-tls\") pod \"cilium-f56rd\" (UID: \"33de9469-75f9-4c47-b1a5-0e7865244475\") " pod="kube-system/cilium-f56rd"
Jan 29 16:20:36.514100 kubelet[2887]: I0129 16:20:36.513910 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67zqj\" (UniqueName: \"kubernetes.io/projected/33de9469-75f9-4c47-b1a5-0e7865244475-kube-api-access-67zqj\") pod \"cilium-f56rd\" (UID: \"33de9469-75f9-4c47-b1a5-0e7865244475\") " pod="kube-system/cilium-f56rd"
Jan 29 16:20:36.514100 kubelet[2887]: I0129 16:20:36.513924 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/33de9469-75f9-4c47-b1a5-0e7865244475-cni-path\") pod \"cilium-f56rd\" (UID: \"33de9469-75f9-4c47-b1a5-0e7865244475\") " pod="kube-system/cilium-f56rd"
Jan 29 16:20:36.514100 kubelet[2887]: I0129 16:20:36.513934 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/33de9469-75f9-4c47-b1a5-0e7865244475-cilium-cgroup\") pod \"cilium-f56rd\" (UID: \"33de9469-75f9-4c47-b1a5-0e7865244475\") " pod="kube-system/cilium-f56rd"
Jan 29 16:20:36.514100 kubelet[2887]: I0129 16:20:36.513944 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33de9469-75f9-4c47-b1a5-0e7865244475-cilium-config-path\") pod \"cilium-f56rd\" (UID: \"33de9469-75f9-4c47-b1a5-0e7865244475\") " pod="kube-system/cilium-f56rd"
Jan 29 16:20:36.514282 kubelet[2887]: I0129 16:20:36.513954 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/33de9469-75f9-4c47-b1a5-0e7865244475-host-proc-sys-net\") pod \"cilium-f56rd\" (UID: \"33de9469-75f9-4c47-b1a5-0e7865244475\") " pod="kube-system/cilium-f56rd"
Jan 29 16:20:36.514282 kubelet[2887]: I0129 16:20:36.513966 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/33de9469-75f9-4c47-b1a5-0e7865244475-bpf-maps\") pod \"cilium-f56rd\" (UID: \"33de9469-75f9-4c47-b1a5-0e7865244475\") " pod="kube-system/cilium-f56rd"
Jan 29 16:20:36.514282 kubelet[2887]: I0129 16:20:36.513978 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/33de9469-75f9-4c47-b1a5-0e7865244475-clustermesh-secrets\") pod \"cilium-f56rd\" (UID: \"33de9469-75f9-4c47-b1a5-0e7865244475\") " pod="kube-system/cilium-f56rd"
Jan 29 16:20:36.514282 kubelet[2887]: I0129 16:20:36.513987 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/33de9469-75f9-4c47-b1a5-0e7865244475-host-proc-sys-kernel\") pod \"cilium-f56rd\" (UID: \"33de9469-75f9-4c47-b1a5-0e7865244475\") " pod="kube-system/cilium-f56rd"
Jan 29 16:20:36.514282 kubelet[2887]: I0129 16:20:36.513997 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/33de9469-75f9-4c47-b1a5-0e7865244475-cilium-run\") pod \"cilium-f56rd\" (UID: \"33de9469-75f9-4c47-b1a5-0e7865244475\") " pod="kube-system/cilium-f56rd"
Jan 29 16:20:36.514282 kubelet[2887]: I0129 16:20:36.514006 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33de9469-75f9-4c47-b1a5-0e7865244475-xtables-lock\") pod \"cilium-f56rd\" (UID: \"33de9469-75f9-4c47-b1a5-0e7865244475\") " pod="kube-system/cilium-f56rd"
Jan 29 16:20:36.514380 kubelet[2887]: I0129 16:20:36.514014 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/33de9469-75f9-4c47-b1a5-0e7865244475-cilium-ipsec-secrets\") pod \"cilium-f56rd\" (UID: \"33de9469-75f9-4c47-b1a5-0e7865244475\") " pod="kube-system/cilium-f56rd"
Jan 29 16:20:36.514380 kubelet[2887]: I0129 16:20:36.514025 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/33de9469-75f9-4c47-b1a5-0e7865244475-hostproc\") pod \"cilium-f56rd\" (UID: \"33de9469-75f9-4c47-b1a5-0e7865244475\") " pod="kube-system/cilium-f56rd"
Jan 29 16:20:36.514380 kubelet[2887]: I0129 16:20:36.514034 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33de9469-75f9-4c47-b1a5-0e7865244475-etc-cni-netd\") pod \"cilium-f56rd\" (UID: \"33de9469-75f9-4c47-b1a5-0e7865244475\") " pod="kube-system/cilium-f56rd"
Jan 29 16:20:36.559951 sshd[4890]: Connection closed by 139.178.89.65 port 39690
Jan 29 16:20:36.560859 sshd-session[4887]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:36.570803 systemd[1]: sshd@66-139.178.70.99:22-139.178.89.65:39690.service: Deactivated successfully.
Jan 29 16:20:36.571751 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 16:20:36.572685 systemd-logind[1549]: Session 27 logged out. Waiting for processes to exit.
Jan 29 16:20:36.575773 systemd[1]: Started sshd@67-139.178.70.99:22-139.178.89.65:39706.service - OpenSSH per-connection server daemon (139.178.89.65:39706).
Jan 29 16:20:36.576803 systemd-logind[1549]: Removed session 27.
Jan 29 16:20:36.609572 sshd[4896]: Accepted publickey for core from 139.178.89.65 port 39706 ssh2: RSA SHA256:6LYGTD2d+WJ9CHN26VIWYEcYfDEeR6/GPdyObBNeTC0
Jan 29 16:20:36.610359 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:36.613352 systemd-logind[1549]: New session 28 of user core.
Jan 29 16:20:36.618817 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 29 16:20:36.788808 containerd[1573]: time="2025-01-29T16:20:36.788778144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f56rd,Uid:33de9469-75f9-4c47-b1a5-0e7865244475,Namespace:kube-system,Attempt:0,}"
Jan 29 16:20:36.804345 containerd[1573]: time="2025-01-29T16:20:36.803924448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:20:36.804345 containerd[1573]: time="2025-01-29T16:20:36.803963874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:20:36.804345 containerd[1573]: time="2025-01-29T16:20:36.803972280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:20:36.804345 containerd[1573]: time="2025-01-29T16:20:36.804033023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:20:36.817762 systemd[1]: Started cri-containerd-2111c925f3a8629974f39fabff05807ea12b5fcb8924d9dd20a4670088494509.scope - libcontainer container 2111c925f3a8629974f39fabff05807ea12b5fcb8924d9dd20a4670088494509.
Jan 29 16:20:36.835519 containerd[1573]: time="2025-01-29T16:20:36.835484669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f56rd,Uid:33de9469-75f9-4c47-b1a5-0e7865244475,Namespace:kube-system,Attempt:0,} returns sandbox id \"2111c925f3a8629974f39fabff05807ea12b5fcb8924d9dd20a4670088494509\""
Jan 29 16:20:36.838551 containerd[1573]: time="2025-01-29T16:20:36.838458855Z" level=info msg="CreateContainer within sandbox \"2111c925f3a8629974f39fabff05807ea12b5fcb8924d9dd20a4670088494509\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 16:20:36.847993 containerd[1573]: time="2025-01-29T16:20:36.847945935Z" level=info msg="CreateContainer within sandbox \"2111c925f3a8629974f39fabff05807ea12b5fcb8924d9dd20a4670088494509\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7c26fe3e5406e20dae1a888085acbb542e330a62cba85e2f6cd8a65442e23d61\""
Jan 29 16:20:36.849502 containerd[1573]: time="2025-01-29T16:20:36.848943326Z" level=info msg="StartContainer for \"7c26fe3e5406e20dae1a888085acbb542e330a62cba85e2f6cd8a65442e23d61\""
Jan 29 16:20:36.870712 systemd[1]: Started cri-containerd-7c26fe3e5406e20dae1a888085acbb542e330a62cba85e2f6cd8a65442e23d61.scope - libcontainer container 7c26fe3e5406e20dae1a888085acbb542e330a62cba85e2f6cd8a65442e23d61.
Jan 29 16:20:36.886492 containerd[1573]: time="2025-01-29T16:20:36.886452335Z" level=info msg="StartContainer for \"7c26fe3e5406e20dae1a888085acbb542e330a62cba85e2f6cd8a65442e23d61\" returns successfully"
Jan 29 16:20:36.903114 systemd[1]: cri-containerd-7c26fe3e5406e20dae1a888085acbb542e330a62cba85e2f6cd8a65442e23d61.scope: Deactivated successfully.
Jan 29 16:20:36.903289 systemd[1]: cri-containerd-7c26fe3e5406e20dae1a888085acbb542e330a62cba85e2f6cd8a65442e23d61.scope: Consumed 12ms CPU time, 9.6M memory peak, 3.2M read from disk.
Jan 29 16:20:36.928023 containerd[1573]: time="2025-01-29T16:20:36.927933440Z" level=info msg="shim disconnected" id=7c26fe3e5406e20dae1a888085acbb542e330a62cba85e2f6cd8a65442e23d61 namespace=k8s.io
Jan 29 16:20:36.928023 containerd[1573]: time="2025-01-29T16:20:36.927977176Z" level=warning msg="cleaning up after shim disconnected" id=7c26fe3e5406e20dae1a888085acbb542e330a62cba85e2f6cd8a65442e23d61 namespace=k8s.io
Jan 29 16:20:36.928023 containerd[1573]: time="2025-01-29T16:20:36.927984465Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:20:36.937812 containerd[1573]: time="2025-01-29T16:20:36.937782835Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:20:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 16:20:37.935808 containerd[1573]: time="2025-01-29T16:20:37.935780191Z" level=info msg="CreateContainer within sandbox \"2111c925f3a8629974f39fabff05807ea12b5fcb8924d9dd20a4670088494509\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:20:37.943598 containerd[1573]: time="2025-01-29T16:20:37.943459385Z" level=info msg="CreateContainer within sandbox \"2111c925f3a8629974f39fabff05807ea12b5fcb8924d9dd20a4670088494509\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0b2fc86f85ed8d3a2ea513bedf0999c846be6e293b0a22a988c6279200fe7285\""
Jan 29 16:20:37.945348 containerd[1573]: time="2025-01-29T16:20:37.944781873Z" level=info msg="StartContainer for \"0b2fc86f85ed8d3a2ea513bedf0999c846be6e293b0a22a988c6279200fe7285\""
Jan 29 16:20:37.965640 systemd[1]: Started cri-containerd-0b2fc86f85ed8d3a2ea513bedf0999c846be6e293b0a22a988c6279200fe7285.scope - libcontainer container 0b2fc86f85ed8d3a2ea513bedf0999c846be6e293b0a22a988c6279200fe7285.
Jan 29 16:20:37.989946 containerd[1573]: time="2025-01-29T16:20:37.988825018Z" level=info msg="StartContainer for \"0b2fc86f85ed8d3a2ea513bedf0999c846be6e293b0a22a988c6279200fe7285\" returns successfully"
Jan 29 16:20:38.021632 systemd[1]: cri-containerd-0b2fc86f85ed8d3a2ea513bedf0999c846be6e293b0a22a988c6279200fe7285.scope: Deactivated successfully.
Jan 29 16:20:38.022450 systemd[1]: cri-containerd-0b2fc86f85ed8d3a2ea513bedf0999c846be6e293b0a22a988c6279200fe7285.scope: Consumed 11ms CPU time, 7.3M memory peak, 2.2M read from disk.
Jan 29 16:20:38.041678 containerd[1573]: time="2025-01-29T16:20:38.041606197Z" level=info msg="shim disconnected" id=0b2fc86f85ed8d3a2ea513bedf0999c846be6e293b0a22a988c6279200fe7285 namespace=k8s.io
Jan 29 16:20:38.041678 containerd[1573]: time="2025-01-29T16:20:38.041642565Z" level=warning msg="cleaning up after shim disconnected" id=0b2fc86f85ed8d3a2ea513bedf0999c846be6e293b0a22a988c6279200fe7285 namespace=k8s.io
Jan 29 16:20:38.041678 containerd[1573]: time="2025-01-29T16:20:38.041647985Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:20:38.324158 kubelet[2887]: E0129 16:20:38.324091 2887 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 16:20:38.623171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b2fc86f85ed8d3a2ea513bedf0999c846be6e293b0a22a988c6279200fe7285-rootfs.mount: Deactivated successfully.
Jan 29 16:20:38.935632 containerd[1573]: time="2025-01-29T16:20:38.935499982Z" level=info msg="CreateContainer within sandbox \"2111c925f3a8629974f39fabff05807ea12b5fcb8924d9dd20a4670088494509\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 16:20:39.009285 containerd[1573]: time="2025-01-29T16:20:39.009249780Z" level=info msg="CreateContainer within sandbox \"2111c925f3a8629974f39fabff05807ea12b5fcb8924d9dd20a4670088494509\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"387e036cc00c7fcbeb560189accb8f78a227f7f27dab1310b8473752926027c0\""
Jan 29 16:20:39.009697 containerd[1573]: time="2025-01-29T16:20:39.009582391Z" level=info msg="StartContainer for \"387e036cc00c7fcbeb560189accb8f78a227f7f27dab1310b8473752926027c0\""
Jan 29 16:20:39.039743 systemd[1]: Started cri-containerd-387e036cc00c7fcbeb560189accb8f78a227f7f27dab1310b8473752926027c0.scope - libcontainer container 387e036cc00c7fcbeb560189accb8f78a227f7f27dab1310b8473752926027c0.
Jan 29 16:20:39.064774 containerd[1573]: time="2025-01-29T16:20:39.064697635Z" level=info msg="StartContainer for \"387e036cc00c7fcbeb560189accb8f78a227f7f27dab1310b8473752926027c0\" returns successfully"
Jan 29 16:20:39.071859 systemd[1]: cri-containerd-387e036cc00c7fcbeb560189accb8f78a227f7f27dab1310b8473752926027c0.scope: Deactivated successfully.
Jan 29 16:20:39.092623 containerd[1573]: time="2025-01-29T16:20:39.092511540Z" level=info msg="shim disconnected" id=387e036cc00c7fcbeb560189accb8f78a227f7f27dab1310b8473752926027c0 namespace=k8s.io
Jan 29 16:20:39.092787 containerd[1573]: time="2025-01-29T16:20:39.092775707Z" level=warning msg="cleaning up after shim disconnected" id=387e036cc00c7fcbeb560189accb8f78a227f7f27dab1310b8473752926027c0 namespace=k8s.io
Jan 29 16:20:39.092924 containerd[1573]: time="2025-01-29T16:20:39.092867015Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:20:39.622095 systemd[1]: run-containerd-runc-k8s.io-387e036cc00c7fcbeb560189accb8f78a227f7f27dab1310b8473752926027c0-runc.jpKAfG.mount: Deactivated successfully.
Jan 29 16:20:39.622170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-387e036cc00c7fcbeb560189accb8f78a227f7f27dab1310b8473752926027c0-rootfs.mount: Deactivated successfully.
Jan 29 16:20:39.941285 containerd[1573]: time="2025-01-29T16:20:39.941125975Z" level=info msg="CreateContainer within sandbox \"2111c925f3a8629974f39fabff05807ea12b5fcb8924d9dd20a4670088494509\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 16:20:39.948864 containerd[1573]: time="2025-01-29T16:20:39.948629724Z" level=info msg="CreateContainer within sandbox \"2111c925f3a8629974f39fabff05807ea12b5fcb8924d9dd20a4670088494509\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2acfc5671cad1e059ff46f9770c8fb3b90a6df14157afeab971459938adb591b\""
Jan 29 16:20:39.951594 containerd[1573]: time="2025-01-29T16:20:39.950073478Z" level=info msg="StartContainer for \"2acfc5671cad1e059ff46f9770c8fb3b90a6df14157afeab971459938adb591b\""
Jan 29 16:20:39.951578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1977626527.mount: Deactivated successfully.
Jan 29 16:20:39.979655 systemd[1]: Started cri-containerd-2acfc5671cad1e059ff46f9770c8fb3b90a6df14157afeab971459938adb591b.scope - libcontainer container 2acfc5671cad1e059ff46f9770c8fb3b90a6df14157afeab971459938adb591b.
Jan 29 16:20:39.993230 systemd[1]: cri-containerd-2acfc5671cad1e059ff46f9770c8fb3b90a6df14157afeab971459938adb591b.scope: Deactivated successfully.
Jan 29 16:20:40.013854 containerd[1573]: time="2025-01-29T16:20:40.013817505Z" level=info msg="StartContainer for \"2acfc5671cad1e059ff46f9770c8fb3b90a6df14157afeab971459938adb591b\" returns successfully"
Jan 29 16:20:40.081265 containerd[1573]: time="2025-01-29T16:20:40.081202462Z" level=info msg="shim disconnected" id=2acfc5671cad1e059ff46f9770c8fb3b90a6df14157afeab971459938adb591b namespace=k8s.io
Jan 29 16:20:40.081499 containerd[1573]: time="2025-01-29T16:20:40.081328704Z" level=warning msg="cleaning up after shim disconnected" id=2acfc5671cad1e059ff46f9770c8fb3b90a6df14157afeab971459938adb591b namespace=k8s.io
Jan 29 16:20:40.081499 containerd[1573]: time="2025-01-29T16:20:40.081338463Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:20:40.622515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2acfc5671cad1e059ff46f9770c8fb3b90a6df14157afeab971459938adb591b-rootfs.mount: Deactivated successfully.
Jan 29 16:20:40.944078 containerd[1573]: time="2025-01-29T16:20:40.943595744Z" level=info msg="CreateContainer within sandbox \"2111c925f3a8629974f39fabff05807ea12b5fcb8924d9dd20a4670088494509\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 16:20:40.958583 containerd[1573]: time="2025-01-29T16:20:40.958519998Z" level=info msg="CreateContainer within sandbox \"2111c925f3a8629974f39fabff05807ea12b5fcb8924d9dd20a4670088494509\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4e1e8e688a05ab144b901f824117986ff6a93633cd588cbd373d0fe905bb8862\""
Jan 29 16:20:40.962703 containerd[1573]: time="2025-01-29T16:20:40.960672561Z" level=info msg="StartContainer for \"4e1e8e688a05ab144b901f824117986ff6a93633cd588cbd373d0fe905bb8862\""
Jan 29 16:20:40.962309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519353240.mount: Deactivated successfully.
Jan 29 16:20:40.984187 systemd[1]: Started cri-containerd-4e1e8e688a05ab144b901f824117986ff6a93633cd588cbd373d0fe905bb8862.scope - libcontainer container 4e1e8e688a05ab144b901f824117986ff6a93633cd588cbd373d0fe905bb8862.
Jan 29 16:20:41.007822 containerd[1573]: time="2025-01-29T16:20:41.007791737Z" level=info msg="StartContainer for \"4e1e8e688a05ab144b901f824117986ff6a93633cd588cbd373d0fe905bb8862\" returns successfully"
Jan 29 16:20:41.099355 kubelet[2887]: I0129 16:20:41.099286 2887 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T16:20:41Z","lastTransitionTime":"2025-01-29T16:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 16:20:41.938586 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 29 16:20:41.965908 kubelet[2887]: I0129 16:20:41.964880 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f56rd" podStartSLOduration=5.964864743 podStartE2EDuration="5.964864743s" podCreationTimestamp="2025-01-29 16:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:20:41.964202309 +0000 UTC m=+143.872278452" watchObservedRunningTime="2025-01-29 16:20:41.964864743 +0000 UTC m=+143.872940887"
Jan 29 16:20:49.737260 systemd-networkd[1478]: lxc_health: Link UP
Jan 29 16:20:49.754949 systemd-networkd[1478]: lxc_health: Gained carrier
Jan 29 16:20:49.831529 systemd[1]: run-containerd-runc-k8s.io-4e1e8e688a05ab144b901f824117986ff6a93633cd588cbd373d0fe905bb8862-runc.1lfAKQ.mount: Deactivated successfully.
Jan 29 16:20:51.751685 systemd-networkd[1478]: lxc_health: Gained IPv6LL
Jan 29 16:20:54.163611 systemd[1]: run-containerd-runc-k8s.io-4e1e8e688a05ab144b901f824117986ff6a93633cd588cbd373d0fe905bb8862-runc.T5Sae3.mount: Deactivated successfully.
Jan 29 16:21:02.554603 systemd[1]: run-containerd-runc-k8s.io-4e1e8e688a05ab144b901f824117986ff6a93633cd588cbd373d0fe905bb8862-runc.xKShgU.mount: Deactivated successfully.
Jan 29 16:21:04.642083 systemd[1]: run-containerd-runc-k8s.io-4e1e8e688a05ab144b901f824117986ff6a93633cd588cbd373d0fe905bb8862-runc.xL2lxK.mount: Deactivated successfully.
Jan 29 16:21:04.689447 sshd[4903]: Connection closed by 139.178.89.65 port 39706
Jan 29 16:21:04.692055 sshd-session[4896]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:04.694947 systemd-logind[1549]: Session 28 logged out. Waiting for processes to exit.
Jan 29 16:21:04.695773 systemd[1]: sshd@67-139.178.70.99:22-139.178.89.65:39706.service: Deactivated successfully.
Jan 29 16:21:04.697055 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 16:21:04.698421 systemd-logind[1549]: Removed session 28.
Jan 29 16:21:05.877790 systemd[1]: Started sshd@68-139.178.70.99:22-205.210.31.174:61196.service - OpenSSH per-connection server daemon (205.210.31.174:61196).