Jan 17 12:13:02.749577 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:13:02.749594 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:13:02.749601 kernel: Disabled fast string operations Jan 17 12:13:02.749605 kernel: BIOS-provided physical RAM map: Jan 17 12:13:02.749609 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jan 17 12:13:02.749613 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jan 17 12:13:02.749619 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jan 17 12:13:02.749623 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jan 17 12:13:02.749628 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jan 17 12:13:02.749632 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jan 17 12:13:02.749636 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jan 17 12:13:02.749640 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jan 17 12:13:02.749644 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jan 17 12:13:02.749648 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jan 17 12:13:02.749655 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jan 17 12:13:02.749660 kernel: NX (Execute Disable) protection: active Jan 17 12:13:02.749665 kernel: APIC: Static calls initialized Jan 17 12:13:02.749669 kernel: 
SMBIOS 2.7 present. Jan 17 12:13:02.749674 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jan 17 12:13:02.749679 kernel: vmware: hypercall mode: 0x00 Jan 17 12:13:02.749684 kernel: Hypervisor detected: VMware Jan 17 12:13:02.749689 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jan 17 12:13:02.749695 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jan 17 12:13:02.749704 kernel: vmware: using clock offset of 2760972157 ns Jan 17 12:13:02.749709 kernel: tsc: Detected 3408.000 MHz processor Jan 17 12:13:02.749714 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:13:02.749720 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:13:02.749725 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jan 17 12:13:02.749729 kernel: total RAM covered: 3072M Jan 17 12:13:02.749734 kernel: Found optimal setting for mtrr clean up Jan 17 12:13:02.749740 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jan 17 12:13:02.749746 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jan 17 12:13:02.749751 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:13:02.749756 kernel: Using GB pages for direct mapping Jan 17 12:13:02.749761 kernel: ACPI: Early table checksum verification disabled Jan 17 12:13:02.749766 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jan 17 12:13:02.749770 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jan 17 12:13:02.749775 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jan 17 12:13:02.749780 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jan 17 12:13:02.749791 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jan 17 12:13:02.749800 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jan 17 12:13:02.749805 kernel: ACPI: BOOT 
0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jan 17 12:13:02.749811 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Jan 17 12:13:02.749816 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jan 17 12:13:02.749821 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jan 17 12:13:02.749827 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jan 17 12:13:02.749833 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jan 17 12:13:02.749838 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jan 17 12:13:02.749843 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jan 17 12:13:02.749848 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jan 17 12:13:02.749853 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jan 17 12:13:02.749858 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jan 17 12:13:02.749864 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jan 17 12:13:02.749869 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jan 17 12:13:02.749874 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jan 17 12:13:02.749880 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jan 17 12:13:02.749898 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jan 17 12:13:02.749906 kernel: system APIC only can use physical flat Jan 17 12:13:02.749912 kernel: APIC: Switched APIC routing to: physical flat Jan 17 12:13:02.749917 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 12:13:02.749922 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jan 17 12:13:02.749927 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jan 17 12:13:02.749932 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jan 
17 12:13:02.749937 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jan 17 12:13:02.749944 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jan 17 12:13:02.749949 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jan 17 12:13:02.749954 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jan 17 12:13:02.749959 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jan 17 12:13:02.749964 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jan 17 12:13:02.749969 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jan 17 12:13:02.749974 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jan 17 12:13:02.749979 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jan 17 12:13:02.749984 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jan 17 12:13:02.749989 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jan 17 12:13:02.749995 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jan 17 12:13:02.750000 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jan 17 12:13:02.750005 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jan 17 12:13:02.750010 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jan 17 12:13:02.750015 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jan 17 12:13:02.750020 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jan 17 12:13:02.750026 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jan 17 12:13:02.750031 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jan 17 12:13:02.750036 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jan 17 12:13:02.750041 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jan 17 12:13:02.750047 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jan 17 12:13:02.750052 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jan 17 12:13:02.750057 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jan 17 12:13:02.750062 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jan 17 12:13:02.750067 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Jan 17 12:13:02.750072 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jan 17 12:13:02.750077 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jan 17 12:13:02.750082 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jan 17 12:13:02.750087 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jan 17 12:13:02.750093 
kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jan 17 12:13:02.750098 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jan 17 12:13:02.750104 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jan 17 12:13:02.750109 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jan 17 12:13:02.750114 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jan 17 12:13:02.750119 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jan 17 12:13:02.750124 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jan 17 12:13:02.750129 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jan 17 12:13:02.750134 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jan 17 12:13:02.750139 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jan 17 12:13:02.750144 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jan 17 12:13:02.750150 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jan 17 12:13:02.750156 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jan 17 12:13:02.750161 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jan 17 12:13:02.750166 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jan 17 12:13:02.750171 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jan 17 12:13:02.750176 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jan 17 12:13:02.750181 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jan 17 12:13:02.750186 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jan 17 12:13:02.750191 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Jan 17 12:13:02.750196 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jan 17 12:13:02.750201 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jan 17 12:13:02.750207 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jan 17 12:13:02.750212 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jan 17 12:13:02.750217 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jan 17 12:13:02.750227 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jan 17 12:13:02.750232 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jan 17 12:13:02.750238 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jan 17 12:13:02.750243 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jan 17 12:13:02.750248 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jan 17 12:13:02.750255 kernel: SRAT: PXM 0 
-> APIC 0x80 -> Node 0 Jan 17 12:13:02.750260 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jan 17 12:13:02.750266 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jan 17 12:13:02.750271 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jan 17 12:13:02.750277 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jan 17 12:13:02.750282 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jan 17 12:13:02.750287 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jan 17 12:13:02.750293 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jan 17 12:13:02.750298 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jan 17 12:13:02.750303 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jan 17 12:13:02.750310 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jan 17 12:13:02.750315 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jan 17 12:13:02.750321 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jan 17 12:13:02.750326 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jan 17 12:13:02.750331 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jan 17 12:13:02.750337 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jan 17 12:13:02.750342 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jan 17 12:13:02.750347 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jan 17 12:13:02.750353 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jan 17 12:13:02.750358 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Jan 17 12:13:02.750365 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jan 17 12:13:02.750370 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jan 17 12:13:02.750376 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jan 17 12:13:02.750381 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jan 17 12:13:02.750386 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jan 17 12:13:02.750392 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jan 17 12:13:02.750397 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jan 17 12:13:02.750402 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jan 17 12:13:02.750408 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jan 17 12:13:02.750413 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jan 17 12:13:02.750418 kernel: SRAT: PXM 0 -> APIC 0xbc -> 
Node 0 Jan 17 12:13:02.750425 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jan 17 12:13:02.750430 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jan 17 12:13:02.750436 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jan 17 12:13:02.750441 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jan 17 12:13:02.750446 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jan 17 12:13:02.750452 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jan 17 12:13:02.750457 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jan 17 12:13:02.750463 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jan 17 12:13:02.750468 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jan 17 12:13:02.750473 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jan 17 12:13:02.750480 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jan 17 12:13:02.750485 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jan 17 12:13:02.750491 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Jan 17 12:13:02.750496 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jan 17 12:13:02.750501 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jan 17 12:13:02.750507 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jan 17 12:13:02.750512 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jan 17 12:13:02.750518 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jan 17 12:13:02.750523 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jan 17 12:13:02.750530 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jan 17 12:13:02.750535 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jan 17 12:13:02.750540 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jan 17 12:13:02.750546 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jan 17 12:13:02.750551 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jan 17 12:13:02.750557 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jan 17 12:13:02.750562 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jan 17 12:13:02.750567 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jan 17 12:13:02.750573 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jan 17 12:13:02.750578 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jan 17 12:13:02.750583 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jan 17 
12:13:02.750590 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jan 17 12:13:02.750595 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jan 17 12:13:02.750601 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jan 17 12:13:02.750606 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 17 12:13:02.750612 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 17 12:13:02.750617 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jan 17 12:13:02.750623 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jan 17 12:13:02.750629 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jan 17 12:13:02.750634 kernel: Zone ranges: Jan 17 12:13:02.750641 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:13:02.750647 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jan 17 12:13:02.750652 kernel: Normal empty Jan 17 12:13:02.750657 kernel: Movable zone start for each node Jan 17 12:13:02.750663 kernel: Early memory node ranges Jan 17 12:13:02.750669 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jan 17 12:13:02.750674 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jan 17 12:13:02.750679 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jan 17 12:13:02.750685 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jan 17 12:13:02.750692 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:13:02.750698 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jan 17 12:13:02.750703 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jan 17 12:13:02.750709 kernel: ACPI: PM-Timer IO Port: 0x1008 Jan 17 12:13:02.750714 kernel: system APIC only can use physical flat Jan 17 12:13:02.750720 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jan 17 12:13:02.750725 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 17 12:13:02.750731 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge 
lint[0x1]) Jan 17 12:13:02.750736 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 17 12:13:02.750741 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 17 12:13:02.750748 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 17 12:13:02.750754 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 17 12:13:02.750759 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 17 12:13:02.750764 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 17 12:13:02.750770 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 17 12:13:02.750775 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 17 12:13:02.750781 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 17 12:13:02.750786 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 17 12:13:02.750791 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 17 12:13:02.750797 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 17 12:13:02.750803 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 17 12:13:02.750809 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 17 12:13:02.750814 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jan 17 12:13:02.750820 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jan 17 12:13:02.750825 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jan 17 12:13:02.750831 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jan 17 12:13:02.750836 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jan 17 12:13:02.750842 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jan 17 12:13:02.750847 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jan 17 12:13:02.750854 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jan 17 12:13:02.750860 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jan 17 12:13:02.750865 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge 
lint[0x1]) Jan 17 12:13:02.750871 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jan 17 12:13:02.750876 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jan 17 12:13:02.750882 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jan 17 12:13:02.752920 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jan 17 12:13:02.752930 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jan 17 12:13:02.752936 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jan 17 12:13:02.752941 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jan 17 12:13:02.752949 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jan 17 12:13:02.752955 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jan 17 12:13:02.752960 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jan 17 12:13:02.752966 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jan 17 12:13:02.752971 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jan 17 12:13:02.752977 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jan 17 12:13:02.752982 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jan 17 12:13:02.752988 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jan 17 12:13:02.752993 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jan 17 12:13:02.753000 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jan 17 12:13:02.753005 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jan 17 12:13:02.753011 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jan 17 12:13:02.753016 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jan 17 12:13:02.753022 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jan 17 12:13:02.753027 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jan 17 12:13:02.753033 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jan 17 12:13:02.753038 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge 
lint[0x1]) Jan 17 12:13:02.753044 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jan 17 12:13:02.753049 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jan 17 12:13:02.753056 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jan 17 12:13:02.753062 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jan 17 12:13:02.753067 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jan 17 12:13:02.753073 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jan 17 12:13:02.753078 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jan 17 12:13:02.753084 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jan 17 12:13:02.753089 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jan 17 12:13:02.753095 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jan 17 12:13:02.753100 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jan 17 12:13:02.753107 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jan 17 12:13:02.753112 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jan 17 12:13:02.753118 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jan 17 12:13:02.753123 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jan 17 12:13:02.753129 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jan 17 12:13:02.753134 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jan 17 12:13:02.753139 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jan 17 12:13:02.753145 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jan 17 12:13:02.753150 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jan 17 12:13:02.753156 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jan 17 12:13:02.753163 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jan 17 12:13:02.753168 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jan 17 12:13:02.753174 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge 
lint[0x1]) Jan 17 12:13:02.753179 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jan 17 12:13:02.753185 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jan 17 12:13:02.753190 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jan 17 12:13:02.753196 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jan 17 12:13:02.753201 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jan 17 12:13:02.753206 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jan 17 12:13:02.753213 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jan 17 12:13:02.753219 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jan 17 12:13:02.753224 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jan 17 12:13:02.753230 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jan 17 12:13:02.753235 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jan 17 12:13:02.753241 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jan 17 12:13:02.753246 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jan 17 12:13:02.753252 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jan 17 12:13:02.753257 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jan 17 12:13:02.753263 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jan 17 12:13:02.753270 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jan 17 12:13:02.753275 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jan 17 12:13:02.753281 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jan 17 12:13:02.753286 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jan 17 12:13:02.753291 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jan 17 12:13:02.753297 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jan 17 12:13:02.753303 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jan 17 12:13:02.753308 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge 
lint[0x1]) Jan 17 12:13:02.753313 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jan 17 12:13:02.753319 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jan 17 12:13:02.753326 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jan 17 12:13:02.753331 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jan 17 12:13:02.753337 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jan 17 12:13:02.753342 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jan 17 12:13:02.753348 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jan 17 12:13:02.753353 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jan 17 12:13:02.753359 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jan 17 12:13:02.753364 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jan 17 12:13:02.753370 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jan 17 12:13:02.753376 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jan 17 12:13:02.753382 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jan 17 12:13:02.753387 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jan 17 12:13:02.753392 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jan 17 12:13:02.753398 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jan 17 12:13:02.753403 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jan 17 12:13:02.753409 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jan 17 12:13:02.753414 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jan 17 12:13:02.753420 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jan 17 12:13:02.753425 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jan 17 12:13:02.753432 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jan 17 12:13:02.753437 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jan 17 12:13:02.753442 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge 
lint[0x1]) Jan 17 12:13:02.753448 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jan 17 12:13:02.753453 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jan 17 12:13:02.753459 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jan 17 12:13:02.753464 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jan 17 12:13:02.753470 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jan 17 12:13:02.753475 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:13:02.753482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jan 17 12:13:02.753487 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:13:02.753493 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jan 17 12:13:02.753499 kernel: TSC deadline timer available Jan 17 12:13:02.753505 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jan 17 12:13:02.753510 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jan 17 12:13:02.753516 kernel: Booting paravirtualized kernel on VMware hypervisor Jan 17 12:13:02.753521 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:13:02.753527 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jan 17 12:13:02.753534 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 17 12:13:02.753540 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 17 12:13:02.753545 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jan 17 12:13:02.753551 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jan 17 12:13:02.753556 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jan 17 12:13:02.753561 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jan 17 12:13:02.753567 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jan 17 12:13:02.753580 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jan 17 12:13:02.753587 
kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jan 17 12:13:02.753594 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jan 17 12:13:02.753600 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jan 17 12:13:02.753606 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jan 17 12:13:02.753612 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jan 17 12:13:02.753618 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jan 17 12:13:02.753624 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jan 17 12:13:02.753630 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jan 17 12:13:02.753635 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jan 17 12:13:02.753641 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jan 17 12:13:02.753649 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:13:02.753656 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jan 17 12:13:02.753662 kernel: random: crng init done Jan 17 12:13:02.753668 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jan 17 12:13:02.753673 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jan 17 12:13:02.753679 kernel: printk: log_buf_len min size: 262144 bytes Jan 17 12:13:02.753685 kernel: printk: log_buf_len: 1048576 bytes Jan 17 12:13:02.753691 kernel: printk: early log buf free: 239648(91%) Jan 17 12:13:02.753698 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:13:02.753704 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 12:13:02.753711 kernel: Fallback order for Node 0: 0 Jan 17 12:13:02.753716 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jan 17 12:13:02.753722 kernel: Policy zone: DMA32 Jan 17 12:13:02.753728 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:13:02.753734 kernel: Memory: 1936364K/2096628K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 160004K reserved, 0K cma-reserved) Jan 17 12:13:02.753741 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jan 17 12:13:02.753748 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:13:02.753754 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:13:02.753760 kernel: Dynamic Preempt: voluntary Jan 17 12:13:02.753766 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:13:02.753772 kernel: rcu: RCU event tracing is enabled. Jan 17 12:13:02.753779 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jan 17 12:13:02.753785 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:13:02.753792 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:13:02.753798 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:13:02.753804 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 17 12:13:02.753810 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jan 17 12:13:02.753816 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jan 17 12:13:02.753822 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Jan 17 12:13:02.753828 kernel: Console: colour VGA+ 80x25
Jan 17 12:13:02.753834 kernel: printk: console [tty0] enabled
Jan 17 12:13:02.753840 kernel: printk: console [ttyS0] enabled
Jan 17 12:13:02.753847 kernel: ACPI: Core revision 20230628
Jan 17 12:13:02.753853 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jan 17 12:13:02.753859 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:13:02.753865 kernel: x2apic enabled
Jan 17 12:13:02.753871 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:13:02.753877 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 12:13:02.753883 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jan 17 12:13:02.755917 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jan 17 12:13:02.755928 kernel: Disabled fast string operations
Jan 17 12:13:02.755938 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 17 12:13:02.755944 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 17 12:13:02.755950 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:13:02.755956 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 17 12:13:02.755962 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 17 12:13:02.755968 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 17 12:13:02.755974 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:13:02.755980 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 17 12:13:02.755986 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 17 12:13:02.755994 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 12:13:02.756000 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 12:13:02.756006 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 12:13:02.756012 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 17 12:13:02.756018 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 17 12:13:02.756024 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:13:02.756030 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:13:02.756036 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:13:02.756042 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:13:02.756049 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 17 12:13:02.756055 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:13:02.756061 kernel: pid_max: default: 131072 minimum: 1024
Jan 17 12:13:02.756067 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:13:02.756073 kernel: landlock: Up and running.
Jan 17 12:13:02.756079 kernel: SELinux: Initializing.
Jan 17 12:13:02.756085 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:13:02.756091 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:13:02.756097 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jan 17 12:13:02.756105 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jan 17 12:13:02.756115 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jan 17 12:13:02.756125 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jan 17 12:13:02.756134 kernel: Performance Events: Skylake events, core PMU driver.
Jan 17 12:13:02.756140 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Jan 17 12:13:02.756146 kernel: core: CPUID marked event: 'instructions' unavailable
Jan 17 12:13:02.756151 kernel: core: CPUID marked event: 'bus cycles' unavailable
Jan 17 12:13:02.756157 kernel: core: CPUID marked event: 'cache references' unavailable
Jan 17 12:13:02.756165 kernel: core: CPUID marked event: 'cache misses' unavailable
Jan 17 12:13:02.756171 kernel: core: CPUID marked event: 'branch instructions' unavailable
Jan 17 12:13:02.756179 kernel: core: CPUID marked event: 'branch misses' unavailable
Jan 17 12:13:02.756188 kernel: ... version: 1
Jan 17 12:13:02.756199 kernel: ... bit width: 48
Jan 17 12:13:02.756207 kernel: ... generic registers: 4
Jan 17 12:13:02.756214 kernel: ... value mask: 0000ffffffffffff
Jan 17 12:13:02.756219 kernel: ... max period: 000000007fffffff
Jan 17 12:13:02.756225 kernel: ... fixed-purpose events: 0
Jan 17 12:13:02.756233 kernel: ... event mask: 000000000000000f
Jan 17 12:13:02.756239 kernel: signal: max sigframe size: 1776
Jan 17 12:13:02.756245 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:13:02.756252 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:13:02.756258 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 12:13:02.756263 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:13:02.756269 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:13:02.756275 kernel: .... node #0, CPUs: #1
Jan 17 12:13:02.756281 kernel: Disabled fast string operations
Jan 17 12:13:02.756287 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
Jan 17 12:13:02.756294 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 17 12:13:02.756300 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:13:02.756306 kernel: smpboot: Max logical packages: 128
Jan 17 12:13:02.756312 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Jan 17 12:13:02.756317 kernel: devtmpfs: initialized
Jan 17 12:13:02.756323 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:13:02.756329 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Jan 17 12:13:02.756335 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:13:02.756341 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Jan 17 12:13:02.756348 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:13:02.756354 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:13:02.756360 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:13:02.756366 kernel: audit: type=2000 audit(1737115981.069:1): state=initialized audit_enabled=0 res=1
Jan 17 12:13:02.756372 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:13:02.756378 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:13:02.756384 kernel: cpuidle: using governor menu
Jan 17 12:13:02.756390 kernel: Simple Boot Flag at 0x36 set to 0x80
Jan 17 12:13:02.756396 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:13:02.756404 kernel: dca service started, version 1.12.1
Jan 17 12:13:02.756410 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Jan 17 12:13:02.756416 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:13:02.756422 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:13:02.756428 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:13:02.756434 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:13:02.756440 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:13:02.756445 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:13:02.756452 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:13:02.756460 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:13:02.756469 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:13:02.756478 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:13:02.756487 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:13:02.756493 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jan 17 12:13:02.756499 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:13:02.756505 kernel: ACPI: Interpreter enabled
Jan 17 12:13:02.756511 kernel: ACPI: PM: (supports S0 S1 S5)
Jan 17 12:13:02.756517 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:13:02.756525 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:13:02.756531 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 12:13:02.756537 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Jan 17 12:13:02.756545 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Jan 17 12:13:02.756644 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:13:02.756701 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Jan 17 12:13:02.756751 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Jan 17 12:13:02.756762 kernel: PCI host bridge to bus 0000:00
Jan 17 12:13:02.756814 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:13:02.756859 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Jan 17 12:13:02.760097 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 12:13:02.760162 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:13:02.760207 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Jan 17 12:13:02.760252 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Jan 17 12:13:02.760317 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
Jan 17 12:13:02.760375 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
Jan 17 12:13:02.760430 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
Jan 17 12:13:02.760484 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
Jan 17 12:13:02.760534 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f]
Jan 17 12:13:02.760582 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 17 12:13:02.760634 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 17 12:13:02.760682 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 17 12:13:02.760744 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 17 12:13:02.760799 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
Jan 17 12:13:02.760863 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Jan 17 12:13:02.760950 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Jan 17 12:13:02.761017 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
Jan 17 12:13:02.761072 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf]
Jan 17 12:13:02.761133 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
Jan 17 12:13:02.761189 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
Jan 17 12:13:02.761246 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f]
Jan 17 12:13:02.761294 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
Jan 17 12:13:02.761343 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
Jan 17 12:13:02.761395 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Jan 17 12:13:02.761443 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 12:13:02.761809 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Jan 17 12:13:02.761865 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.761937 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.761996 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.762072 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.762133 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.762183 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.762236 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.762285 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.762348 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.762398 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.762453 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.762503 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.762555 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.762604 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.762657 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.762706 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.762761 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.762810 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.762863 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.762926 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.763937 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.764036 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.764112 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.764163 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.764215 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.764264 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.764317 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.764366 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.764422 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.764481 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.764536 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.764585 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.764647 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.764710 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.764768 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.764825 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.765951 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.766006 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.766061 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.766112 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.766170 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.766250 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.766318 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.766382 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.766452 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.766503 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.766560 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.766610 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.766663 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.766712 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.766774 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.766826 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.766882 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.766948 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.767001 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.767054 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.767108 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.767157 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.767217 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.767283 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.767354 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.767407 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.767460 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
Jan 17 12:13:02.767510 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.767561 kernel: pci_bus 0000:01: extended config space not accessible
Jan 17 12:13:02.767614 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 17 12:13:02.767664 kernel: pci_bus 0000:02: extended config space not accessible
Jan 17 12:13:02.767673 kernel: acpiphp: Slot [32] registered
Jan 17 12:13:02.767680 kernel: acpiphp: Slot [33] registered
Jan 17 12:13:02.767686 kernel: acpiphp: Slot [34] registered
Jan 17 12:13:02.767692 kernel: acpiphp: Slot [35] registered
Jan 17 12:13:02.767698 kernel: acpiphp: Slot [36] registered
Jan 17 12:13:02.767704 kernel: acpiphp: Slot [37] registered
Jan 17 12:13:02.767712 kernel: acpiphp: Slot [38] registered
Jan 17 12:13:02.767718 kernel: acpiphp: Slot [39] registered
Jan 17 12:13:02.767724 kernel: acpiphp: Slot [40] registered
Jan 17 12:13:02.767730 kernel: acpiphp: Slot [41] registered
Jan 17 12:13:02.767736 kernel: acpiphp: Slot [42] registered
Jan 17 12:13:02.767741 kernel: acpiphp: Slot [43] registered
Jan 17 12:13:02.767747 kernel: acpiphp: Slot [44] registered
Jan 17 12:13:02.767753 kernel: acpiphp: Slot [45] registered
Jan 17 12:13:02.767759 kernel: acpiphp: Slot [46] registered
Jan 17 12:13:02.767765 kernel: acpiphp: Slot [47] registered
Jan 17 12:13:02.767772 kernel: acpiphp: Slot [48] registered
Jan 17 12:13:02.767778 kernel: acpiphp: Slot [49] registered
Jan 17 12:13:02.767784 kernel: acpiphp: Slot [50] registered
Jan 17 12:13:02.767789 kernel: acpiphp: Slot [51] registered
Jan 17 12:13:02.767795 kernel: acpiphp: Slot [52] registered
Jan 17 12:13:02.767801 kernel: acpiphp: Slot [53] registered
Jan 17 12:13:02.767807 kernel: acpiphp: Slot [54] registered
Jan 17 12:13:02.767813 kernel: acpiphp: Slot [55] registered
Jan 17 12:13:02.767818 kernel: acpiphp: Slot [56] registered
Jan 17 12:13:02.767826 kernel: acpiphp: Slot [57] registered
Jan 17 12:13:02.767832 kernel: acpiphp: Slot [58] registered
Jan 17 12:13:02.767838 kernel: acpiphp: Slot [59] registered
Jan 17 12:13:02.767843 kernel: acpiphp: Slot [60] registered
Jan 17 12:13:02.767849 kernel: acpiphp: Slot [61] registered
Jan 17 12:13:02.767855 kernel: acpiphp: Slot [62] registered
Jan 17 12:13:02.767861 kernel: acpiphp: Slot [63] registered
Jan 17 12:13:02.769931 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Jan 17 12:13:02.769986 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Jan 17 12:13:02.770039 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Jan 17 12:13:02.770088 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Jan 17 12:13:02.770136 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Jan 17 12:13:02.770184 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
Jan 17 12:13:02.770232 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Jan 17 12:13:02.770280 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Jan 17 12:13:02.770339 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
Jan 17 12:13:02.770407 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
Jan 17 12:13:02.770470 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
Jan 17 12:13:02.770536 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
Jan 17 12:13:02.770587 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Jan 17 12:13:02.770637 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Jan 17 12:13:02.770687 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device.  You can enable it with 'pcie_aspm=force'
Jan 17 12:13:02.770738 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Jan 17 12:13:02.770786 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Jan 17 12:13:02.770839 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Jan 17 12:13:02.770897 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Jan 17 12:13:02.770948 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Jan 17 12:13:02.770997 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Jan 17 12:13:02.771050 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Jan 17 12:13:02.771101 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Jan 17 12:13:02.771149 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Jan 17 12:13:02.771215 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Jan 17 12:13:02.771268 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Jan 17 12:13:02.771318 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Jan 17 12:13:02.771367 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Jan 17 12:13:02.771414 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Jan 17 12:13:02.771474 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Jan 17 12:13:02.771526 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Jan 17 12:13:02.771588 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Jan 17 12:13:02.771642 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Jan 17 12:13:02.771691 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Jan 17 12:13:02.771739 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Jan 17 12:13:02.771787 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Jan 17 12:13:02.771835 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Jan 17 12:13:02.773333 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Jan 17 12:13:02.773395 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Jan 17 12:13:02.773446 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Jan 17 12:13:02.773494 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Jan 17 12:13:02.773552 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
Jan 17 12:13:02.773602 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
Jan 17 12:13:02.773651 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
Jan 17 12:13:02.773717 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
Jan 17 12:13:02.773769 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
Jan 17 12:13:02.773818 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Jan 17 12:13:02.773869 kernel: pci 0000:0b:00.0: supports D1 D2
Jan 17 12:13:02.773932 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 17 12:13:02.773983 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device.  You can enable it with 'pcie_aspm=force'
Jan 17 12:13:02.774043 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Jan 17 12:13:02.774094 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Jan 17 12:13:02.774148 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Jan 17 12:13:02.774199 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Jan 17 12:13:02.774249 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Jan 17 12:13:02.774298 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Jan 17 12:13:02.774348 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Jan 17 12:13:02.774397 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Jan 17 12:13:02.774447 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Jan 17 12:13:02.774496 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Jan 17 12:13:02.774547 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Jan 17 12:13:02.774597 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Jan 17 12:13:02.774645 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Jan 17 12:13:02.774694 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Jan 17 12:13:02.774743 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Jan 17 12:13:02.775158 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Jan 17 12:13:02.775221 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Jan 17 12:13:02.775277 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Jan 17 12:13:02.775327 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Jan 17 12:13:02.775375 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Jan 17 12:13:02.775425 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Jan 17 12:13:02.775474 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Jan 17 12:13:02.775523 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Jan 17 12:13:02.775573 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Jan 17 12:13:02.775623 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Jan 17 12:13:02.775674 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Jan 17 12:13:02.775724 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Jan 17 12:13:02.775773 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Jan 17 12:13:02.775822 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Jan 17 12:13:02.775883 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Jan 17 12:13:02.775944 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Jan 17 12:13:02.775993 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Jan 17 12:13:02.776042 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Jan 17 12:13:02.776093 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Jan 17 12:13:02.776143 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Jan 17 12:13:02.776191 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Jan 17 12:13:02.776240 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Jan 17 12:13:02.776300 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Jan 17 12:13:02.776351 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Jan 17 12:13:02.776399 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Jan 17 12:13:02.776448 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Jan 17 12:13:02.776510 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Jan 17 12:13:02.776560 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Jan 17 12:13:02.776609 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Jan 17 12:13:02.776659 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Jan 17 12:13:02.776708 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Jan 17 12:13:02.776756 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Jan 17 12:13:02.776846 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Jan 17 12:13:02.779398 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Jan 17 12:13:02.779465 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Jan 17 12:13:02.779521 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Jan 17 12:13:02.779572 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Jan 17 12:13:02.779622 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Jan 17 12:13:02.779673 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Jan 17 12:13:02.779722 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Jan 17 12:13:02.779770 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Jan 17 12:13:02.779820 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Jan 17 12:13:02.780207 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Jan 17 12:13:02.780293 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Jan 17 12:13:02.780363 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Jan 17 12:13:02.780413 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Jan 17 12:13:02.780467 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Jan 17 12:13:02.780526 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Jan 17 12:13:02.780575 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Jan 17 12:13:02.780628 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Jan 17 12:13:02.780678 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Jan 17 12:13:02.780726 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Jan 17 12:13:02.780777 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Jan 17 12:13:02.780826 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Jan 17 12:13:02.780874 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Jan 17 12:13:02.780937 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Jan 17 12:13:02.780988 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Jan 17 12:13:02.781045 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Jan 17 12:13:02.781096 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Jan 17 12:13:02.781156 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Jan 17 12:13:02.781208 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Jan 17 12:13:02.781259 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Jan 17 12:13:02.781307 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Jan 17 12:13:02.781355 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Jan 17 12:13:02.781364 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Jan 17 12:13:02.781373 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Jan 17 12:13:02.781379 kernel: ACPI: PCI: Interrupt link LNKB disabled
Jan 17 12:13:02.781385 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:13:02.781391 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Jan 17 12:13:02.781397 kernel: iommu: Default domain type: Translated
Jan 17 12:13:02.781403 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:13:02.781409 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:13:02.781415 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:13:02.781421 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Jan 17 12:13:02.781429 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Jan 17 12:13:02.781478 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Jan 17 12:13:02.781527 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Jan 17 12:13:02.781575 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 12:13:02.781584 kernel: vgaarb: loaded
Jan 17 12:13:02.781591 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Jan 17 12:13:02.781597 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Jan 17 12:13:02.781603 kernel: clocksource: Switched to clocksource tsc-early
Jan 17 12:13:02.781609 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:13:02.781617 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:13:02.781623 kernel: pnp: PnP ACPI init
Jan 17 12:13:02.781674 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Jan 17 12:13:02.781720 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Jan 17 12:13:02.781765 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Jan 17 12:13:02.781812 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Jan 17 12:13:02.781861 kernel: pnp 00:06: [dma 2]
Jan 17 12:13:02.781928 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Jan 17 12:13:02.781976 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Jan 17 12:13:02.782020 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Jan 17 12:13:02.782028 kernel: pnp: PnP ACPI: found 8 devices
Jan 17 12:13:02.782034 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:13:02.782040 kernel: NET: Registered PF_INET protocol family
Jan 17 12:13:02.782047 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:13:02.782053 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 12:13:02.782062 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:13:02.782068 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 12:13:02.782074 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 12:13:02.782080 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 12:13:02.782086 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:13:02.782092 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:13:02.782098 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:13:02.782104 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:13:02.782154 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Jan 17 12:13:02.782208 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 17 12:13:02.782259 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 17 12:13:02.782310 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 17 12:13:02.782361 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 17 12:13:02.782412 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Jan 17 12:13:02.782464 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Jan 17 12:13:02.782515 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Jan 17 12:13:02.782564 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Jan 17 12:13:02.782614 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Jan 17 12:13:02.782663 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Jan 17 12:13:02.782712 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Jan 17 12:13:02.782764 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Jan 17 12:13:02.782813 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Jan 17 12:13:02.782862 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Jan 17 12:13:02.783226 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Jan 17 12:13:02.783281 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Jan 17 12:13:02.783332 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Jan 17 12:13:02.783385 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Jan 17 12:13:02.783434 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Jan 17 12:13:02.783482 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Jan 17 12:13:02.783531 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Jan 17 12:13:02.783580 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Jan 17 12:13:02.783629 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
Jan 17 12:13:02.783680 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
Jan 17 12:13:02.783729 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Jan 17 12:13:02.783779 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Jan 17 12:13:02.783828 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Jan 17 12:13:02.783876 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Jan 17 12:13:02.783934 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Jan 17 12:13:02.783983 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Jan 17 12:13:02.784036 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Jan 17 12:13:02.784086 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Jan
17 12:13:02.784134 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.784182 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.784245 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.784297 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.784345 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.784399 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.784453 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.784505 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.784554 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.784615 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.786932 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.786993 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787051 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787102 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787152 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787205 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787254 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787317 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787369 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787417 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787468 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787516 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787565 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787616 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787666 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787714 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787762 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787810 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787858 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787922 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787974 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788041 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788092 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788141 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788189 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788238 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788287 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788335 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788400 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788451 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788503 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788551 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788600 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788648 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788696 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Jan 17 12:13:02.788744 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788793 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788842 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.789216 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.789280 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.789333 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.789382 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.789434 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.789510 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.789563 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.789611 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.789661 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.789710 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.789760 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.789813 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.789863 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.789979 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790030 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.790078 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790127 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.790175 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790224 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.790272 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790324 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.790372 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790422 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.790469 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790518 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.790579 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790632 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.790682 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790731 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 17 12:13:02.790781 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jan 17 12:13:02.790832 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 17 12:13:02.790880 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 17 12:13:02.790983 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 17 12:13:02.791041 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jan 17 12:13:02.791092 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 17 12:13:02.791139 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 17 12:13:02.791187 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 17 12:13:02.791249 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jan 17 12:13:02.791303 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 17 12:13:02.791352 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 17 12:13:02.791400 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 17 12:13:02.791448 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 17 
12:13:02.791507 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 17 12:13:02.791557 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 17 12:13:02.791605 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 17 12:13:02.791666 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 17 12:13:02.791716 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 17 12:13:02.791767 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 17 12:13:02.791816 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 17 12:13:02.791865 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 17 12:13:02.791921 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 17 12:13:02.791969 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 17 12:13:02.792022 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 17 12:13:02.792073 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 17 12:13:02.792121 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 17 12:13:02.792170 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 17 12:13:02.792219 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 17 12:13:02.792267 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 17 12:13:02.792316 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 17 12:13:02.792364 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 17 12:13:02.792413 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 17 12:13:02.792465 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jan 17 12:13:02.792517 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 17 12:13:02.792565 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 17 12:13:02.792614 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Jan 17 12:13:02.792663 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jan 17 12:13:02.792726 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 17 12:13:02.792778 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 17 12:13:02.792827 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 17 12:13:02.792876 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 17 12:13:02.794956 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 17 12:13:02.795015 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 17 12:13:02.795070 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 17 12:13:02.795121 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 17 12:13:02.795181 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 17 12:13:02.795245 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 17 12:13:02.795300 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 17 12:13:02.795350 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 17 12:13:02.795399 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 17 12:13:02.795447 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 17 12:13:02.795496 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 17 12:13:02.795547 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 17 12:13:02.795596 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 17 12:13:02.795644 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 17 12:13:02.795693 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 17 12:13:02.795741 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 17 12:13:02.795790 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 17 12:13:02.795839 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Jan 17 12:13:02.795971 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 17 12:13:02.796030 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 17 12:13:02.796079 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 17 12:13:02.796130 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 17 12:13:02.796179 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 17 12:13:02.796229 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 17 12:13:02.796278 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 17 12:13:02.796326 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 17 12:13:02.796374 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 17 12:13:02.796425 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 17 12:13:02.796474 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 17 12:13:02.796522 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 17 12:13:02.796588 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 17 12:13:02.796639 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 17 12:13:02.796687 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 17 12:13:02.796737 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 17 12:13:02.796787 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 17 12:13:02.796835 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 17 12:13:02.796883 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 17 12:13:02.796943 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 17 12:13:02.797007 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 17 12:13:02.797064 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 17 
12:13:02.797117 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 17 12:13:02.797167 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 17 12:13:02.797216 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 17 12:13:02.797265 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 17 12:13:02.797313 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 17 12:13:02.797363 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 17 12:13:02.797413 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 17 12:13:02.797462 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 17 12:13:02.797521 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 17 12:13:02.797574 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 17 12:13:02.797624 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 17 12:13:02.797673 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 17 12:13:02.797721 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 17 12:13:02.797769 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 17 12:13:02.797830 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 17 12:13:02.797882 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 17 12:13:02.797939 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 17 12:13:02.798262 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 17 12:13:02.798315 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 17 12:13:02.798369 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 17 12:13:02.798434 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 17 12:13:02.798720 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 17 12:13:02.798784 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Jan 17 12:13:02.798839 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 17 12:13:02.798897 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 17 12:13:02.798956 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 17 12:13:02.799011 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 17 12:13:02.799060 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 17 12:13:02.799111 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 17 12:13:02.799162 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 17 12:13:02.799211 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 17 12:13:02.799260 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 17 12:13:02.799309 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jan 17 12:13:02.799353 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 17 12:13:02.799396 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 17 12:13:02.799439 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jan 17 12:13:02.799481 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jan 17 12:13:02.799531 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jan 17 12:13:02.799577 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jan 17 12:13:02.799622 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 17 12:13:02.799666 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jan 17 12:13:02.799710 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 17 12:13:02.799755 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 17 12:13:02.799799 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jan 17 12:13:02.799846 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jan 17 12:13:02.799926 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jan 17 12:13:02.799973 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jan 17 12:13:02.800018 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jan 17 12:13:02.800066 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jan 17 12:13:02.800112 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jan 17 12:13:02.800155 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jan 17 12:13:02.800206 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jan 17 12:13:02.800251 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jan 17 12:13:02.800294 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jan 17 12:13:02.800343 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jan 17 12:13:02.800387 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jan 17 12:13:02.800436 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jan 17 12:13:02.800483 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 17 12:13:02.800534 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jan 17 12:13:02.800580 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jan 17 12:13:02.800628 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jan 17 12:13:02.800673 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jan 17 12:13:02.800725 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jan 17 12:13:02.800780 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jan 17 12:13:02.800837 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jan 17 12:13:02.800883 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jan 17 12:13:02.801350 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jan 17 12:13:02.801405 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Jan 17 12:13:02.801452 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jan 17 12:13:02.801501 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jan 17 12:13:02.801550 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jan 17 12:13:02.801595 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jan 17 12:13:02.801643 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jan 17 12:13:02.801692 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jan 17 12:13:02.801737 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 17 12:13:02.801787 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jan 17 12:13:02.801836 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 17 12:13:02.801885 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jan 17 12:13:02.801945 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jan 17 12:13:02.801994 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jan 17 12:13:02.802039 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jan 17 12:13:02.802088 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jan 17 12:13:02.802136 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 17 12:13:02.802184 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jan 17 12:13:02.802230 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jan 17 12:13:02.802275 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 17 12:13:02.802326 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jan 17 12:13:02.802372 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jan 17 12:13:02.802417 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jan 17 12:13:02.802468 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Jan 17 12:13:02.802513 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jan 17 12:13:02.802558 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jan 17 12:13:02.802606 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jan 17 12:13:02.802651 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 17 12:13:02.802699 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jan 17 12:13:02.802976 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 17 12:13:02.803027 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jan 17 12:13:02.803073 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jan 17 12:13:02.803123 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jan 17 12:13:02.803168 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jan 17 12:13:02.803216 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jan 17 12:13:02.803261 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 17 12:13:02.803315 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 17 12:13:02.803359 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jan 17 12:13:02.803403 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jan 17 12:13:02.803455 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jan 17 12:13:02.803500 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jan 17 12:13:02.803547 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jan 17 12:13:02.803597 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jan 17 12:13:02.803642 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jan 17 12:13:02.803692 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jan 17 12:13:02.803736 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Jan 17 12:13:02.803785 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jan 17 12:13:02.803831 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jan 17 12:13:02.803883 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jan 17 12:13:02.803991 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jan 17 12:13:02.804046 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jan 17 12:13:02.804091 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jan 17 12:13:02.804159 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jan 17 12:13:02.804569 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 17 12:13:02.804632 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 12:13:02.804644 kernel: PCI: CLS 32 bytes, default 64 Jan 17 12:13:02.804651 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 12:13:02.804658 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 17 12:13:02.804664 kernel: clocksource: Switched to clocksource tsc Jan 17 12:13:02.804671 kernel: Initialise system trusted keyrings Jan 17 12:13:02.804677 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 12:13:02.804683 kernel: Key type asymmetric registered Jan 17 12:13:02.804691 kernel: Asymmetric key parser 'x509' registered Jan 17 12:13:02.804698 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:13:02.804704 kernel: io scheduler mq-deadline registered Jan 17 12:13:02.804710 kernel: io scheduler kyber registered Jan 17 12:13:02.804717 kernel: io scheduler bfq registered Jan 17 12:13:02.804771 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jan 17 12:13:02.804823 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.804876 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jan 17 12:13:02.804941 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.804997 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jan 17 12:13:02.805048 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.805099 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jan 17 12:13:02.805363 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.805422 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jan 17 12:13:02.805483 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.805540 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jan 17 12:13:02.805591 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.805645 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jan 17 12:13:02.805695 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.805746 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jan 17 12:13:02.805799 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.805849 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jan 17 12:13:02.806337 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.806412 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jan 17 12:13:02.806483 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.806548 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jan 17 12:13:02.806602 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.806665 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jan 17 12:13:02.806725 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.806786 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jan 17 12:13:02.806839 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.806914 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jan 17 12:13:02.806978 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807051 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jan 17 12:13:02.807121 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807181 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jan 17 12:13:02.807233 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807284 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jan 17 12:13:02.807337 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807387 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jan 17 12:13:02.807453 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807512 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jan 17 12:13:02.807562 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807614 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jan 17 12:13:02.807664 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807718 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jan 17 12:13:02.807768 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807819 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jan 17 12:13:02.807872 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808001 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jan 17 12:13:02.808067 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808118 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jan 17 12:13:02.808175 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808226 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jan 17 12:13:02.808276 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808326 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jan 17 12:13:02.808377 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808428 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jan 17 12:13:02.808489 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808546 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jan 17 12:13:02.808595 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808646 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jan 17 12:13:02.808699 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808749 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jan 17 12:13:02.808799 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808848 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jan 17 12:13:02.808904 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808958 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jan 17 12:13:02.809008 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.809018 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Jan 17 12:13:02.809025 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:13:02.809032 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:13:02.809039 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jan 17 12:13:02.809045 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:13:02.809053 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:13:02.809104 kernel: rtc_cmos 00:01: registered as rtc0 Jan 17 12:13:02.809152 kernel: rtc_cmos 00:01: setting system clock to 2025-01-17T12:13:02 UTC (1737115982) Jan 17 12:13:02.809167 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:13:02.809229 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jan 17 12:13:02.809239 kernel: intel_pstate: CPU model not supported Jan 17 12:13:02.809245 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:13:02.809252 kernel: Segment Routing with IPv6 Jan 17 12:13:02.809260 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:13:02.809267 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:13:02.809273 kernel: Key type dns_resolver registered Jan 17 12:13:02.809280 kernel: IPI shorthand broadcast: enabled Jan 17 12:13:02.809286 kernel: sched_clock: Marking stable (965004083, 239576276)->(1268239736, -63659377) Jan 17 12:13:02.809293 kernel: registered taskstats version 1 Jan 17 12:13:02.809300 kernel: Loading compiled-in X.509 certificates Jan 17 12:13:02.809306 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:13:02.809313 kernel: Key type .fscrypt registered Jan 17 12:13:02.809321 kernel: Key type fscrypt-provisioning registered Jan 17 12:13:02.809327 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 12:13:02.809333 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:13:02.809340 kernel: ima: No architecture policies found Jan 17 12:13:02.809346 kernel: clk: Disabling unused clocks Jan 17 12:13:02.809352 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:13:02.809359 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:13:02.809365 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:13:02.809371 kernel: Run /init as init process Jan 17 12:13:02.809379 kernel: with arguments: Jan 17 12:13:02.809386 kernel: /init Jan 17 12:13:02.809392 kernel: with environment: Jan 17 12:13:02.809398 kernel: HOME=/ Jan 17 12:13:02.809404 kernel: TERM=linux Jan 17 12:13:02.809410 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:13:02.809418 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:13:02.809426 systemd[1]: Detected virtualization vmware. Jan 17 12:13:02.809434 systemd[1]: Detected architecture x86-64. Jan 17 12:13:02.809440 systemd[1]: Running in initrd. Jan 17 12:13:02.809447 systemd[1]: No hostname configured, using default hostname. Jan 17 12:13:02.809453 systemd[1]: Hostname set to . Jan 17 12:13:02.809462 systemd[1]: Initializing machine ID from random generator. Jan 17 12:13:02.809472 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:13:02.809482 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:13:02.809489 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 17 12:13:02.809498 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:13:02.809505 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:13:02.809512 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:13:02.809519 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:13:02.809527 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:13:02.809533 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:13:02.809541 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:13:02.809553 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:13:02.809565 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:13:02.809572 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:13:02.809579 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:13:02.809585 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:13:02.809592 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:13:02.809598 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:13:02.809605 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:13:02.809613 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:13:02.809620 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:13:02.809627 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:13:02.809633 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 17 12:13:02.809639 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:13:02.809646 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:13:02.809653 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:13:02.809659 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:13:02.809667 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:13:02.809675 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:13:02.809682 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:13:02.809702 systemd-journald[214]: Collecting audit messages is disabled. Jan 17 12:13:02.809719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:13:02.809727 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:13:02.809734 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:13:02.809740 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:13:02.809748 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:13:02.809756 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:13:02.809763 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:13:02.809770 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:13:02.809777 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:13:02.809783 kernel: Bridge firewalling registered Jan 17 12:13:02.809790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 17 12:13:02.809797 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:13:02.809804 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:13:02.809811 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:13:02.809819 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:13:02.809826 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:13:02.809833 systemd-journald[214]: Journal started Jan 17 12:13:02.809849 systemd-journald[214]: Runtime Journal (/run/log/journal/6f36f4e4e9e447c8b65fd678048ad031) is 4.8M, max 38.6M, 33.8M free. Jan 17 12:13:02.765044 systemd-modules-load[215]: Inserted module 'overlay' Jan 17 12:13:02.786523 systemd-modules-load[215]: Inserted module 'br_netfilter' Jan 17 12:13:02.815024 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:13:02.815048 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:13:02.817362 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:13:02.818958 dracut-cmdline[235]: dracut-dracut-053 Jan 17 12:13:02.820219 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:13:02.825374 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:13:02.829986 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 17 12:13:02.849750 systemd-resolved[271]: Positive Trust Anchors: Jan 17 12:13:02.849762 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:13:02.849786 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:13:02.851683 systemd-resolved[271]: Defaulting to hostname 'linux'. Jan 17 12:13:02.852551 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:13:02.852694 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:13:02.873901 kernel: SCSI subsystem initialized Jan 17 12:13:02.879899 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:13:02.887908 kernel: iscsi: registered transport (tcp) Jan 17 12:13:02.901246 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:13:02.901296 kernel: QLogic iSCSI HBA Driver Jan 17 12:13:02.921392 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:13:02.929015 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:13:02.944788 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 17 12:13:02.944838 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:13:02.944847 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:13:02.975917 kernel: raid6: avx2x4 gen() 51168 MB/s Jan 17 12:13:02.992931 kernel: raid6: avx2x2 gen() 51760 MB/s Jan 17 12:13:03.010164 kernel: raid6: avx2x1 gen() 40876 MB/s Jan 17 12:13:03.010222 kernel: raid6: using algorithm avx2x2 gen() 51760 MB/s Jan 17 12:13:03.028160 kernel: raid6: .... xor() 31149 MB/s, rmw enabled Jan 17 12:13:03.028217 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:13:03.041912 kernel: xor: automatically using best checksumming function avx Jan 17 12:13:03.141911 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:13:03.147628 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:13:03.152015 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:13:03.160088 systemd-udevd[431]: Using default interface naming scheme 'v255'. Jan 17 12:13:03.162630 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:13:03.170065 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:13:03.177032 dracut-pre-trigger[436]: rd.md=0: removing MD RAID activation Jan 17 12:13:03.194268 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:13:03.199065 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:13:03.269935 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:13:03.274032 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:13:03.281630 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:13:03.282287 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 17 12:13:03.282763 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:13:03.284120 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:13:03.289018 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:13:03.298416 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:13:03.341909 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jan 17 12:13:03.344760 kernel: vmw_pvscsi: using 64bit dma Jan 17 12:13:03.344790 kernel: vmw_pvscsi: max_id: 16 Jan 17 12:13:03.344798 kernel: vmw_pvscsi: setting ring_pages to 8 Jan 17 12:13:03.350926 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jan 17 12:13:03.355220 kernel: vmw_pvscsi: enabling reqCallThreshold Jan 17 12:13:03.355255 kernel: vmw_pvscsi: driver-based request coalescing enabled Jan 17 12:13:03.355264 kernel: vmw_pvscsi: using MSI-X Jan 17 12:13:03.356512 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jan 17 12:13:03.357491 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jan 17 12:13:03.360570 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jan 17 12:13:03.360591 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jan 17 12:13:03.379281 kernel: libata version 3.00 loaded. Jan 17 12:13:03.379297 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jan 17 12:13:03.379385 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:13:03.379394 kernel: ata_piix 0000:00:07.1: version 2.13 Jan 17 12:13:03.388867 kernel: scsi host1: ata_piix Jan 17 12:13:03.388964 kernel: scsi host2: ata_piix Jan 17 12:13:03.389025 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jan 17 12:13:03.389034 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jan 17 12:13:03.389041 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 17 12:13:03.389049 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jan 17 12:13:03.389121 kernel: AES CTR mode by8 optimization enabled Jan 17 12:13:03.391829 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:13:03.391916 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:13:03.392418 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:13:03.392525 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:13:03.392619 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:13:03.392755 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:13:03.396063 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:13:03.410899 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:13:03.413997 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:13:03.425401 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:13:03.552906 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jan 17 12:13:03.558899 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jan 17 12:13:03.571324 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jan 17 12:13:03.577849 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 12:13:03.577932 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jan 17 12:13:03.578000 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jan 17 12:13:03.578099 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jan 17 12:13:03.578159 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:13:03.578168 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 12:13:03.578226 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jan 17 12:13:03.598922 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 12:13:03.598936 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 17 12:13:03.777906 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (481) Jan 17 12:13:03.784435 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jan 17 12:13:03.790119 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jan 17 12:13:03.799109 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jan 17 12:13:03.833903 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (489) Jan 17 12:13:03.839255 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jan 17 12:13:03.839423 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jan 17 12:13:03.848990 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 17 12:13:04.376905 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:13:04.424925 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:13:05.447903 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:13:05.448750 disk-uuid[593]: The operation has completed successfully. Jan 17 12:13:05.479174 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:13:05.479233 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:13:05.486047 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:13:05.489760 sh[610]: Success Jan 17 12:13:05.497923 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 12:13:05.533875 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:13:05.542185 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:13:05.544112 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:13:05.560909 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:13:05.560944 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:13:05.560952 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:13:05.560960 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:13:05.562329 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:13:05.568904 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 12:13:05.569513 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:13:05.579025 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jan 17 12:13:05.580343 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 17 12:13:05.596907 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:13:05.596943 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:13:05.596951 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:13:05.601900 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:13:05.608577 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:13:05.610900 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:13:05.618143 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:13:05.622326 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:13:05.642789 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 17 12:13:05.651100 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 17 12:13:05.698462 ignition[672]: Ignition 2.19.0 Jan 17 12:13:05.698469 ignition[672]: Stage: fetch-offline Jan 17 12:13:05.698492 ignition[672]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:13:05.698498 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 17 12:13:05.698568 ignition[672]: parsed url from cmdline: "" Jan 17 12:13:05.698569 ignition[672]: no config URL provided Jan 17 12:13:05.698573 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:13:05.698578 ignition[672]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:13:05.698941 ignition[672]: config successfully fetched Jan 17 12:13:05.698959 ignition[672]: parsing config with SHA512: fb9cf2678ec862f5f690a519694ba27fd6941f98e785d0bf41cf610e27dfa6053d1754d2cbb0f36b76d9926dc6edd41fe94fdd6cba5ff460f7195db7593ff065 Jan 17 12:13:05.702793 unknown[672]: fetched base config from "system" Jan 17 12:13:05.702944 unknown[672]: fetched user config from "vmware" Jan 17 12:13:05.703377 ignition[672]: fetch-offline: fetch-offline passed Jan 17 12:13:05.703554 ignition[672]: Ignition finished successfully Jan 17 12:13:05.704251 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:13:05.720234 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:13:05.725985 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:13:05.738873 systemd-networkd[806]: lo: Link UP Jan 17 12:13:05.738880 systemd-networkd[806]: lo: Gained carrier Jan 17 12:13:05.739737 systemd-networkd[806]: Enumeration completed Jan 17 12:13:05.739868 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:13:05.740083 systemd-networkd[806]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jan 17 12:13:05.740132 systemd[1]: Reached target network.target - Network. 
Jan 17 12:13:05.740358 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 12:13:05.743408 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jan 17 12:13:05.743546 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jan 17 12:13:05.743777 systemd-networkd[806]: ens192: Link UP Jan 17 12:13:05.743782 systemd-networkd[806]: ens192: Gained carrier Jan 17 12:13:05.745743 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:13:05.753037 ignition[808]: Ignition 2.19.0 Jan 17 12:13:05.753044 ignition[808]: Stage: kargs Jan 17 12:13:05.753185 ignition[808]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:13:05.753192 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 17 12:13:05.753780 ignition[808]: kargs: kargs passed Jan 17 12:13:05.753806 ignition[808]: Ignition finished successfully Jan 17 12:13:05.754883 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:13:05.759004 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:13:05.765658 ignition[815]: Ignition 2.19.0 Jan 17 12:13:05.765668 ignition[815]: Stage: disks Jan 17 12:13:05.765860 ignition[815]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:13:05.765867 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 17 12:13:05.767178 ignition[815]: disks: disks passed Jan 17 12:13:05.767206 ignition[815]: Ignition finished successfully Jan 17 12:13:05.767936 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:13:05.768294 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:13:05.768525 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:13:05.768738 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 17 12:13:05.768946 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:13:05.769153 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:13:05.771967 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:13:05.784639 systemd-fsck[824]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 12:13:05.786616 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:13:05.795985 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:13:05.834189 systemd-resolved[271]: Detected conflict on linux IN A 139.178.70.104 Jan 17 12:13:05.834199 systemd-resolved[271]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Jan 17 12:13:05.861906 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:13:05.862201 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:13:05.862654 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:13:05.873991 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:13:05.875288 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:13:05.875585 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:13:05.875614 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:13:05.875628 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:13:05.879825 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:13:05.881597 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 17 12:13:05.881978 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (832) Jan 17 12:13:05.886018 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:13:05.886042 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:13:05.886051 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:13:05.889941 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:13:05.890646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:13:05.909195 initrd-setup-root[856]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:13:05.912325 initrd-setup-root[863]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:13:05.914759 initrd-setup-root[870]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:13:05.916935 initrd-setup-root[877]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:13:05.969552 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:13:05.975987 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:13:05.978409 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:13:05.981896 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:13:05.994136 ignition[945]: INFO : Ignition 2.19.0 Jan 17 12:13:05.994136 ignition[945]: INFO : Stage: mount Jan 17 12:13:05.994136 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:13:05.994136 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 17 12:13:05.995387 ignition[945]: INFO : mount: mount passed Jan 17 12:13:05.995387 ignition[945]: INFO : Ignition finished successfully Jan 17 12:13:05.996077 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:13:06.000023 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Jan 17 12:13:06.000408 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:13:06.558108 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:13:06.563077 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:13:06.569905 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (956) Jan 17 12:13:06.573106 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:13:06.573159 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:13:06.573167 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:13:06.578910 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:13:06.580190 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:13:06.595510 ignition[973]: INFO : Ignition 2.19.0 Jan 17 12:13:06.595786 ignition[973]: INFO : Stage: files Jan 17 12:13:06.595994 ignition[973]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:13:06.596119 ignition[973]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 17 12:13:06.596855 ignition[973]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:13:06.605784 ignition[973]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:13:06.605948 ignition[973]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:13:06.647244 ignition[973]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:13:06.647592 ignition[973]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:13:06.648036 unknown[973]: wrote ssh authorized keys file for user: core Jan 17 12:13:06.648311 ignition[973]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:13:06.675639 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing 
file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:13:06.675639 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:13:06.675639 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:13:06.675639 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:13:06.724876 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:13:07.150085 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:13:07.335316 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:13:07.335316 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 17 12:13:07.335316 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 17 12:13:07.335316 ignition[973]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(d): op(e): [started] 
writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:13:07.341807 ignition[973]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:13:07.341807 ignition[973]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 17 12:13:07.341807 ignition[973]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 12:13:07.433980 systemd-networkd[806]: ens192: Gained IPv6LL Jan 17 12:13:07.986383 ignition[973]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:13:07.988897 ignition[973]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) 
for "coreos-metadata.service" Jan 17 12:13:07.988897 ignition[973]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 12:13:07.988897 ignition[973]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:13:07.988897 ignition[973]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:13:07.988897 ignition[973]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:13:07.988897 ignition[973]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:13:07.988897 ignition[973]: INFO : files: files passed Jan 17 12:13:07.988897 ignition[973]: INFO : Ignition finished successfully Jan 17 12:13:07.990694 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:13:07.994990 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:13:07.996233 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:13:08.008492 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:13:08.008492 initrd-setup-root-after-ignition[1002]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:13:08.009589 initrd-setup-root-after-ignition[1006]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:13:08.010384 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:13:08.010857 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:13:08.014002 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:13:08.014479 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 17 12:13:08.014528 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:13:08.029013 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:13:08.029078 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:13:08.029478 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:13:08.029598 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:13:08.029798 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:13:08.030242 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:13:08.040118 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:13:08.044990 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:13:08.050446 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:13:08.050616 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:13:08.050838 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:13:08.051221 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:13:08.051289 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:13:08.051558 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:13:08.051805 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:13:08.051992 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:13:08.052178 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:13:08.052384 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:13:08.052597 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 17 12:13:08.052788 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:13:08.053005 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:13:08.053227 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:13:08.053416 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:13:08.053589 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:13:08.053648 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:13:08.053930 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:13:08.054156 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:13:08.054352 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:13:08.054395 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:13:08.054563 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:13:08.054624 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:13:08.054882 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:13:08.054951 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:13:08.055196 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:13:08.055339 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:13:08.058911 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:13:08.059080 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:13:08.059307 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:13:08.059482 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:13:08.059531 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 17 12:13:08.059676 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:13:08.059722 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:13:08.059879 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:13:08.059956 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:13:08.060202 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:13:08.060261 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:13:08.069130 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:13:08.069360 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:13:08.069606 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:13:08.072019 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:13:08.072262 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:13:08.072489 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:13:08.072826 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:13:08.073059 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:13:08.076011 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:13:08.076185 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 17 12:13:08.077279 ignition[1027]: INFO : Ignition 2.19.0 Jan 17 12:13:08.077279 ignition[1027]: INFO : Stage: umount Jan 17 12:13:08.077279 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:13:08.077279 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 17 12:13:08.077279 ignition[1027]: INFO : umount: umount passed Jan 17 12:13:08.077279 ignition[1027]: INFO : Ignition finished successfully Jan 17 12:13:08.082218 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:13:08.082270 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:13:08.082647 systemd[1]: Stopped target network.target - Network. Jan 17 12:13:08.083269 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:13:08.083302 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:13:08.083418 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:13:08.083440 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:13:08.083549 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:13:08.083570 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:13:08.083675 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:13:08.083695 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:13:08.083874 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:13:08.085171 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:13:08.092014 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:13:08.092095 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:13:08.092989 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:13:08.093019 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 17 12:13:08.093541 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:13:08.093596 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:13:08.093896 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:13:08.093926 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:13:08.100075 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:13:08.100288 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:13:08.100437 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:13:08.100699 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jan 17 12:13:08.100852 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 17 12:13:08.101098 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:13:08.101119 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:13:08.101329 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:13:08.101350 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:13:08.101717 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:13:08.106847 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:13:08.106916 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:13:08.111183 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:13:08.111264 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:13:08.111715 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:13:08.111740 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 17 12:13:08.111953 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:13:08.111969 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:13:08.112125 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:13:08.112147 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:13:08.112427 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:13:08.112448 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:13:08.112730 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:13:08.112751 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:13:08.116982 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:13:08.117104 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:13:08.117135 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:13:08.117267 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:13:08.117288 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:13:08.117996 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:13:08.119883 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:13:08.119949 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:13:08.480057 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:13:08.480132 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:13:08.480449 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:13:08.480565 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:13:08.480591 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jan 17 12:13:08.484975 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:13:08.510999 systemd[1]: Switching root. Jan 17 12:13:08.532066 systemd-journald[214]: Journal stopped Jan 17 12:13:02.749577 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:13:02.749594 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:13:02.749601 kernel: Disabled fast string operations Jan 17 12:13:02.749605 kernel: BIOS-provided physical RAM map: Jan 17 12:13:02.749609 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jan 17 12:13:02.749613 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jan 17 12:13:02.749619 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jan 17 12:13:02.749623 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jan 17 12:13:02.749628 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jan 17 12:13:02.749632 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jan 17 12:13:02.749636 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jan 17 12:13:02.749640 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jan 17 12:13:02.749644 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jan 17 12:13:02.749648 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jan 17 12:13:02.749655 kernel: BIOS-e820: [mem 
0x00000000fffe0000-0x00000000ffffffff] reserved Jan 17 12:13:02.749660 kernel: NX (Execute Disable) protection: active Jan 17 12:13:02.749665 kernel: APIC: Static calls initialized Jan 17 12:13:02.749669 kernel: SMBIOS 2.7 present. Jan 17 12:13:02.749674 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jan 17 12:13:02.749679 kernel: vmware: hypercall mode: 0x00 Jan 17 12:13:02.749684 kernel: Hypervisor detected: VMware Jan 17 12:13:02.749689 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jan 17 12:13:02.749695 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jan 17 12:13:02.749704 kernel: vmware: using clock offset of 2760972157 ns Jan 17 12:13:02.749709 kernel: tsc: Detected 3408.000 MHz processor Jan 17 12:13:02.749714 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:13:02.749720 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:13:02.749725 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jan 17 12:13:02.749729 kernel: total RAM covered: 3072M Jan 17 12:13:02.749734 kernel: Found optimal setting for mtrr clean up Jan 17 12:13:02.749740 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jan 17 12:13:02.749746 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jan 17 12:13:02.749751 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:13:02.749756 kernel: Using GB pages for direct mapping Jan 17 12:13:02.749761 kernel: ACPI: Early table checksum verification disabled Jan 17 12:13:02.749766 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jan 17 12:13:02.749770 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jan 17 12:13:02.749775 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jan 17 12:13:02.749780 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 
06040000 MSFT 03000001) Jan 17 12:13:02.749791 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jan 17 12:13:02.749800 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jan 17 12:13:02.749805 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jan 17 12:13:02.749811 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Jan 17 12:13:02.749816 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jan 17 12:13:02.749821 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jan 17 12:13:02.749827 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jan 17 12:13:02.749833 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jan 17 12:13:02.749838 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jan 17 12:13:02.749843 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jan 17 12:13:02.749848 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jan 17 12:13:02.749853 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jan 17 12:13:02.749858 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jan 17 12:13:02.749864 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jan 17 12:13:02.749869 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jan 17 12:13:02.749874 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jan 17 12:13:02.749880 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jan 17 12:13:02.749898 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jan 17 12:13:02.749906 kernel: system APIC only can use physical flat Jan 17 12:13:02.749912 kernel: APIC: Switched APIC routing to: physical flat Jan 17 12:13:02.749917 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 
Jan 17 12:13:02.749922 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jan 17 12:13:02.749927 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jan 17 12:13:02.749932 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jan 17 12:13:02.749937 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jan 17 12:13:02.749944 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jan 17 12:13:02.749949 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jan 17 12:13:02.749954 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jan 17 12:13:02.749959 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jan 17 12:13:02.749964 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jan 17 12:13:02.749969 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jan 17 12:13:02.749974 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jan 17 12:13:02.749979 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jan 17 12:13:02.749984 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jan 17 12:13:02.749989 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jan 17 12:13:02.749995 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jan 17 12:13:02.750000 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jan 17 12:13:02.750005 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jan 17 12:13:02.750010 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jan 17 12:13:02.750015 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jan 17 12:13:02.750020 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jan 17 12:13:02.750026 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jan 17 12:13:02.750031 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jan 17 12:13:02.750036 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jan 17 12:13:02.750041 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jan 17 12:13:02.750047 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jan 17 12:13:02.750052 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jan 17 12:13:02.750057 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jan 17 12:13:02.750062 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jan 17 12:13:02.750067 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Jan 17 12:13:02.750072 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jan 17 
12:13:02.750077 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jan 17 12:13:02.750082 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jan 17 12:13:02.750087 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jan 17 12:13:02.750093 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jan 17 12:13:02.750098 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jan 17 12:13:02.750104 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jan 17 12:13:02.750109 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jan 17 12:13:02.750114 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jan 17 12:13:02.750119 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jan 17 12:13:02.750124 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jan 17 12:13:02.750129 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jan 17 12:13:02.750134 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jan 17 12:13:02.750139 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jan 17 12:13:02.750144 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jan 17 12:13:02.750150 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jan 17 12:13:02.750156 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jan 17 12:13:02.750161 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jan 17 12:13:02.750166 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jan 17 12:13:02.750171 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jan 17 12:13:02.750176 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jan 17 12:13:02.750181 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jan 17 12:13:02.750186 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jan 17 12:13:02.750191 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Jan 17 12:13:02.750196 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jan 17 12:13:02.750201 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jan 17 12:13:02.750207 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jan 17 12:13:02.750212 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jan 17 12:13:02.750217 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jan 17 12:13:02.750227 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jan 17 12:13:02.750232 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jan 17 12:13:02.750238 
kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jan 17 12:13:02.750243 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jan 17 12:13:02.750248 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jan 17 12:13:02.750255 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jan 17 12:13:02.750260 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jan 17 12:13:02.750266 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jan 17 12:13:02.750271 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jan 17 12:13:02.750277 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jan 17 12:13:02.750282 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jan 17 12:13:02.750287 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jan 17 12:13:02.750293 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jan 17 12:13:02.750298 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jan 17 12:13:02.750303 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jan 17 12:13:02.750310 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jan 17 12:13:02.750315 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jan 17 12:13:02.750321 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jan 17 12:13:02.750326 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jan 17 12:13:02.750331 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jan 17 12:13:02.750337 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jan 17 12:13:02.750342 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jan 17 12:13:02.750347 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jan 17 12:13:02.750353 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jan 17 12:13:02.750358 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Jan 17 12:13:02.750365 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jan 17 12:13:02.750370 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jan 17 12:13:02.750376 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jan 17 12:13:02.750381 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jan 17 12:13:02.750386 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jan 17 12:13:02.750392 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jan 17 12:13:02.750397 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jan 17 12:13:02.750402 kernel: SRAT: PXM 0 
-> APIC 0xb6 -> Node 0 Jan 17 12:13:02.750408 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jan 17 12:13:02.750413 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jan 17 12:13:02.750418 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jan 17 12:13:02.750425 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jan 17 12:13:02.750430 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jan 17 12:13:02.750436 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jan 17 12:13:02.750441 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jan 17 12:13:02.750446 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jan 17 12:13:02.750452 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jan 17 12:13:02.750457 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jan 17 12:13:02.750463 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jan 17 12:13:02.750468 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jan 17 12:13:02.750473 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jan 17 12:13:02.750480 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jan 17 12:13:02.750485 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jan 17 12:13:02.750491 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Jan 17 12:13:02.750496 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jan 17 12:13:02.750501 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jan 17 12:13:02.750507 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jan 17 12:13:02.750512 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jan 17 12:13:02.750518 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jan 17 12:13:02.750523 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jan 17 12:13:02.750530 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jan 17 12:13:02.750535 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jan 17 12:13:02.750540 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jan 17 12:13:02.750546 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jan 17 12:13:02.750551 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jan 17 12:13:02.750557 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jan 17 12:13:02.750562 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jan 17 12:13:02.750567 kernel: SRAT: PXM 0 -> APIC 0xf2 -> 
Node 0 Jan 17 12:13:02.750573 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jan 17 12:13:02.750578 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jan 17 12:13:02.750583 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jan 17 12:13:02.750590 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jan 17 12:13:02.750595 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jan 17 12:13:02.750601 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jan 17 12:13:02.750606 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 17 12:13:02.750612 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 17 12:13:02.750617 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jan 17 12:13:02.750623 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jan 17 12:13:02.750629 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jan 17 12:13:02.750634 kernel: Zone ranges: Jan 17 12:13:02.750641 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:13:02.750647 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jan 17 12:13:02.750652 kernel: Normal empty Jan 17 12:13:02.750657 kernel: Movable zone start for each node Jan 17 12:13:02.750663 kernel: Early memory node ranges Jan 17 12:13:02.750669 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jan 17 12:13:02.750674 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jan 17 12:13:02.750679 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jan 17 12:13:02.750685 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jan 17 12:13:02.750692 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:13:02.750698 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jan 17 12:13:02.750703 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jan 17 12:13:02.750709 kernel: ACPI: PM-Timer IO Port: 0x1008 Jan 17 12:13:02.750714 kernel: system APIC only can use physical flat Jan 17 12:13:02.750720 
kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jan 17 12:13:02.750725 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 17 12:13:02.750731 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 17 12:13:02.750736 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 17 12:13:02.750741 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 17 12:13:02.750748 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 17 12:13:02.750754 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 17 12:13:02.750759 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 17 12:13:02.750764 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 17 12:13:02.750770 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 17 12:13:02.750775 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 17 12:13:02.750781 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 17 12:13:02.750786 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 17 12:13:02.750791 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 17 12:13:02.750797 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 17 12:13:02.750803 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 17 12:13:02.750809 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 17 12:13:02.750814 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jan 17 12:13:02.750820 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jan 17 12:13:02.750825 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jan 17 12:13:02.750831 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jan 17 12:13:02.750836 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jan 17 12:13:02.750842 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jan 17 12:13:02.750847 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jan 17 12:13:02.750854 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jan 17 12:13:02.750860 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jan 17 12:13:02.750865 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jan 17 12:13:02.750871 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jan 17 12:13:02.750876 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jan 17 12:13:02.750882 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jan 17 12:13:02.752920 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jan 17 12:13:02.752930 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jan 17 12:13:02.752936 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jan 17 12:13:02.752941 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jan 17 12:13:02.752949 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jan 17 12:13:02.752955 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jan 17 12:13:02.752960 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jan 17 12:13:02.752966 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jan 17 12:13:02.752971 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jan 17 12:13:02.752977 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jan 17 12:13:02.752982 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jan 17 12:13:02.752988 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jan 17 12:13:02.752993 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jan 17 12:13:02.753000 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jan 17 12:13:02.753005 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jan 17 12:13:02.753011 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jan 17 12:13:02.753016 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jan 17 12:13:02.753022 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jan 17 12:13:02.753027 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jan 17 12:13:02.753033 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jan 17 12:13:02.753038 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jan 17 12:13:02.753044 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jan 17 12:13:02.753049 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jan 17 12:13:02.753056 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jan 17 12:13:02.753062 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jan 17 12:13:02.753067 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jan 17 12:13:02.753073 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jan 17 12:13:02.753078 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jan 17 12:13:02.753084 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jan 17 12:13:02.753089 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jan 17 12:13:02.753095 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jan 17 12:13:02.753100 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jan 17 12:13:02.753107 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jan 17 12:13:02.753112 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jan 17 12:13:02.753118 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jan 17 12:13:02.753123 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jan 17 12:13:02.753129 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jan 17 12:13:02.753134 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jan 17 12:13:02.753139 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jan 17 12:13:02.753145 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jan 17 12:13:02.753150 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jan 17 12:13:02.753156 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jan 17 12:13:02.753163 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jan 17 12:13:02.753168 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jan 17 12:13:02.753174 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jan 17 12:13:02.753179 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jan 17 12:13:02.753185 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jan 17 12:13:02.753190 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jan 17 12:13:02.753196 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jan 17 12:13:02.753201 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jan 17 12:13:02.753206 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jan 17 12:13:02.753213 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jan 17 12:13:02.753219 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jan 17 12:13:02.753224 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jan 17 12:13:02.753230 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jan 17 12:13:02.753235 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jan 17 12:13:02.753241 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jan 17 12:13:02.753246 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jan 17 12:13:02.753252 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jan 17 12:13:02.753257 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jan 17 12:13:02.753263 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jan 17 12:13:02.753270 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jan 17 12:13:02.753275 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jan 17 12:13:02.753281 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jan 17 12:13:02.753286 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jan 17 12:13:02.753291 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jan 17 12:13:02.753297 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jan 17 12:13:02.753303 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jan 17 12:13:02.753308 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jan 17 12:13:02.753313 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jan 17 12:13:02.753319 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jan 17 12:13:02.753326 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jan 17 12:13:02.753331 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jan 17 12:13:02.753337 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jan 17 12:13:02.753342 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jan 17 12:13:02.753348 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jan 17 12:13:02.753353 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jan 17 12:13:02.753359 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jan 17 12:13:02.753364 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jan 17 12:13:02.753370 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jan 17 12:13:02.753376 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jan 17 12:13:02.753382 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jan 17 12:13:02.753387 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jan 17 12:13:02.753392 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jan 17 12:13:02.753398 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jan 17 12:13:02.753403 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jan 17 12:13:02.753409 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jan 17 12:13:02.753414 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jan 17 12:13:02.753420 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jan 17 12:13:02.753425 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jan 17 12:13:02.753432 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jan 17 12:13:02.753437 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jan 17 12:13:02.753442 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jan 17 12:13:02.753448 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jan 17 12:13:02.753453 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jan 17 12:13:02.753459 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jan 17 12:13:02.753464 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jan 17 12:13:02.753470 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jan 17 12:13:02.753475 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:13:02.753482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jan 17 12:13:02.753487 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:13:02.753493 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jan 17 12:13:02.753499 kernel: TSC deadline timer available Jan 17 12:13:02.753505 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jan 17 12:13:02.753510 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jan 17 12:13:02.753516 kernel: Booting paravirtualized kernel on VMware hypervisor Jan 17 12:13:02.753521 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:13:02.753527 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jan 17 12:13:02.753534 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 17 12:13:02.753540 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 17 12:13:02.753545 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jan 17 12:13:02.753551 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jan 17 12:13:02.753556 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jan 17 12:13:02.753561 kernel: pcpu-alloc: [0] 024 025 026 027 
028 029 030 031 Jan 17 12:13:02.753567 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jan 17 12:13:02.753580 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jan 17 12:13:02.753587 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jan 17 12:13:02.753594 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jan 17 12:13:02.753600 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jan 17 12:13:02.753606 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jan 17 12:13:02.753612 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jan 17 12:13:02.753618 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jan 17 12:13:02.753624 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jan 17 12:13:02.753630 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jan 17 12:13:02.753635 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jan 17 12:13:02.753641 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jan 17 12:13:02.753649 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:13:02.753656 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jan 17 12:13:02.753662 kernel: random: crng init done Jan 17 12:13:02.753668 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jan 17 12:13:02.753673 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jan 17 12:13:02.753679 kernel: printk: log_buf_len min size: 262144 bytes Jan 17 12:13:02.753685 kernel: printk: log_buf_len: 1048576 bytes Jan 17 12:13:02.753691 kernel: printk: early log buf free: 239648(91%) Jan 17 12:13:02.753698 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:13:02.753704 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 12:13:02.753711 kernel: Fallback order for Node 0: 0 Jan 17 12:13:02.753716 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jan 17 12:13:02.753722 kernel: Policy zone: DMA32 Jan 17 12:13:02.753728 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:13:02.753734 kernel: Memory: 1936364K/2096628K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 160004K reserved, 0K cma-reserved) Jan 17 12:13:02.753741 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jan 17 12:13:02.753748 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:13:02.753754 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:13:02.753760 kernel: Dynamic Preempt: voluntary Jan 17 12:13:02.753766 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:13:02.753772 kernel: rcu: RCU event tracing is enabled. Jan 17 12:13:02.753779 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jan 17 12:13:02.753785 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:13:02.753792 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:13:02.753798 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:13:02.753804 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 17 12:13:02.753810 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jan 17 12:13:02.753816 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jan 17 12:13:02.753822 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jan 17 12:13:02.753828 kernel: Console: colour VGA+ 80x25 Jan 17 12:13:02.753834 kernel: printk: console [tty0] enabled Jan 17 12:13:02.753840 kernel: printk: console [ttyS0] enabled Jan 17 12:13:02.753847 kernel: ACPI: Core revision 20230628 Jan 17 12:13:02.753853 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jan 17 12:13:02.753859 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:13:02.753865 kernel: x2apic enabled Jan 17 12:13:02.753871 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:13:02.753877 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 12:13:02.753883 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 17 12:13:02.755917 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jan 17 12:13:02.755928 kernel: Disabled fast string operations Jan 17 12:13:02.755938 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 17 12:13:02.755944 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 17 12:13:02.755950 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:13:02.755956 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 17 12:13:02.755962 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 17 12:13:02.755968 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 17 12:13:02.755974 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:13:02.755980 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 17 12:13:02.755986 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 17 12:13:02.755994 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:13:02.756000 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:13:02.756006 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:13:02.756012 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 17 12:13:02.756018 kernel: GDS: Unknown: Dependent on hypervisor status Jan 17 12:13:02.756024 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:13:02.756030 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:13:02.756036 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:13:02.756042 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:13:02.756049 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jan 17 12:13:02.756055 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:13:02.756061 kernel: pid_max: default: 131072 minimum: 1024 Jan 17 12:13:02.756067 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:13:02.756073 kernel: landlock: Up and running. Jan 17 12:13:02.756079 kernel: SELinux: Initializing. Jan 17 12:13:02.756085 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:13:02.756091 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:13:02.756097 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 17 12:13:02.756105 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 17 12:13:02.756115 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 17 12:13:02.756125 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 17 12:13:02.756134 kernel: Performance Events: Skylake events, core PMU driver. Jan 17 12:13:02.756140 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jan 17 12:13:02.756146 kernel: core: CPUID marked event: 'instructions' unavailable Jan 17 12:13:02.756151 kernel: core: CPUID marked event: 'bus cycles' unavailable Jan 17 12:13:02.756157 kernel: core: CPUID marked event: 'cache references' unavailable Jan 17 12:13:02.756165 kernel: core: CPUID marked event: 'cache misses' unavailable Jan 17 12:13:02.756171 kernel: core: CPUID marked event: 'branch instructions' unavailable Jan 17 12:13:02.756179 kernel: core: CPUID marked event: 'branch misses' unavailable Jan 17 12:13:02.756188 kernel: ... version: 1 Jan 17 12:13:02.756199 kernel: ... bit width: 48 Jan 17 12:13:02.756207 kernel: ... generic registers: 4 Jan 17 12:13:02.756214 kernel: ... value mask: 0000ffffffffffff Jan 17 12:13:02.756219 kernel: ... 
max period: 000000007fffffff Jan 17 12:13:02.756225 kernel: ... fixed-purpose events: 0 Jan 17 12:13:02.756233 kernel: ... event mask: 000000000000000f Jan 17 12:13:02.756239 kernel: signal: max sigframe size: 1776 Jan 17 12:13:02.756245 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:13:02.756252 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:13:02.756258 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 12:13:02.756263 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:13:02.756269 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:13:02.756275 kernel: .... node #0, CPUs: #1 Jan 17 12:13:02.756281 kernel: Disabled fast string operations Jan 17 12:13:02.756287 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jan 17 12:13:02.756294 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 17 12:13:02.756300 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:13:02.756306 kernel: smpboot: Max logical packages: 128 Jan 17 12:13:02.756312 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jan 17 12:13:02.756317 kernel: devtmpfs: initialized Jan 17 12:13:02.756323 kernel: x86/mm: Memory block size: 128MB Jan 17 12:13:02.756329 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jan 17 12:13:02.756335 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:13:02.756341 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jan 17 12:13:02.756348 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:13:02.756354 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:13:02.756360 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:13:02.756366 kernel: audit: type=2000 audit(1737115981.069:1): state=initialized audit_enabled=0 res=1 Jan 17 12:13:02.756372 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:13:02.756378 
kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:13:02.756384 kernel: cpuidle: using governor menu Jan 17 12:13:02.756390 kernel: Simple Boot Flag at 0x36 set to 0x80 Jan 17 12:13:02.756396 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:13:02.756404 kernel: dca service started, version 1.12.1 Jan 17 12:13:02.756410 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jan 17 12:13:02.756416 kernel: PCI: Using configuration type 1 for base access Jan 17 12:13:02.756422 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 17 12:13:02.756428 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:13:02.756434 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:13:02.756440 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:13:02.756445 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:13:02.756452 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:13:02.756460 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:13:02.756469 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:13:02.756478 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:13:02.756487 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:13:02.756493 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 17 12:13:02.756499 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:13:02.756505 kernel: ACPI: Interpreter enabled Jan 17 12:13:02.756511 kernel: ACPI: PM: (supports S0 S1 S5) Jan 17 12:13:02.756517 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:13:02.756525 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:13:02.756531 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:13:02.756537 kernel: ACPI: Enabled 4 
GPEs in block 00 to 0F Jan 17 12:13:02.756545 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jan 17 12:13:02.756644 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:13:02.756701 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jan 17 12:13:02.756751 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 17 12:13:02.756762 kernel: PCI host bridge to bus 0000:00 Jan 17 12:13:02.756814 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:13:02.756859 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jan 17 12:13:02.760097 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 12:13:02.760162 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:13:02.760207 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jan 17 12:13:02.760252 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jan 17 12:13:02.760317 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jan 17 12:13:02.760375 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jan 17 12:13:02.760430 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jan 17 12:13:02.760484 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jan 17 12:13:02.760534 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jan 17 12:13:02.760582 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 17 12:13:02.760634 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 17 12:13:02.760682 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 17 12:13:02.760744 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 17 12:13:02.760799 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jan 17 12:13:02.760863 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed 
by PIIX4 ACPI Jan 17 12:13:02.760950 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jan 17 12:13:02.761017 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jan 17 12:13:02.761072 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jan 17 12:13:02.761133 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jan 17 12:13:02.761189 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jan 17 12:13:02.761246 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jan 17 12:13:02.761294 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jan 17 12:13:02.761343 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jan 17 12:13:02.761395 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jan 17 12:13:02.761443 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:13:02.761809 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jan 17 12:13:02.761865 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.761937 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.761996 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.762072 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.762133 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.762183 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.762236 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.762285 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.762348 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.762398 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.762453 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.762503 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot 
D3cold Jan 17 12:13:02.762555 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.762604 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.762657 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.762706 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.762761 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.762810 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.762863 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.762926 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.763937 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.764036 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.764112 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.764163 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.764215 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.764264 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.764317 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.764366 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.764422 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.764481 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.764536 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.764585 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.764647 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.764710 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.764768 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.764825 kernel: pci 0000:00:17.1: 
PME# supported from D0 D3hot D3cold Jan 17 12:13:02.765951 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.766006 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.766061 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.766112 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.766170 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.766250 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.766318 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.766382 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.766452 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.766503 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.766560 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.766610 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.766663 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.766712 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.766774 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.766826 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.766882 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.766948 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.767001 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.767054 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.767108 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.767157 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.767217 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jan 17 
12:13:02.767283 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.767354 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.767407 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.767460 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jan 17 12:13:02.767510 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jan 17 12:13:02.767561 kernel: pci_bus 0000:01: extended config space not accessible Jan 17 12:13:02.767614 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 17 12:13:02.767664 kernel: pci_bus 0000:02: extended config space not accessible Jan 17 12:13:02.767673 kernel: acpiphp: Slot [32] registered Jan 17 12:13:02.767680 kernel: acpiphp: Slot [33] registered Jan 17 12:13:02.767686 kernel: acpiphp: Slot [34] registered Jan 17 12:13:02.767692 kernel: acpiphp: Slot [35] registered Jan 17 12:13:02.767698 kernel: acpiphp: Slot [36] registered Jan 17 12:13:02.767704 kernel: acpiphp: Slot [37] registered Jan 17 12:13:02.767712 kernel: acpiphp: Slot [38] registered Jan 17 12:13:02.767718 kernel: acpiphp: Slot [39] registered Jan 17 12:13:02.767724 kernel: acpiphp: Slot [40] registered Jan 17 12:13:02.767730 kernel: acpiphp: Slot [41] registered Jan 17 12:13:02.767736 kernel: acpiphp: Slot [42] registered Jan 17 12:13:02.767741 kernel: acpiphp: Slot [43] registered Jan 17 12:13:02.767747 kernel: acpiphp: Slot [44] registered Jan 17 12:13:02.767753 kernel: acpiphp: Slot [45] registered Jan 17 12:13:02.767759 kernel: acpiphp: Slot [46] registered Jan 17 12:13:02.767765 kernel: acpiphp: Slot [47] registered Jan 17 12:13:02.767772 kernel: acpiphp: Slot [48] registered Jan 17 12:13:02.767778 kernel: acpiphp: Slot [49] registered Jan 17 12:13:02.767784 kernel: acpiphp: Slot [50] registered Jan 17 12:13:02.767789 kernel: acpiphp: Slot [51] registered Jan 17 12:13:02.767795 kernel: acpiphp: Slot [52] registered Jan 17 12:13:02.767801 kernel: acpiphp: Slot [53] registered 
Jan 17 12:13:02.767807 kernel: acpiphp: Slot [54] registered Jan 17 12:13:02.767813 kernel: acpiphp: Slot [55] registered Jan 17 12:13:02.767818 kernel: acpiphp: Slot [56] registered Jan 17 12:13:02.767826 kernel: acpiphp: Slot [57] registered Jan 17 12:13:02.767832 kernel: acpiphp: Slot [58] registered Jan 17 12:13:02.767838 kernel: acpiphp: Slot [59] registered Jan 17 12:13:02.767843 kernel: acpiphp: Slot [60] registered Jan 17 12:13:02.767849 kernel: acpiphp: Slot [61] registered Jan 17 12:13:02.767855 kernel: acpiphp: Slot [62] registered Jan 17 12:13:02.767861 kernel: acpiphp: Slot [63] registered Jan 17 12:13:02.769931 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jan 17 12:13:02.769986 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 17 12:13:02.770039 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 17 12:13:02.770088 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 17 12:13:02.770136 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jan 17 12:13:02.770184 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jan 17 12:13:02.770232 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jan 17 12:13:02.770280 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jan 17 12:13:02.770339 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jan 17 12:13:02.770407 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jan 17 12:13:02.770470 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jan 17 12:13:02.770536 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jan 17 12:13:02.770587 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 17 12:13:02.770637 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 17 
12:13:02.770687 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Jan 17 12:13:02.770738 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 17 12:13:02.770786 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 17 12:13:02.770839 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 17 12:13:02.770897 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 17 12:13:02.770948 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 17 12:13:02.770997 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 17 12:13:02.771050 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 17 12:13:02.771101 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 17 12:13:02.771149 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 17 12:13:02.771215 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 17 12:13:02.771268 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 17 12:13:02.771318 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 17 12:13:02.771367 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 17 12:13:02.771414 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 17 12:13:02.771474 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 17 12:13:02.771526 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 17 12:13:02.771588 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 17 12:13:02.771642 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 17 12:13:02.771691 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 17 12:13:02.771739 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 17 12:13:02.771787 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 17 12:13:02.771835 kernel: pci 0000:00:15.6: bridge window [mem 
0xfbd00000-0xfbdfffff] Jan 17 12:13:02.773333 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 17 12:13:02.773395 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 17 12:13:02.773446 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 17 12:13:02.773494 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 17 12:13:02.773552 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jan 17 12:13:02.773602 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jan 17 12:13:02.773651 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jan 17 12:13:02.773717 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jan 17 12:13:02.773769 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jan 17 12:13:02.773818 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 17 12:13:02.773869 kernel: pci 0000:0b:00.0: supports D1 D2 Jan 17 12:13:02.773932 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 17 12:13:02.773983 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 17 12:13:02.774043 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 17 12:13:02.774094 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 17 12:13:02.774148 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 17 12:13:02.774199 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 17 12:13:02.774249 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 17 12:13:02.774298 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 17 12:13:02.774348 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 17 12:13:02.774397 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 17 12:13:02.774447 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 17 12:13:02.774496 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 17 12:13:02.774547 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 17 12:13:02.774597 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 17 12:13:02.774645 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 17 12:13:02.774694 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 17 12:13:02.774743 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 17 12:13:02.775158 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 17 12:13:02.775221 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 17 12:13:02.775277 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 17 12:13:02.775327 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 17 12:13:02.775375 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 17 12:13:02.775425 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 17 12:13:02.775474 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 17 12:13:02.775523 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 17 12:13:02.775573 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 17 12:13:02.775623 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 17 12:13:02.775674 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 17 12:13:02.775724 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 17 12:13:02.775773 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 17 12:13:02.775822 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 17 12:13:02.775883 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 17 12:13:02.775944 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 17 12:13:02.775993 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 17 12:13:02.776042 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 17 12:13:02.776093 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 17 12:13:02.776143 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 17 12:13:02.776191 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 17 12:13:02.776240 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 17 12:13:02.776300 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 17 12:13:02.776351 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 17 12:13:02.776399 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 17 12:13:02.776448 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 17 12:13:02.776510 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 17 12:13:02.776560 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 17 12:13:02.776609 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 17 12:13:02.776659 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 17 12:13:02.776708 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 17 12:13:02.776756 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 17 12:13:02.776846 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 17 12:13:02.779398 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 17 12:13:02.779465 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 17 12:13:02.779521 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 17 12:13:02.779572 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 17 12:13:02.779622 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 17 12:13:02.779673 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 17 12:13:02.779722 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 17 12:13:02.779770 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 17 12:13:02.779820 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 17 12:13:02.780207 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 17 12:13:02.780293 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 17 12:13:02.780363 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 17 12:13:02.780413 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 17 12:13:02.780467 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 17 12:13:02.780526 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 17 12:13:02.780575 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 17 12:13:02.780628 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 17 12:13:02.780678 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 17 12:13:02.780726 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 17 12:13:02.780777 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 17 
12:13:02.780826 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 17 12:13:02.780874 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 17 12:13:02.780937 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 17 12:13:02.780988 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 17 12:13:02.781045 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 17 12:13:02.781096 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 17 12:13:02.781156 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 17 12:13:02.781208 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 17 12:13:02.781259 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 17 12:13:02.781307 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 17 12:13:02.781355 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 17 12:13:02.781364 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jan 17 12:13:02.781373 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jan 17 12:13:02.781379 kernel: ACPI: PCI: Interrupt link LNKB disabled Jan 17 12:13:02.781385 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:13:02.781391 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jan 17 12:13:02.781397 kernel: iommu: Default domain type: Translated Jan 17 12:13:02.781403 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:13:02.781409 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:13:02.781415 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:13:02.781421 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jan 17 12:13:02.781429 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jan 17 12:13:02.781478 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jan 17 12:13:02.781527 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Jan 17 12:13:02.781575 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 12:13:02.781584 kernel: vgaarb: loaded Jan 17 12:13:02.781591 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jan 17 12:13:02.781597 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jan 17 12:13:02.781603 kernel: clocksource: Switched to clocksource tsc-early Jan 17 12:13:02.781609 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:13:02.781617 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:13:02.781623 kernel: pnp: PnP ACPI init Jan 17 12:13:02.781674 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jan 17 12:13:02.781720 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jan 17 12:13:02.781765 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jan 17 12:13:02.781812 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jan 17 12:13:02.781861 kernel: pnp 00:06: [dma 2] Jan 17 12:13:02.781928 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jan 17 12:13:02.781976 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jan 17 12:13:02.782020 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jan 17 12:13:02.782028 kernel: pnp: PnP ACPI: found 8 devices Jan 17 12:13:02.782034 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:13:02.782040 kernel: NET: Registered PF_INET protocol family Jan 17 12:13:02.782047 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:13:02.782053 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 17 12:13:02.782062 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:13:02.782068 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:13:02.782074 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 12:13:02.782080 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 17 12:13:02.782086 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:13:02.782092 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:13:02.782098 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:13:02.782104 kernel: NET: Registered PF_XDP protocol family Jan 17 12:13:02.782154 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jan 17 12:13:02.782208 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 17 12:13:02.782259 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 17 12:13:02.782310 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 17 12:13:02.782361 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 17 12:13:02.782412 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 17 12:13:02.782464 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jan 17 12:13:02.782515 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 17 12:13:02.782564 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 17 12:13:02.782614 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 17 12:13:02.782663 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 17 12:13:02.782712 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 17 12:13:02.782764 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 17 
12:13:02.782813 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 17 12:13:02.782862 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 17 12:13:02.783226 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 17 12:13:02.783281 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 17 12:13:02.783332 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 17 12:13:02.783385 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 17 12:13:02.783434 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 17 12:13:02.783482 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 17 12:13:02.783531 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 17 12:13:02.783580 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jan 17 12:13:02.783629 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jan 17 12:13:02.783680 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jan 17 12:13:02.783729 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.783779 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.783828 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.783876 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.783934 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.783983 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.784036 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.784086 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 
17 12:13:02.784134 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.784182 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.784245 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.784297 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.784345 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.784399 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.784453 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.784505 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.784554 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.784615 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.786932 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.786993 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787051 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787102 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787152 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787205 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787254 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787317 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787369 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787417 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787468 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787516 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787565 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787616 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787666 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787714 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787762 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787810 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787858 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.787922 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.787974 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788041 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788092 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788141 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788189 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788238 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788287 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788335 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788400 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788451 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788503 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788551 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788600 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788648 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788696 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Jan 17 12:13:02.788744 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.788793 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.788842 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.789216 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.789280 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.789333 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.789382 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.789434 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.789510 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.789563 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.789611 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.789661 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.789710 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.789760 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.789813 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.789863 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.789979 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790030 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.790078 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790127 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.790175 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790224 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.790272 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790324 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.790372 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790422 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.790469 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790518 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.790579 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790632 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 17 12:13:02.790682 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:13:02.790731 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 17 12:13:02.790781 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jan 17 12:13:02.790832 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 17 12:13:02.790880 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 17 12:13:02.790983 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 17 12:13:02.791041 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jan 17 12:13:02.791092 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 17 12:13:02.791139 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 17 12:13:02.791187 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 17 12:13:02.791249 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jan 17 12:13:02.791303 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 17 12:13:02.791352 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 17 12:13:02.791400 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 17 12:13:02.791448 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 17 
12:13:02.791507 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 17 12:13:02.791557 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 17 12:13:02.791605 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 17 12:13:02.791666 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 17 12:13:02.791716 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 17 12:13:02.791767 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 17 12:13:02.791816 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 17 12:13:02.791865 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 17 12:13:02.791921 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 17 12:13:02.791969 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 17 12:13:02.792022 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 17 12:13:02.792073 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 17 12:13:02.792121 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 17 12:13:02.792170 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 17 12:13:02.792219 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 17 12:13:02.792267 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 17 12:13:02.792316 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 17 12:13:02.792364 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 17 12:13:02.792413 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 17 12:13:02.792465 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jan 17 12:13:02.792517 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 17 12:13:02.792565 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 17 12:13:02.792614 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Jan 17 12:13:02.792663 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jan 17 12:13:02.792726 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 17 12:13:02.792778 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 17 12:13:02.792827 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 17 12:13:02.792876 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 17 12:13:02.794956 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 17 12:13:02.795015 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 17 12:13:02.795070 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 17 12:13:02.795121 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 17 12:13:02.795181 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 17 12:13:02.795245 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 17 12:13:02.795300 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 17 12:13:02.795350 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 17 12:13:02.795399 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 17 12:13:02.795447 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 17 12:13:02.795496 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 17 12:13:02.795547 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 17 12:13:02.795596 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 17 12:13:02.795644 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 17 12:13:02.795693 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 17 12:13:02.795741 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 17 12:13:02.795790 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 17 12:13:02.795839 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Jan 17 12:13:02.795971 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 17 12:13:02.796030 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 17 12:13:02.796079 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 17 12:13:02.796130 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 17 12:13:02.796179 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 17 12:13:02.796229 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 17 12:13:02.796278 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 17 12:13:02.796326 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 17 12:13:02.796374 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 17 12:13:02.796425 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 17 12:13:02.796474 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 17 12:13:02.796522 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 17 12:13:02.796588 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 17 12:13:02.796639 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 17 12:13:02.796687 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 17 12:13:02.796737 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 17 12:13:02.796787 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 17 12:13:02.796835 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 17 12:13:02.796883 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 17 12:13:02.796943 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 17 12:13:02.797007 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 17 12:13:02.797064 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 17 
12:13:02.797117 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 17 12:13:02.797167 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 17 12:13:02.797216 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 17 12:13:02.797265 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 17 12:13:02.797313 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 17 12:13:02.797363 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 17 12:13:02.797413 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 17 12:13:02.797462 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 17 12:13:02.797521 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 17 12:13:02.797574 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 17 12:13:02.797624 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 17 12:13:02.797673 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 17 12:13:02.797721 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 17 12:13:02.797769 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 17 12:13:02.797830 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 17 12:13:02.797882 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 17 12:13:02.797939 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 17 12:13:02.798262 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 17 12:13:02.798315 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 17 12:13:02.798369 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 17 12:13:02.798434 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 17 12:13:02.798720 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 17 12:13:02.798784 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Jan 17 12:13:02.798839 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 17 12:13:02.798897 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 17 12:13:02.798956 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 17 12:13:02.799011 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 17 12:13:02.799060 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 17 12:13:02.799111 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 17 12:13:02.799162 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 17 12:13:02.799211 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 17 12:13:02.799260 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 17 12:13:02.799309 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jan 17 12:13:02.799353 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 17 12:13:02.799396 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 17 12:13:02.799439 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jan 17 12:13:02.799481 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jan 17 12:13:02.799531 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jan 17 12:13:02.799577 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jan 17 12:13:02.799622 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 17 12:13:02.799666 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jan 17 12:13:02.799710 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 17 12:13:02.799755 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 17 12:13:02.799799 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jan 17 12:13:02.799846 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jan 17 12:13:02.799926 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jan 17 12:13:02.799973 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jan 17 12:13:02.800018 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jan 17 12:13:02.800066 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jan 17 12:13:02.800112 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jan 17 12:13:02.800155 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jan 17 12:13:02.800206 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jan 17 12:13:02.800251 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jan 17 12:13:02.800294 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jan 17 12:13:02.800343 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jan 17 12:13:02.800387 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jan 17 12:13:02.800436 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jan 17 12:13:02.800483 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 17 12:13:02.800534 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jan 17 12:13:02.800580 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jan 17 12:13:02.800628 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jan 17 12:13:02.800673 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jan 17 12:13:02.800725 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jan 17 12:13:02.800780 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jan 17 12:13:02.800837 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jan 17 12:13:02.800883 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jan 17 12:13:02.801350 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jan 17 12:13:02.801405 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Jan 17 12:13:02.801452 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jan 17 12:13:02.801501 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jan 17 12:13:02.801550 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jan 17 12:13:02.801595 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jan 17 12:13:02.801643 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jan 17 12:13:02.801692 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jan 17 12:13:02.801737 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 17 12:13:02.801787 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jan 17 12:13:02.801836 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 17 12:13:02.801885 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jan 17 12:13:02.801945 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jan 17 12:13:02.801994 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jan 17 12:13:02.802039 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jan 17 12:13:02.802088 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jan 17 12:13:02.802136 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 17 12:13:02.802184 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jan 17 12:13:02.802230 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jan 17 12:13:02.802275 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 17 12:13:02.802326 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jan 17 12:13:02.802372 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jan 17 12:13:02.802417 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jan 17 12:13:02.802468 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Jan 17 12:13:02.802513 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jan 17 12:13:02.802558 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jan 17 12:13:02.802606 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jan 17 12:13:02.802651 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 17 12:13:02.802699 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jan 17 12:13:02.802976 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 17 12:13:02.803027 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jan 17 12:13:02.803073 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jan 17 12:13:02.803123 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jan 17 12:13:02.803168 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jan 17 12:13:02.803216 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jan 17 12:13:02.803261 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 17 12:13:02.803315 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 17 12:13:02.803359 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jan 17 12:13:02.803403 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jan 17 12:13:02.803455 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jan 17 12:13:02.803500 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jan 17 12:13:02.803547 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jan 17 12:13:02.803597 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jan 17 12:13:02.803642 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jan 17 12:13:02.803692 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jan 17 12:13:02.803736 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Jan 17 12:13:02.803785 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jan 17 12:13:02.803831 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jan 17 12:13:02.803883 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jan 17 12:13:02.803991 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jan 17 12:13:02.804046 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jan 17 12:13:02.804091 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jan 17 12:13:02.804159 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jan 17 12:13:02.804569 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 17 12:13:02.804632 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 12:13:02.804644 kernel: PCI: CLS 32 bytes, default 64 Jan 17 12:13:02.804651 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 12:13:02.804658 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 17 12:13:02.804664 kernel: clocksource: Switched to clocksource tsc Jan 17 12:13:02.804671 kernel: Initialise system trusted keyrings Jan 17 12:13:02.804677 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 12:13:02.804683 kernel: Key type asymmetric registered Jan 17 12:13:02.804691 kernel: Asymmetric key parser 'x509' registered Jan 17 12:13:02.804698 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:13:02.804704 kernel: io scheduler mq-deadline registered Jan 17 12:13:02.804710 kernel: io scheduler kyber registered Jan 17 12:13:02.804717 kernel: io scheduler bfq registered Jan 17 12:13:02.804771 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jan 17 12:13:02.804823 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.804876 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jan 17 12:13:02.804941 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.804997 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jan 17 12:13:02.805048 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.805099 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jan 17 12:13:02.805363 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.805422 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jan 17 12:13:02.805483 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.805540 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jan 17 12:13:02.805591 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.805645 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jan 17 12:13:02.805695 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.805746 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jan 17 12:13:02.805799 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.805849 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jan 17 12:13:02.806337 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.806412 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jan 17 12:13:02.806483 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.806548 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jan 17 12:13:02.806602 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.806665 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jan 17 12:13:02.806725 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.806786 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jan 17 12:13:02.806839 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.806914 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jan 17 12:13:02.806978 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807051 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jan 17 12:13:02.807121 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807181 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jan 17 12:13:02.807233 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807284 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jan 17 12:13:02.807337 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807387 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jan 17 12:13:02.807453 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807512 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jan 17 12:13:02.807562 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807614 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jan 17 12:13:02.807664 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807718 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jan 17 12:13:02.807768 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.807819 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jan 17 12:13:02.807872 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808001 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jan 17 12:13:02.808067 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808118 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jan 17 12:13:02.808175 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808226 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jan 17 12:13:02.808276 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808326 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jan 17 12:13:02.808377 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808428 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jan 17 12:13:02.808489 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808546 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jan 17 12:13:02.808595 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808646 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jan 17 12:13:02.808699 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808749 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jan 17 12:13:02.808799 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808848 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jan 17 12:13:02.808904 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.808958 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jan 17 12:13:02.809008 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:13:02.809018 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Jan 17 12:13:02.809025 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:13:02.809032 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:13:02.809039 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jan 17 12:13:02.809045 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:13:02.809053 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:13:02.809104 kernel: rtc_cmos 00:01: registered as rtc0 Jan 17 12:13:02.809152 kernel: rtc_cmos 00:01: setting system clock to 2025-01-17T12:13:02 UTC (1737115982) Jan 17 12:13:02.809167 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:13:02.809229 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jan 17 12:13:02.809239 kernel: intel_pstate: CPU model not supported Jan 17 12:13:02.809245 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:13:02.809252 kernel: Segment Routing with IPv6 Jan 17 12:13:02.809260 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:13:02.809267 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:13:02.809273 kernel: Key type dns_resolver registered Jan 17 12:13:02.809280 kernel: IPI shorthand broadcast: enabled Jan 17 12:13:02.809286 kernel: sched_clock: Marking stable (965004083, 239576276)->(1268239736, -63659377) Jan 17 12:13:02.809293 kernel: registered taskstats version 1 Jan 17 12:13:02.809300 kernel: Loading compiled-in X.509 certificates Jan 17 12:13:02.809306 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:13:02.809313 kernel: Key type .fscrypt registered Jan 17 12:13:02.809321 kernel: Key type fscrypt-provisioning registered Jan 17 12:13:02.809327 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 12:13:02.809333 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:13:02.809340 kernel: ima: No architecture policies found Jan 17 12:13:02.809346 kernel: clk: Disabling unused clocks Jan 17 12:13:02.809352 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:13:02.809359 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:13:02.809365 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:13:02.809371 kernel: Run /init as init process Jan 17 12:13:02.809379 kernel: with arguments: Jan 17 12:13:02.809386 kernel: /init Jan 17 12:13:02.809392 kernel: with environment: Jan 17 12:13:02.809398 kernel: HOME=/ Jan 17 12:13:02.809404 kernel: TERM=linux Jan 17 12:13:02.809410 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:13:02.809418 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:13:02.809426 systemd[1]: Detected virtualization vmware. Jan 17 12:13:02.809434 systemd[1]: Detected architecture x86-64. Jan 17 12:13:02.809440 systemd[1]: Running in initrd. Jan 17 12:13:02.809447 systemd[1]: No hostname configured, using default hostname. Jan 17 12:13:02.809453 systemd[1]: Hostname set to . Jan 17 12:13:02.809462 systemd[1]: Initializing machine ID from random generator. Jan 17 12:13:02.809472 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:13:02.809482 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:13:02.809489 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 17 12:13:02.809498 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:13:02.809505 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:13:02.809512 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:13:02.809519 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:13:02.809527 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:13:02.809533 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:13:02.809541 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:13:02.809553 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:13:02.809565 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:13:02.809572 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:13:02.809579 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:13:02.809585 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:13:02.809592 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:13:02.809598 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:13:02.809605 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:13:02.809613 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:13:02.809620 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:13:02.809627 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:13:02.809633 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 17 12:13:02.809639 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:13:02.809646 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:13:02.809653 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:13:02.809659 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:13:02.809667 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:13:02.809675 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:13:02.809682 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:13:02.809702 systemd-journald[214]: Collecting audit messages is disabled. Jan 17 12:13:02.809719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:13:02.809727 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:13:02.809734 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:13:02.809740 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:13:02.809748 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:13:02.809756 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:13:02.809763 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:13:02.809770 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:13:02.809777 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:13:02.809783 kernel: Bridge firewalling registered Jan 17 12:13:02.809790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 17 12:13:02.809797 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:13:02.809804 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:13:02.809811 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:13:02.809819 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:13:02.809826 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:13:02.809833 systemd-journald[214]: Journal started
Jan 17 12:13:02.809849 systemd-journald[214]: Runtime Journal (/run/log/journal/6f36f4e4e9e447c8b65fd678048ad031) is 4.8M, max 38.6M, 33.8M free.
Jan 17 12:13:02.765044 systemd-modules-load[215]: Inserted module 'overlay'
Jan 17 12:13:02.786523 systemd-modules-load[215]: Inserted module 'br_netfilter'
Jan 17 12:13:02.815024 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:13:02.815048 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:13:02.817362 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:13:02.818958 dracut-cmdline[235]: dracut-dracut-053
Jan 17 12:13:02.820219 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:13:02.825374 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:13:02.829986 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:13:02.849750 systemd-resolved[271]: Positive Trust Anchors:
Jan 17 12:13:02.849762 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:13:02.849786 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:13:02.851683 systemd-resolved[271]: Defaulting to hostname 'linux'.
Jan 17 12:13:02.852551 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:13:02.852694 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:13:02.873901 kernel: SCSI subsystem initialized
Jan 17 12:13:02.879899 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:13:02.887908 kernel: iscsi: registered transport (tcp)
Jan 17 12:13:02.901246 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:13:02.901296 kernel: QLogic iSCSI HBA Driver
Jan 17 12:13:02.921392 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:13:02.929015 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:13:02.944788 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:13:02.944838 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:13:02.944847 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:13:02.975917 kernel: raid6: avx2x4 gen() 51168 MB/s
Jan 17 12:13:02.992931 kernel: raid6: avx2x2 gen() 51760 MB/s
Jan 17 12:13:03.010164 kernel: raid6: avx2x1 gen() 40876 MB/s
Jan 17 12:13:03.010222 kernel: raid6: using algorithm avx2x2 gen() 51760 MB/s
Jan 17 12:13:03.028160 kernel: raid6: .... xor() 31149 MB/s, rmw enabled
Jan 17 12:13:03.028217 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 12:13:03.041912 kernel: xor: automatically using best checksumming function   avx
Jan 17 12:13:03.141911 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:13:03.147628 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:13:03.152015 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:13:03.160088 systemd-udevd[431]: Using default interface naming scheme 'v255'.
Jan 17 12:13:03.162630 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:13:03.170065 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:13:03.177032 dracut-pre-trigger[436]: rd.md=0: removing MD RAID activation
Jan 17 12:13:03.194268 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:13:03.199065 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:13:03.269935 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:13:03.274032 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:13:03.281630 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:13:03.282287 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:13:03.282763 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:13:03.284120 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:13:03.289018 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:13:03.298416 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:13:03.341909 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Jan 17 12:13:03.344760 kernel: vmw_pvscsi: using 64bit dma
Jan 17 12:13:03.344790 kernel: vmw_pvscsi: max_id: 16
Jan 17 12:13:03.344798 kernel: vmw_pvscsi: setting ring_pages to 8
Jan 17 12:13:03.350926 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Jan 17 12:13:03.355220 kernel: vmw_pvscsi: enabling reqCallThreshold
Jan 17 12:13:03.355255 kernel: vmw_pvscsi: driver-based request coalescing enabled
Jan 17 12:13:03.355264 kernel: vmw_pvscsi: using MSI-X
Jan 17 12:13:03.356512 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Jan 17 12:13:03.357491 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Jan 17 12:13:03.360570 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Jan 17 12:13:03.360591 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Jan 17 12:13:03.379281 kernel: libata version 3.00 loaded.
Jan 17 12:13:03.379297 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Jan 17 12:13:03.379385 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 12:13:03.379394 kernel: ata_piix 0000:00:07.1: version 2.13
Jan 17 12:13:03.388867 kernel: scsi host1: ata_piix
Jan 17 12:13:03.388964 kernel: scsi host2: ata_piix
Jan 17 12:13:03.389025 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Jan 17 12:13:03.389034 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Jan 17 12:13:03.389041 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 12:13:03.389049 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Jan 17 12:13:03.389121 kernel: AES CTR mode by8 optimization enabled
Jan 17 12:13:03.391829 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:13:03.391916 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:13:03.392418 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:13:03.392525 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:13:03.392619 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:13:03.392755 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:13:03.396063 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:13:03.410899 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:13:03.413997 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:13:03.425401 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:13:03.552906 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Jan 17 12:13:03.558899 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Jan 17 12:13:03.571324 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Jan 17 12:13:03.577849 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 17 12:13:03.577932 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Jan 17 12:13:03.578000 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Jan 17 12:13:03.578099 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Jan 17 12:13:03.578159 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:13:03.578168 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 17 12:13:03.578226 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Jan 17 12:13:03.598922 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 12:13:03.598936 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 17 12:13:03.777906 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (481)
Jan 17 12:13:03.784435 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Jan 17 12:13:03.790119 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Jan 17 12:13:03.799109 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Jan 17 12:13:03.833903 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (489)
Jan 17 12:13:03.839255 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Jan 17 12:13:03.839423 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Jan 17 12:13:03.848990 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:13:04.376905 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:13:04.424925 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:13:05.447903 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:13:05.448750 disk-uuid[593]: The operation has completed successfully.
Jan 17 12:13:05.479174 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:13:05.479233 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:13:05.486047 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:13:05.489760 sh[610]: Success
Jan 17 12:13:05.497923 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 12:13:05.533875 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:13:05.542185 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:13:05.544112 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:13:05.560909 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85
Jan 17 12:13:05.560944 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:13:05.560952 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:13:05.560960 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:13:05.562329 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:13:05.568904 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 12:13:05.569513 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:13:05.579025 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Jan 17 12:13:05.580343 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:13:05.596907 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:13:05.596943 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:13:05.596951 kernel: BTRFS info (device sda6): using free space tree
Jan 17 12:13:05.601900 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 12:13:05.608577 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:13:05.610900 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:13:05.618143 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:13:05.622326 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:13:05.642789 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jan 17 12:13:05.651100 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:13:05.698462 ignition[672]: Ignition 2.19.0
Jan 17 12:13:05.698469 ignition[672]: Stage: fetch-offline
Jan 17 12:13:05.698492 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:13:05.698498 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 17 12:13:05.698568 ignition[672]: parsed url from cmdline: ""
Jan 17 12:13:05.698569 ignition[672]: no config URL provided
Jan 17 12:13:05.698573 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:13:05.698578 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:13:05.698941 ignition[672]: config successfully fetched
Jan 17 12:13:05.698959 ignition[672]: parsing config with SHA512: fb9cf2678ec862f5f690a519694ba27fd6941f98e785d0bf41cf610e27dfa6053d1754d2cbb0f36b76d9926dc6edd41fe94fdd6cba5ff460f7195db7593ff065
Jan 17 12:13:05.702793 unknown[672]: fetched base config from "system"
Jan 17 12:13:05.702944 unknown[672]: fetched user config from "vmware"
Jan 17 12:13:05.703377 ignition[672]: fetch-offline: fetch-offline passed
Jan 17 12:13:05.703554 ignition[672]: Ignition finished successfully
Jan 17 12:13:05.704251 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:13:05.720234 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:13:05.725985 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:13:05.738873 systemd-networkd[806]: lo: Link UP
Jan 17 12:13:05.738880 systemd-networkd[806]: lo: Gained carrier
Jan 17 12:13:05.739737 systemd-networkd[806]: Enumeration completed
Jan 17 12:13:05.739868 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:13:05.740083 systemd-networkd[806]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Jan 17 12:13:05.740132 systemd[1]: Reached target network.target - Network.
Jan 17 12:13:05.740358 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 17 12:13:05.743408 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Jan 17 12:13:05.743546 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Jan 17 12:13:05.743777 systemd-networkd[806]: ens192: Link UP
Jan 17 12:13:05.743782 systemd-networkd[806]: ens192: Gained carrier
Jan 17 12:13:05.745743 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:13:05.753037 ignition[808]: Ignition 2.19.0
Jan 17 12:13:05.753044 ignition[808]: Stage: kargs
Jan 17 12:13:05.753185 ignition[808]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:13:05.753192 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 17 12:13:05.753780 ignition[808]: kargs: kargs passed
Jan 17 12:13:05.753806 ignition[808]: Ignition finished successfully
Jan 17 12:13:05.754883 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:13:05.759004 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:13:05.765658 ignition[815]: Ignition 2.19.0
Jan 17 12:13:05.765668 ignition[815]: Stage: disks
Jan 17 12:13:05.765860 ignition[815]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:13:05.765867 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 17 12:13:05.767178 ignition[815]: disks: disks passed
Jan 17 12:13:05.767206 ignition[815]: Ignition finished successfully
Jan 17 12:13:05.767936 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:13:05.768294 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:13:05.768525 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:13:05.768738 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:13:05.768946 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:13:05.769153 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:13:05.771967 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:13:05.784639 systemd-fsck[824]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 17 12:13:05.786616 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:13:05.795985 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:13:05.834189 systemd-resolved[271]: Detected conflict on linux IN A 139.178.70.104
Jan 17 12:13:05.834199 systemd-resolved[271]: Hostname conflict, changing published hostname from 'linux' to 'linux2'.
Jan 17 12:13:05.861906 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 12:13:05.862201 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:13:05.862654 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:13:05.873991 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:13:05.875288 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:13:05.875585 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 12:13:05.875614 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:13:05.875628 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:13:05.879825 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:13:05.881597 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:13:05.881978 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (832)
Jan 17 12:13:05.886018 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:13:05.886042 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:13:05.886051 kernel: BTRFS info (device sda6): using free space tree
Jan 17 12:13:05.889941 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 12:13:05.890646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:13:05.909195 initrd-setup-root[856]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:13:05.912325 initrd-setup-root[863]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:13:05.914759 initrd-setup-root[870]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:13:05.916935 initrd-setup-root[877]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:13:05.969552 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:13:05.975987 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:13:05.978409 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:13:05.981896 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:13:05.994136 ignition[945]: INFO : Ignition 2.19.0
Jan 17 12:13:05.994136 ignition[945]: INFO : Stage: mount
Jan 17 12:13:05.994136 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:13:05.994136 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 17 12:13:05.995387 ignition[945]: INFO : mount: mount passed
Jan 17 12:13:05.995387 ignition[945]: INFO : Ignition finished successfully
Jan 17 12:13:05.996077 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 12:13:06.000023 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:13:06.000408 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:13:06.558108 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:13:06.563077 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:13:06.569905 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (956)
Jan 17 12:13:06.573106 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:13:06.573159 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:13:06.573167 kernel: BTRFS info (device sda6): using free space tree
Jan 17 12:13:06.578910 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 12:13:06.580190 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:13:06.595510 ignition[973]: INFO : Ignition 2.19.0
Jan 17 12:13:06.595786 ignition[973]: INFO : Stage: files
Jan 17 12:13:06.595994 ignition[973]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:13:06.596119 ignition[973]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 17 12:13:06.596855 ignition[973]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 12:13:06.605784 ignition[973]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 12:13:06.605948 ignition[973]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 12:13:06.647244 ignition[973]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 12:13:06.647592 ignition[973]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 12:13:06.648036 unknown[973]: wrote ssh authorized keys file for user: core
Jan 17 12:13:06.648311 ignition[973]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 12:13:06.675639 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 12:13:06.675639 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 12:13:06.675639 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:13:06.675639 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 17 12:13:06.724876 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:13:06.818982 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 17 12:13:07.150085 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 17 12:13:07.335316 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:13:07.335316 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jan 17 12:13:07.335316 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jan 17 12:13:07.335316 ignition[973]: INFO : files: op(d): [started] processing unit "containerd.service"
Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jan 17 12:13:07.340250 ignition[973]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 17 12:13:07.341807 ignition[973]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 17 12:13:07.341807 ignition[973]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jan 17 12:13:07.341807 ignition[973]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jan 17 12:13:07.433980 systemd-networkd[806]: ens192: Gained IPv6LL
Jan 17 12:13:07.986383 ignition[973]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 17 12:13:07.988897 ignition[973]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 17 12:13:07.988897 ignition[973]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 17 12:13:07.988897 ignition[973]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 12:13:07.988897 ignition[973]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 12:13:07.988897 ignition[973]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:13:07.988897 ignition[973]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:13:07.988897 ignition[973]: INFO : files: files passed
Jan 17 12:13:07.988897 ignition[973]: INFO : Ignition finished successfully
Jan 17 12:13:07.990694 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 12:13:07.994990 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 12:13:07.996233 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 12:13:08.008492 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:13:08.008492 initrd-setup-root-after-ignition[1002]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:13:08.009589 initrd-setup-root-after-ignition[1006]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:13:08.010384 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:13:08.010857 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 12:13:08.014002 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 12:13:08.014479 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 12:13:08.014528 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 12:13:08.029013 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 12:13:08.029078 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 12:13:08.029478 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 12:13:08.029598 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 12:13:08.029798 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 12:13:08.030242 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 12:13:08.040118 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:13:08.044990 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 12:13:08.050446 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:13:08.050616 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:13:08.050838 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 12:13:08.051221 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 12:13:08.051289 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:13:08.051558 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 12:13:08.051805 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 12:13:08.051992 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 12:13:08.052178 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:13:08.052384 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 12:13:08.052597 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 12:13:08.052788 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:13:08.053005 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 12:13:08.053227 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 12:13:08.053416 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 12:13:08.053589 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 12:13:08.053648 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:13:08.053930 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:13:08.054156 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:13:08.054352 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 12:13:08.054395 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:13:08.054563 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 12:13:08.054624 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:13:08.054882 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 12:13:08.054951 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:13:08.055196 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 12:13:08.055339 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 12:13:08.058911 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:13:08.059080 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 12:13:08.059307 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 12:13:08.059482 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 12:13:08.059531 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:13:08.059676 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 12:13:08.059722 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:13:08.059879 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 12:13:08.059956 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:13:08.060202 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 12:13:08.060261 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 12:13:08.069130 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 12:13:08.069360 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 12:13:08.069606 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:13:08.072019 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 12:13:08.072262 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 12:13:08.072489 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:13:08.072826 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 12:13:08.073059 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:13:08.076011 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 12:13:08.076185 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 12:13:08.077279 ignition[1027]: INFO : Ignition 2.19.0
Jan 17 12:13:08.077279 ignition[1027]: INFO : Stage: umount
Jan 17 12:13:08.077279 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:13:08.077279 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 17 12:13:08.077279 ignition[1027]: INFO : umount: umount passed
Jan 17 12:13:08.077279 ignition[1027]: INFO : Ignition finished successfully
Jan 17 12:13:08.082218 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 12:13:08.082270 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 12:13:08.082647 systemd[1]: Stopped target network.target - Network.
Jan 17 12:13:08.083269 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 12:13:08.083302 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 12:13:08.083418 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 12:13:08.083440 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 12:13:08.083549 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 12:13:08.083570 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 12:13:08.083675 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 12:13:08.083695 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 12:13:08.083874 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 12:13:08.085171 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 12:13:08.092014 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 12:13:08.092095 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 12:13:08.092989 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 12:13:08.093019 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:13:08.093541 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 12:13:08.093596 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 12:13:08.093896 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 12:13:08.093926 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:13:08.100075 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 12:13:08.100288 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 12:13:08.100437 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:13:08.100699 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jan 17 12:13:08.100852 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jan 17 12:13:08.101098 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 12:13:08.101119 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:13:08.101329 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 12:13:08.101350 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:13:08.101717 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:13:08.106847 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 12:13:08.106916 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 12:13:08.111183 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 12:13:08.111264 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:13:08.111715 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 12:13:08.111740 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:13:08.111953 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 12:13:08.111969 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:13:08.112125 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 12:13:08.112147 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:13:08.112427 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 12:13:08.112448 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:13:08.112730 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:13:08.112751 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:13:08.116982 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 12:13:08.117104 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 12:13:08.117135 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:13:08.117267 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:13:08.117288 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:13:08.117996 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 12:13:08.119883 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 12:13:08.119949 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 12:13:08.480057 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 12:13:08.480132 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 12:13:08.480449 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 12:13:08.480565 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 12:13:08.480591 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 12:13:08.484975 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 12:13:08.510999 systemd[1]: Switching root.
Jan 17 12:13:08.532066 systemd-journald[214]: Journal stopped
Jan 17 12:13:10.789255 systemd-journald[214]: Received SIGTERM from PID 1 (systemd).
Jan 17 12:13:10.789283 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 12:13:10.789292 kernel: SELinux: policy capability open_perms=1
Jan 17 12:13:10.789298 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 12:13:10.789303 kernel: SELinux: policy capability always_check_network=0
Jan 17 12:13:10.789308 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 12:13:10.789316 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 12:13:10.789322 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 12:13:10.789328 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 12:13:10.789333 kernel: audit: type=1403 audit(1737115989.421:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 12:13:10.789340 systemd[1]: Successfully loaded SELinux policy in 32.465ms.
Jan 17 12:13:10.789347 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.786ms.
Jan 17 12:13:10.789354 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:13:10.789366 systemd[1]: Detected virtualization vmware.
Jan 17 12:13:10.789375 systemd[1]: Detected architecture x86-64.
Jan 17 12:13:10.789382 systemd[1]: Detected first boot.
Jan 17 12:13:10.789389 systemd[1]: Initializing machine ID from random generator.
Jan 17 12:13:10.789403 zram_generator::config[1089]: No configuration found.
Jan 17 12:13:10.789415 systemd[1]: Populated /etc with preset unit settings.
Jan 17 12:13:10.789423 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 17 12:13:10.789430 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Jan 17 12:13:10.789436 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 12:13:10.789443 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 17 12:13:10.789449 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 12:13:10.789457 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 12:13:10.789465 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 12:13:10.789472 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 12:13:10.789479 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 12:13:10.789485 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 12:13:10.789495 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 12:13:10.789504 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 12:13:10.789513 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:13:10.789520 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:13:10.789526 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 12:13:10.789534 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 12:13:10.789541 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 12:13:10.789547 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:13:10.789554 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 12:13:10.789561 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:13:10.789569 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 12:13:10.789576 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:13:10.789586 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:13:10.789593 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:13:10.789600 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:13:10.789607 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 12:13:10.789615 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 12:13:10.789622 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:13:10.789631 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:13:10.789639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:13:10.789646 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:13:10.789653 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:13:10.789660 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 12:13:10.789668 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 12:13:10.789675 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 12:13:10.789682 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 12:13:10.789690 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:13:10.789698 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 12:13:10.789705 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 12:13:10.789712 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 12:13:10.789719 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 12:13:10.789727 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Jan 17 12:13:10.789734 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:13:10.789741 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 12:13:10.789748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:13:10.789755 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:13:10.789763 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:13:10.789770 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 12:13:10.789777 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:13:10.789784 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 12:13:10.789792 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 17 12:13:10.789801 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 17 12:13:10.789809 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:13:10.789816 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:13:10.789823 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 12:13:10.789832 kernel: fuse: init (API version 7.39)
Jan 17 12:13:10.789843 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 12:13:10.789871 systemd-journald[1202]: Collecting audit messages is disabled.
Jan 17 12:13:10.789899 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:13:10.789908 systemd-journald[1202]: Journal started
Jan 17 12:13:10.789925 systemd-journald[1202]: Runtime Journal (/run/log/journal/1a408dd1194148bf82979b3c9dc1b0cd) is 4.8M, max 38.6M, 33.8M free.
Jan 17 12:13:10.790314 jq[1167]: true
Jan 17 12:13:10.795477 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:13:10.795515 kernel: loop: module loaded
Jan 17 12:13:10.800245 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:13:10.800560 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 12:13:10.800974 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 12:13:10.801225 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 12:13:10.801532 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 12:13:10.801680 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 12:13:10.802969 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 12:13:10.803231 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 12:13:10.811075 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:13:10.811360 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 12:13:10.811445 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 12:13:10.811675 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:13:10.811751 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:13:10.811983 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:13:10.812058 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:13:10.812283 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 12:13:10.812356 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 12:13:10.812573 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:13:10.812647 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:13:10.814145 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:13:10.817450 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 12:13:10.817815 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 12:13:10.822750 jq[1220]: true
Jan 17 12:13:10.843071 kernel: ACPI: bus type drm_connector registered
Jan 17 12:13:10.838252 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:13:10.841451 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:13:10.847168 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 12:13:10.854026 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 12:13:10.858964 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 12:13:10.859121 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 12:13:10.866264 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 12:13:10.883032 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 12:13:10.883202 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:13:10.889300 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 12:13:10.889944 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:13:10.900169 systemd-journald[1202]: Time spent on flushing to /var/log/journal/1a408dd1194148bf82979b3c9dc1b0cd is 44.067ms for 1819 entries.
Jan 17 12:13:10.900169 systemd-journald[1202]: System Journal (/var/log/journal/1a408dd1194148bf82979b3c9dc1b0cd) is 8.0M, max 584.8M, 576.8M free.
Jan 17 12:13:10.958977 systemd-journald[1202]: Received client request to flush runtime journal.
Jan 17 12:13:10.905777 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:13:10.909098 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:13:10.910897 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 12:13:10.911932 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 12:13:10.946814 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 12:13:10.947033 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 12:13:10.960177 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 12:13:10.969497 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:13:10.973418 ignition[1229]: Ignition 2.19.0
Jan 17 12:13:10.975010 ignition[1229]: deleting config from guestinfo properties
Jan 17 12:13:10.975255 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Jan 17 12:13:10.975270 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Jan 17 12:13:10.978899 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:13:10.985128 ignition[1229]: Successfully deleted config
Jan 17 12:13:10.988077 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 12:13:10.990133 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Jan 17 12:13:10.990470 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:13:10.999044 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 12:13:11.002548 udevadm[1273]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 12:13:11.034454 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 12:13:11.044047 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:13:11.053561 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
Jan 17 12:13:11.053573 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
Jan 17 12:13:11.056571 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:13:11.450047 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 12:13:11.458038 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:13:11.474291 systemd-udevd[1287]: Using default interface naming scheme 'v255'.
Jan 17 12:13:11.505322 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:13:11.513044 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:13:11.526054 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 12:13:11.544247 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 17 12:13:11.555125 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 12:13:11.614655 systemd-networkd[1293]: lo: Link UP
Jan 17 12:13:11.614661 systemd-networkd[1293]: lo: Gained carrier
Jan 17 12:13:11.615525 systemd-networkd[1293]: Enumeration completed
Jan 17 12:13:11.615619 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:13:11.615773 systemd-networkd[1293]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Jan 17 12:13:11.618578 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Jan 17 12:13:11.618735 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Jan 17 12:13:11.620416 systemd-networkd[1293]: ens192: Link UP
Jan 17 12:13:11.620513 systemd-networkd[1293]: ens192: Gained carrier
Jan 17 12:13:11.621678 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 12:13:11.636922 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 17 12:13:11.643924 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1288)
Jan 17 12:13:11.668920 kernel: ACPI: button: Power Button [PWRF]
Jan 17 12:13:11.695931 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Jan 17 12:13:11.723903 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Jan 17 12:13:11.729092 kernel: Guest personality initialized and is active
Jan 17 12:13:11.731088 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jan 17 12:13:11.731152 kernel: Initialized host personality
Jan 17 12:13:11.739919 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Jan 17 12:13:11.744065 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Jan 17 12:13:11.758739 (udev-worker)[1298]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Jan 17 12:13:11.763929 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 12:13:11.778107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:13:11.780442 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 12:13:11.791260 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 12:13:11.801595 lvm[1328]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 12:13:11.833927 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 12:13:11.834150 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:13:11.840098 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 12:13:11.851197 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:13:11.853717 lvm[1334]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 12:13:11.879068 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 12:13:11.879792 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:13:11.880124 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 12:13:11.880194 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:13:11.880330 systemd[1]: Reached target machines.target - Containers.
Jan 17 12:13:11.881216 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 12:13:11.885130 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 12:13:11.886247 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 12:13:11.886423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:13:11.889008 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 12:13:11.891322 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 12:13:11.893740 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 12:13:11.894329 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 12:13:11.912973 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 12:13:11.925906 kernel: loop0: detected capacity change from 0 to 2976
Jan 17 12:13:11.943990 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 12:13:11.944959 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 12:13:11.973915 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 12:13:12.005950 kernel: loop1: detected capacity change from 0 to 142488
Jan 17 12:13:12.049012 kernel: loop2: detected capacity change from 0 to 211296
Jan 17 12:13:12.180030 kernel: loop3: detected capacity change from 0 to 140768
Jan 17 12:13:12.219913 kernel: loop4: detected capacity change from 0 to 2976
Jan 17 12:13:12.230912 kernel: loop5: detected capacity change from 0 to 142488
Jan 17 12:13:12.246915 kernel: loop6: detected capacity change from 0 to 211296
Jan 17 12:13:12.352902 kernel: loop7: detected capacity change from 0 to 140768
Jan 17 12:13:12.519546 (sd-merge)[1357]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Jan 17 12:13:12.519914 (sd-merge)[1357]: Merged extensions into '/usr'.
Jan 17 12:13:12.529286 systemd[1]: Reloading requested from client PID 1343 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 12:13:12.529300 systemd[1]: Reloading...
Jan 17 12:13:12.566929 zram_generator::config[1388]: No configuration found.
Jan 17 12:13:12.630915 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 17 12:13:12.646055 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:13:12.680470 systemd[1]: Reloading finished in 150 ms.
Jan 17 12:13:12.693874 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 12:13:12.699039 systemd[1]: Starting ensure-sysext.service...
Jan 17 12:13:12.701133 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:13:12.703645 systemd[1]: Reloading requested from client PID 1446 ('systemctl') (unit ensure-sysext.service)...
Jan 17 12:13:12.703653 systemd[1]: Reloading...
Jan 17 12:13:12.717129 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 12:13:12.717556 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 12:13:12.718139 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 12:13:12.718348 systemd-tmpfiles[1447]: ACLs are not supported, ignoring.
Jan 17 12:13:12.718421 systemd-tmpfiles[1447]: ACLs are not supported, ignoring.
Jan 17 12:13:12.734756 systemd-tmpfiles[1447]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:13:12.734850 systemd-tmpfiles[1447]: Skipping /boot
Jan 17 12:13:12.738225 zram_generator::config[1472]: No configuration found.
Jan 17 12:13:12.739833 systemd-tmpfiles[1447]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:13:12.739923 systemd-tmpfiles[1447]: Skipping /boot
Jan 17 12:13:12.810804 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 17 12:13:12.825583 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:13:12.860417 systemd[1]: Reloading finished in 156 ms.
Jan 17 12:13:12.879421 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:13:12.883140 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:13:12.885787 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 12:13:12.891074 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 12:13:12.894098 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:13:12.896820 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 12:13:12.908451 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:13:12.915776 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:13:12.926302 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:13:12.932267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:13:12.932514 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:13:12.932871 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:13:12.935911 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 12:13:12.938703 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:13:12.939582 augenrules[1566]: No rules
Jan 17 12:13:12.939746 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:13:12.940352 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 12:13:12.940647 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:13:12.940727 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:13:12.947830 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 12:13:12.950232 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:13:12.950327 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:13:12.955160 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:13:12.960217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:13:12.962061 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:13:12.962775 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:13:12.963700 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:13:12.967021 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:13:12.967494 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:13:12.967594 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:13:12.973227 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:13:12.980055 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:13:12.984927 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:13:12.985111 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:13:12.985198 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:13:12.985663 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:13:12.985757 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:13:12.989955 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:13:12.990050 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:13:12.990948 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:13:12.991077 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:13:12.992195 systemd[1]: Finished ensure-sysext.service.
Jan 17 12:13:12.999303 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:13:12.999397 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:13:12.999606 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:13:12.999635 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:13:13.015075 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 17 12:13:13.037541 systemd-resolved[1545]: Positive Trust Anchors:
Jan 17 12:13:13.037548 systemd-resolved[1545]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:13:13.037571 systemd-resolved[1545]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:13:13.051897 systemd-resolved[1545]: Defaulting to hostname 'linux'.
Jan 17 12:13:13.052501 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 17 12:13:13.052781 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 12:13:13.053575 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:13:13.053709 systemd[1]: Reached target network.target - Network.
Jan 17 12:13:13.053796 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:13:13.065550 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 12:13:13.066003 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 12:13:13.077909 ldconfig[1340]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 12:13:13.080041 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 12:13:13.085013 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 12:13:13.090581 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 12:13:13.091136 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:13:13.091364 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 12:13:13.091511 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 12:13:13.091734 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 12:13:13.091911 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 12:13:13.092045 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 12:13:13.092181 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 12:13:13.092206 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:13:13.092309 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:13:13.097798 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 12:13:13.098941 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 12:13:13.099668 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 12:13:13.102437 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 12:13:13.102559 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:13:13.102659 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:13:13.102843 systemd[1]: System is tainted: cgroupsv1
Jan 17 12:13:13.102868 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:13:13.102883 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:13:13.104841 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 12:13:13.105968 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 12:13:13.112941 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 12:13:13.114795 jq[1614]: false
Jan 17 12:13:13.115014 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 12:13:13.115176 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 12:13:13.117665 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 12:13:13.119518 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 17 12:13:13.127058 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 12:13:13.138167 extend-filesystems[1615]: Found loop4
Jan 17 12:13:13.138167 extend-filesystems[1615]: Found loop5
Jan 17 12:13:13.138167 extend-filesystems[1615]: Found loop6
Jan 17 12:13:13.138167 extend-filesystems[1615]: Found loop7
Jan 17 12:13:13.138167 extend-filesystems[1615]: Found sda
Jan 17 12:13:13.138167 extend-filesystems[1615]: Found sda1
Jan 17 12:13:13.138167 extend-filesystems[1615]: Found sda2
Jan 17 12:13:13.138167 extend-filesystems[1615]: Found sda3
Jan 17 12:13:13.138167 extend-filesystems[1615]: Found usr
Jan 17 12:13:13.138167 extend-filesystems[1615]: Found sda4
Jan 17 12:13:13.138167 extend-filesystems[1615]: Found sda6
Jan 17 12:13:13.138167 extend-filesystems[1615]: Found sda7
Jan 17 12:13:13.138167 extend-filesystems[1615]: Found sda9
Jan 17 12:13:13.138167 extend-filesystems[1615]: Checking size of /dev/sda9
Jan 17 12:13:13.137038 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 12:13:13.136012 dbus-daemon[1612]: [system] SELinux support is enabled
Jan 17 12:13:13.146017 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 12:13:13.146340 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 12:13:13.148165 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 12:13:13.151961 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 12:13:13.156905 extend-filesystems[1615]: Old size kept for /dev/sda9
Jan 17 12:13:13.156905 extend-filesystems[1615]: Found sr0
Jan 17 12:13:13.166684 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Jan 17 12:13:13.167351 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 12:13:13.169137 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 12:13:13.169258 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 12:13:13.169395 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 12:13:13.169505 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 12:13:13.170290 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 12:13:13.170405 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 12:13:13.172998 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 12:13:13.173122 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 12:13:13.176575 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 12:13:13.176607 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 12:13:13.176780 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 12:13:13.176794 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 12:14:26.026633 jq[1638]: true
Jan 17 12:14:26.043897 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1294)
Jan 17 12:14:26.025014 systemd-resolved[1545]: Clock change detected. Flushing caches.
Jan 17 12:14:26.043957 update_engine[1636]: I20250117 12:14:26.027256 1636 main.cc:92] Flatcar Update Engine starting
Jan 17 12:14:26.043957 update_engine[1636]: I20250117 12:14:26.028040 1636 update_check_scheduler.cc:74] Next update check in 3m41s
Jan 17 12:14:26.025102 systemd-timesyncd[1598]: Contacted time server 71.123.46.185:123 (0.flatcar.pool.ntp.org).
Jan 17 12:14:26.025135 systemd-timesyncd[1598]: Initial clock synchronization to Fri 2025-01-17 12:14:26.024989 UTC.
Jan 17 12:14:26.029475 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Jan 17 12:14:26.031087 systemd[1]: Started update-engine.service - Update Engine.
Jan 17 12:14:26.044549 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Jan 17 12:14:26.046410 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 12:14:26.048736 jq[1658]: true
Jan 17 12:14:26.055954 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 12:14:26.072355 (ntainerd)[1664]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 12:14:26.086899 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Jan 17 12:14:26.088761 tar[1646]: linux-amd64/helm
Jan 17 12:14:26.116046 bash[1686]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 12:14:26.117608 unknown[1655]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Jan 17 12:14:26.118099 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 12:14:26.118854 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 17 12:14:26.121599 unknown[1655]: Core dump limit set to -1
Jan 17 12:14:26.124358 systemd-logind[1630]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 17 12:14:26.125174 systemd-logind[1630]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 17 12:14:26.125317 systemd-logind[1630]: New seat seat0.
Jan 17 12:14:26.125952 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 12:14:26.139807 kernel: NET: Registered PF_VSOCK protocol family
Jan 17 12:14:26.148120 sshd_keygen[1634]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 12:14:26.176720 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 17 12:14:26.185505 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 17 12:14:26.197010 systemd[1]: issuegen.service: Deactivated successfully.
Jan 17 12:14:26.197233 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 17 12:14:26.209227 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 17 12:14:26.216919 locksmithd[1662]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 12:14:26.227129 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 17 12:14:26.232114 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 17 12:14:26.234954 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 17 12:14:26.235159 systemd[1]: Reached target getty.target - Login Prompts.
Jan 17 12:14:26.369325 systemd-networkd[1293]: ens192: Gained IPv6LL
Jan 17 12:14:26.617368 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 12:14:26.618157 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 12:14:26.626361 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
Jan 17 12:14:26.632455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:14:26.647414 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 12:14:26.688909 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 17 12:14:26.702377 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 17 12:14:26.702514 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Jan 17 12:14:26.702863 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 12:14:26.708439 containerd[1664]: time="2025-01-17T12:14:26.707202581Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 12:14:26.729189 containerd[1664]: time="2025-01-17T12:14:26.729157337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:14:26.730120 containerd[1664]: time="2025-01-17T12:14:26.730093524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:14:26.730120 containerd[1664]: time="2025-01-17T12:14:26.730113957Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 12:14:26.730187 containerd[1664]: time="2025-01-17T12:14:26.730124498Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 12:14:26.730232 containerd[1664]: time="2025-01-17T12:14:26.730218216Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 12:14:26.730249 containerd[1664]: time="2025-01-17T12:14:26.730234522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 12:14:26.730286 containerd[1664]: time="2025-01-17T12:14:26.730272732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:14:26.730305 containerd[1664]: time="2025-01-17T12:14:26.730284737Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:14:26.730424 containerd[1664]: time="2025-01-17T12:14:26.730409565Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:14:26.730440 containerd[1664]: time="2025-01-17T12:14:26.730422965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 12:14:26.730440 containerd[1664]: time="2025-01-17T12:14:26.730434485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:14:26.730468 containerd[1664]: time="2025-01-17T12:14:26.730440505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 12:14:26.730492 containerd[1664]: time="2025-01-17T12:14:26.730481541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:14:26.732297 containerd[1664]: time="2025-01-17T12:14:26.731023523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:14:26.732297 containerd[1664]: time="2025-01-17T12:14:26.731107452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:14:26.732297 containerd[1664]: time="2025-01-17T12:14:26.731117630Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 12:14:26.732297 containerd[1664]: time="2025-01-17T12:14:26.731165087Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 12:14:26.732297 containerd[1664]: time="2025-01-17T12:14:26.731193286Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 12:14:26.756812 containerd[1664]: time="2025-01-17T12:14:26.755889695Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 12:14:26.756812 containerd[1664]: time="2025-01-17T12:14:26.755934952Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 12:14:26.756812 containerd[1664]: time="2025-01-17T12:14:26.755945952Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 12:14:26.756812 containerd[1664]: time="2025-01-17T12:14:26.755955456Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 12:14:26.756812 containerd[1664]: time="2025-01-17T12:14:26.755970046Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 12:14:26.756812 containerd[1664]: time="2025-01-17T12:14:26.756075857Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 12:14:26.756812 containerd[1664]: time="2025-01-17T12:14:26.756249195Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 12:14:26.756812 containerd[1664]: time="2025-01-17T12:14:26.756305918Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 12:14:26.756812 containerd[1664]: time="2025-01-17T12:14:26.756315543Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 12:14:26.756812 containerd[1664]: time="2025-01-17T12:14:26.756323365Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 17 12:14:26.756812 containerd[1664]: time="2025-01-17T12:14:26.756331256Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 12:14:26.756812 containerd[1664]: time="2025-01-17T12:14:26.756339961Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 17 12:14:26.756812 containerd[1664]: time="2025-01-17T12:14:26.756346769Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 12:14:26.756812 containerd[1664]: time="2025-01-17T12:14:26.756358135Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 12:14:26.757056 containerd[1664]: time="2025-01-17T12:14:26.756367294Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 12:14:26.757056 containerd[1664]: time="2025-01-17T12:14:26.756374378Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 12:14:26.757056 containerd[1664]: time="2025-01-17T12:14:26.756381533Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 12:14:26.757056 containerd[1664]: time="2025-01-17T12:14:26.756397978Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 12:14:26.757056 containerd[1664]: time="2025-01-17T12:14:26.756410244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757056 containerd[1664]: time="2025-01-17T12:14:26.756419763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757056 containerd[1664]: time="2025-01-17T12:14:26.756427543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757056 containerd[1664]: time="2025-01-17T12:14:26.756438827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757056 containerd[1664]: time="2025-01-17T12:14:26.756446076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757056 containerd[1664]: time="2025-01-17T12:14:26.756452999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757056 containerd[1664]: time="2025-01-17T12:14:26.756459902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757056 containerd[1664]: time="2025-01-17T12:14:26.756468150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757056 containerd[1664]: time="2025-01-17T12:14:26.756475135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757056 containerd[1664]: time="2025-01-17T12:14:26.756484132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757234 containerd[1664]: time="2025-01-17T12:14:26.756490535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757234 containerd[1664]: time="2025-01-17T12:14:26.756496953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757234 containerd[1664]: time="2025-01-17T12:14:26.756504154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757234 containerd[1664]: time="2025-01-17T12:14:26.756512889Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 17 12:14:26.757234 containerd[1664]: time="2025-01-17T12:14:26.756524605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757234 containerd[1664]: time="2025-01-17T12:14:26.756531165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757234 containerd[1664]: time="2025-01-17T12:14:26.756537118Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 17 12:14:26.757234 containerd[1664]: time="2025-01-17T12:14:26.756561075Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 17 12:14:26.757234 containerd[1664]: time="2025-01-17T12:14:26.756571863Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 17 12:14:26.757234 containerd[1664]: time="2025-01-17T12:14:26.756577949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 17 12:14:26.757234 containerd[1664]: time="2025-01-17T12:14:26.756587897Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 17 12:14:26.757234 containerd[1664]: time="2025-01-17T12:14:26.756593908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.757234 containerd[1664]: time="2025-01-17T12:14:26.756602647Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 17 12:14:26.757234 containerd[1664]: time="2025-01-17T12:14:26.756608615Z" level=info msg="NRI interface is disabled by configuration."
Jan 17 12:14:26.757414 containerd[1664]: time="2025-01-17T12:14:26.756614819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 17 12:14:26.758015 containerd[1664]: time="2025-01-17T12:14:26.756770968Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 17 12:14:26.758104 containerd[1664]: time="2025-01-17T12:14:26.758021982Z" level=info msg="Connect containerd service"
Jan 17 12:14:26.758104 containerd[1664]: time="2025-01-17T12:14:26.758051156Z" level=info msg="using legacy CRI server"
Jan 17 12:14:26.758104 containerd[1664]: time="2025-01-17T12:14:26.758056612Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 17 12:14:26.758146 containerd[1664]: time="2025-01-17T12:14:26.758110123Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 17 12:14:26.759321 containerd[1664]: time="2025-01-17T12:14:26.758574965Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 12:14:26.759321 containerd[1664]: time="2025-01-17T12:14:26.759010998Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 17 12:14:26.759321 containerd[1664]: time="2025-01-17T12:14:26.759041543Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 17 12:14:26.759321 containerd[1664]: time="2025-01-17T12:14:26.759114857Z" level=info msg="Start subscribing containerd event"
Jan 17 12:14:26.759321 containerd[1664]: time="2025-01-17T12:14:26.759242135Z" level=info msg="Start recovering state"
Jan 17 12:14:26.759321 containerd[1664]: time="2025-01-17T12:14:26.759284222Z" level=info msg="Start event monitor"
Jan 17 12:14:26.759321 containerd[1664]: time="2025-01-17T12:14:26.759296898Z" level=info msg="Start snapshots syncer"
Jan 17 12:14:26.759321 containerd[1664]: time="2025-01-17T12:14:26.759302170Z" level=info msg="Start cni network conf syncer for default"
Jan 17 12:14:26.759321 containerd[1664]: time="2025-01-17T12:14:26.759306618Z" level=info msg="Start streaming server"
Jan 17 12:14:26.760043 containerd[1664]: time="2025-01-17T12:14:26.759341625Z" level=info msg="containerd successfully booted in 0.054405s"
Jan 17 12:14:26.759423 systemd[1]: Started containerd.service - containerd container runtime.
Jan 17 12:14:26.806336 tar[1646]: linux-amd64/LICENSE
Jan 17 12:14:26.806566 tar[1646]: linux-amd64/README.md
Jan 17 12:14:26.817158 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 17 12:14:28.625928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:14:28.626354 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:14:28.626669 (kubelet)[1820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:14:28.627003 systemd[1]: Startup finished in 7.841s (kernel) + 6.392s (userspace) = 14.233s. Jan 17 12:14:28.767433 login[1717]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 12:14:28.767620 login[1718]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 12:14:28.775814 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:14:28.783055 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:14:28.785417 systemd-logind[1630]: New session 1 of user core. Jan 17 12:14:28.790222 systemd-logind[1630]: New session 2 of user core. Jan 17 12:14:28.795319 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:14:28.801049 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:14:28.804732 (systemd)[1829]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:14:28.868088 systemd[1829]: Queued start job for default target default.target. Jan 17 12:14:28.868713 systemd[1829]: Created slice app.slice - User Application Slice. Jan 17 12:14:28.868781 systemd[1829]: Reached target paths.target - Paths. Jan 17 12:14:28.868842 systemd[1829]: Reached target timers.target - Timers. Jan 17 12:14:28.872881 systemd[1829]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:14:28.877033 systemd[1829]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:14:28.877084 systemd[1829]: Reached target sockets.target - Sockets. 
Jan 17 12:14:28.877094 systemd[1829]: Reached target basic.target - Basic System. Jan 17 12:14:28.877117 systemd[1829]: Reached target default.target - Main User Target. Jan 17 12:14:28.877133 systemd[1829]: Startup finished in 68ms. Jan 17 12:14:28.877202 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:14:28.879292 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:14:28.880331 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:14:29.382618 kubelet[1820]: E0117 12:14:29.382564 1820 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:14:29.384223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:14:29.384360 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:14:39.589903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:14:39.601195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:14:39.842938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:14:39.845623 (kubelet)[1882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:14:39.940569 kubelet[1882]: E0117 12:14:39.940534 1882 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:14:39.944902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:14:39.944995 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:14:50.089890 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:14:50.097908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:14:50.427518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:14:50.429205 (kubelet)[1903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:14:50.462178 kubelet[1903]: E0117 12:14:50.462142 1903 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:14:50.465901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:14:50.466036 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:15:00.589897 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 12:15:00.599895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 12:15:00.928347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:15:00.929431 (kubelet)[1922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:15:00.956301 kubelet[1922]: E0117 12:15:00.956259 1922 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:15:00.957611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:15:00.957798 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:15:06.246744 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:15:06.253986 systemd[1]: Started sshd@0-139.178.70.104:22-147.75.109.163:39146.service - OpenSSH per-connection server daemon (147.75.109.163:39146). Jan 17 12:15:06.309927 sshd[1933]: Accepted publickey for core from 147.75.109.163 port 39146 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:15:06.310695 sshd[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:06.313318 systemd-logind[1630]: New session 3 of user core. Jan 17 12:15:06.323136 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:15:06.376868 systemd[1]: Started sshd@1-139.178.70.104:22-147.75.109.163:39160.service - OpenSSH per-connection server daemon (147.75.109.163:39160). 
Jan 17 12:15:06.401058 sshd[1938]: Accepted publickey for core from 147.75.109.163 port 39160 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:15:06.402865 sshd[1938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:06.406128 systemd-logind[1630]: New session 4 of user core. Jan 17 12:15:06.412953 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:15:06.463681 sshd[1938]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:06.468967 systemd[1]: Started sshd@2-139.178.70.104:22-147.75.109.163:39166.service - OpenSSH per-connection server daemon (147.75.109.163:39166). Jan 17 12:15:06.471142 systemd[1]: sshd@1-139.178.70.104:22-147.75.109.163:39160.service: Deactivated successfully. Jan 17 12:15:06.472057 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:15:06.473231 systemd-logind[1630]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:15:06.473916 systemd-logind[1630]: Removed session 4. Jan 17 12:15:06.495612 sshd[1943]: Accepted publickey for core from 147.75.109.163 port 39166 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:15:06.496425 sshd[1943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:06.499181 systemd-logind[1630]: New session 5 of user core. Jan 17 12:15:06.505987 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:15:06.552583 sshd[1943]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:06.558959 systemd[1]: Started sshd@3-139.178.70.104:22-147.75.109.163:39176.service - OpenSSH per-connection server daemon (147.75.109.163:39176). Jan 17 12:15:06.559403 systemd[1]: sshd@2-139.178.70.104:22-147.75.109.163:39166.service: Deactivated successfully. Jan 17 12:15:06.560565 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:15:06.561359 systemd-logind[1630]: Session 5 logged out. Waiting for processes to exit. 
Jan 17 12:15:06.563814 systemd-logind[1630]: Removed session 5. Jan 17 12:15:06.584128 sshd[1951]: Accepted publickey for core from 147.75.109.163 port 39176 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:15:06.584779 sshd[1951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:06.587692 systemd-logind[1630]: New session 6 of user core. Jan 17 12:15:06.593062 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:15:06.643921 sshd[1951]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:06.654206 systemd[1]: Started sshd@4-139.178.70.104:22-147.75.109.163:39184.service - OpenSSH per-connection server daemon (147.75.109.163:39184). Jan 17 12:15:06.654960 systemd[1]: sshd@3-139.178.70.104:22-147.75.109.163:39176.service: Deactivated successfully. Jan 17 12:15:06.656705 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:15:06.657687 systemd-logind[1630]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:15:06.658992 systemd-logind[1630]: Removed session 6. Jan 17 12:15:06.682074 sshd[1959]: Accepted publickey for core from 147.75.109.163 port 39184 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:15:06.682947 sshd[1959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:06.685609 systemd-logind[1630]: New session 7 of user core. Jan 17 12:15:06.694929 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 17 12:15:06.800212 sudo[1966]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:15:06.800383 sudo[1966]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:15:06.815057 sudo[1966]: pam_unix(sudo:session): session closed for user root Jan 17 12:15:06.817536 sshd[1959]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:06.828976 systemd[1]: Started sshd@5-139.178.70.104:22-147.75.109.163:39196.service - OpenSSH per-connection server daemon (147.75.109.163:39196). Jan 17 12:15:06.829216 systemd[1]: sshd@4-139.178.70.104:22-147.75.109.163:39184.service: Deactivated successfully. Jan 17 12:15:06.831676 systemd-logind[1630]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:15:06.832132 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:15:06.833414 systemd-logind[1630]: Removed session 7. Jan 17 12:15:06.854556 sshd[1968]: Accepted publickey for core from 147.75.109.163 port 39196 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:15:06.855281 sshd[1968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:06.857555 systemd-logind[1630]: New session 8 of user core. Jan 17 12:15:06.865929 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 17 12:15:06.913229 sudo[1976]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:15:06.913594 sudo[1976]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:15:06.915470 sudo[1976]: pam_unix(sudo:session): session closed for user root Jan 17 12:15:06.918323 sudo[1975]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:15:06.918474 sudo[1975]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:15:06.930966 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:15:06.932066 auditctl[1979]: No rules Jan 17 12:15:06.932264 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:15:06.932386 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:15:06.934384 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:15:06.960952 augenrules[1998]: No rules Jan 17 12:15:06.961588 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:15:06.962317 sudo[1975]: pam_unix(sudo:session): session closed for user root Jan 17 12:15:06.963351 sshd[1968]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:06.967950 systemd[1]: Started sshd@6-139.178.70.104:22-147.75.109.163:39210.service - OpenSSH per-connection server daemon (147.75.109.163:39210). Jan 17 12:15:06.968836 systemd[1]: sshd@5-139.178.70.104:22-147.75.109.163:39196.service: Deactivated successfully. Jan 17 12:15:06.971482 systemd-logind[1630]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:15:06.971937 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:15:06.972941 systemd-logind[1630]: Removed session 8. 
Jan 17 12:15:06.993467 sshd[2004]: Accepted publickey for core from 147.75.109.163 port 39210 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:15:06.994163 sshd[2004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:06.996526 systemd-logind[1630]: New session 9 of user core. Jan 17 12:15:06.999922 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:15:07.048935 sudo[2011]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:15:07.049107 sudo[2011]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:15:07.365942 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:15:07.366050 (dockerd)[2026]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:15:07.633108 dockerd[2026]: time="2025-01-17T12:15:07.632841590Z" level=info msg="Starting up" Jan 17 12:15:07.694262 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1739791209-merged.mount: Deactivated successfully. Jan 17 12:15:07.715215 dockerd[2026]: time="2025-01-17T12:15:07.715191208Z" level=info msg="Loading containers: start." Jan 17 12:15:07.776804 kernel: Initializing XFRM netlink socket Jan 17 12:15:07.821632 systemd-networkd[1293]: docker0: Link UP Jan 17 12:15:07.834641 dockerd[2026]: time="2025-01-17T12:15:07.834613952Z" level=info msg="Loading containers: done." 
Jan 17 12:15:07.843117 dockerd[2026]: time="2025-01-17T12:15:07.843088967Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:15:07.843208 dockerd[2026]: time="2025-01-17T12:15:07.843163989Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:15:07.843229 dockerd[2026]: time="2025-01-17T12:15:07.843222430Z" level=info msg="Daemon has completed initialization" Jan 17 12:15:07.861243 dockerd[2026]: time="2025-01-17T12:15:07.860771761Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:15:07.862036 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:15:08.716757 containerd[1664]: time="2025-01-17T12:15:08.716727024Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:15:09.227224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3388330144.mount: Deactivated successfully. 
Jan 17 12:15:10.239706 containerd[1664]: time="2025-01-17T12:15:10.239670994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:10.242333 containerd[1664]: time="2025-01-17T12:15:10.241026107Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:10.242333 containerd[1664]: time="2025-01-17T12:15:10.241057816Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140730" Jan 17 12:15:10.243971 containerd[1664]: time="2025-01-17T12:15:10.243725222Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 1.526961692s" Jan 17 12:15:10.243971 containerd[1664]: time="2025-01-17T12:15:10.243746082Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 17 12:15:10.244161 containerd[1664]: time="2025-01-17T12:15:10.244149120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:10.256131 containerd[1664]: time="2025-01-17T12:15:10.256110561Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:15:11.089944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 17 12:15:11.095131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:15:11.167678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:15:11.170526 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:15:11.245808 kubelet[2245]: E0117 12:15:11.244813 2245 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:15:11.246404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:15:11.246521 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:15:11.398049 update_engine[1636]: I20250117 12:15:11.397818 1636 update_attempter.cc:509] Updating boot flags... 
Jan 17 12:15:11.433811 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2265) Jan 17 12:15:11.524804 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2264) Jan 17 12:15:11.623806 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2264) Jan 17 12:15:11.794024 containerd[1664]: time="2025-01-17T12:15:11.793856914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:11.801703 containerd[1664]: time="2025-01-17T12:15:11.801680734Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216641" Jan 17 12:15:11.807640 containerd[1664]: time="2025-01-17T12:15:11.807624878Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:11.815151 containerd[1664]: time="2025-01-17T12:15:11.815125642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:11.815894 containerd[1664]: time="2025-01-17T12:15:11.815871982Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 1.559614473s" Jan 17 12:15:11.815967 containerd[1664]: time="2025-01-17T12:15:11.815952248Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 17 12:15:11.832529 containerd[1664]: time="2025-01-17T12:15:11.832451018Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:15:12.884848 containerd[1664]: time="2025-01-17T12:15:12.884800819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:12.891803 containerd[1664]: time="2025-01-17T12:15:12.891766408Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332841" Jan 17 12:15:12.899692 containerd[1664]: time="2025-01-17T12:15:12.899661041Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:12.910220 containerd[1664]: time="2025-01-17T12:15:12.910175195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:12.913832 containerd[1664]: time="2025-01-17T12:15:12.912499384Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.079965553s" Jan 17 12:15:12.913832 containerd[1664]: time="2025-01-17T12:15:12.912535973Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 17 12:15:12.930458 
containerd[1664]: time="2025-01-17T12:15:12.930426691Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:15:14.251620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount833438785.mount: Deactivated successfully. Jan 17 12:15:14.925204 containerd[1664]: time="2025-01-17T12:15:14.925164335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:14.929156 containerd[1664]: time="2025-01-17T12:15:14.929104129Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 17 12:15:14.933533 containerd[1664]: time="2025-01-17T12:15:14.933501322Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:14.942489 containerd[1664]: time="2025-01-17T12:15:14.942452472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:14.943040 containerd[1664]: time="2025-01-17T12:15:14.942743745Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 2.012287808s" Jan 17 12:15:14.943040 containerd[1664]: time="2025-01-17T12:15:14.942764653Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:15:14.958550 containerd[1664]: time="2025-01-17T12:15:14.958342391Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:15:15.578900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2172696619.mount: Deactivated successfully. Jan 17 12:15:16.540836 containerd[1664]: time="2025-01-17T12:15:16.540768385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:16.547634 containerd[1664]: time="2025-01-17T12:15:16.547587553Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 17 12:15:16.554653 containerd[1664]: time="2025-01-17T12:15:16.554619369Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:16.561711 containerd[1664]: time="2025-01-17T12:15:16.561655261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:16.563249 containerd[1664]: time="2025-01-17T12:15:16.563020847Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.604653545s" Jan 17 12:15:16.563249 containerd[1664]: time="2025-01-17T12:15:16.563055080Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:15:16.583679 containerd[1664]: time="2025-01-17T12:15:16.583650025Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:15:17.113988 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2242486750.mount: Deactivated successfully. Jan 17 12:15:17.153607 containerd[1664]: time="2025-01-17T12:15:17.152995998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:17.161447 containerd[1664]: time="2025-01-17T12:15:17.161421239Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 17 12:15:17.169739 containerd[1664]: time="2025-01-17T12:15:17.169704986Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:17.178507 containerd[1664]: time="2025-01-17T12:15:17.178482513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:17.179140 containerd[1664]: time="2025-01-17T12:15:17.179120489Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 595.445662ms" Jan 17 12:15:17.179216 containerd[1664]: time="2025-01-17T12:15:17.179203185Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:15:17.201442 containerd[1664]: time="2025-01-17T12:15:17.201422373Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:15:17.929413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1585895796.mount: Deactivated successfully. 
Jan 17 12:15:21.210891 containerd[1664]: time="2025-01-17T12:15:21.210858713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:21.218601 containerd[1664]: time="2025-01-17T12:15:21.218522721Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 17 12:15:21.230704 containerd[1664]: time="2025-01-17T12:15:21.230667521Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:21.244401 containerd[1664]: time="2025-01-17T12:15:21.244376698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:21.245803 containerd[1664]: time="2025-01-17T12:15:21.245284997Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.04371438s" Jan 17 12:15:21.245803 containerd[1664]: time="2025-01-17T12:15:21.245307427Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 17 12:15:21.339901 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 12:15:21.348984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:15:22.258900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:15:22.261003 (kubelet)[2428]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:15:22.556839 kubelet[2428]: E0117 12:15:22.556730 2428 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:15:22.558436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:15:22.558559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:15:27.886378 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:15:27.890924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:15:27.904890 systemd[1]: Reloading requested from client PID 2492 ('systemctl') (unit session-9.scope)... Jan 17 12:15:27.905041 systemd[1]: Reloading... Jan 17 12:15:27.957807 zram_generator::config[2528]: No configuration found. Jan 17 12:15:28.021368 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 17 12:15:28.036907 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:15:28.077807 systemd[1]: Reloading finished in 172 ms. Jan 17 12:15:28.132478 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:15:28.132616 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:15:28.132846 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:15:28.145976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:15:28.375296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:15:28.377176 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:15:28.417695 kubelet[2606]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:15:28.417695 kubelet[2606]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:15:28.417695 kubelet[2606]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 12:15:28.417957 kubelet[2606]: I0117 12:15:28.417685 2606 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:15:28.729863 kubelet[2606]: I0117 12:15:28.729805 2606 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:15:28.729863 kubelet[2606]: I0117 12:15:28.729826 2606 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:15:28.730069 kubelet[2606]: I0117 12:15:28.729959 2606 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:15:28.898228 kubelet[2606]: I0117 12:15:28.898005 2606 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:15:28.903327 kubelet[2606]: E0117 12:15:28.903315 2606 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:28.950925 kubelet[2606]: I0117 12:15:28.950909 2606 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:15:28.972404 kubelet[2606]: I0117 12:15:28.972242 2606 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:15:28.983141 kubelet[2606]: I0117 12:15:28.983026 2606 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:15:28.992842 kubelet[2606]: I0117 12:15:28.992734 2606 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:15:28.992842 kubelet[2606]: I0117 12:15:28.992755 2606 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:15:28.992920 kubelet[2606]: 
I0117 12:15:28.992877 2606 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:15:28.992970 kubelet[2606]: I0117 12:15:28.992962 2606 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:15:28.993002 kubelet[2606]: I0117 12:15:28.992981 2606 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:15:28.993380 kubelet[2606]: W0117 12:15:28.993328 2606 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:28.993380 kubelet[2606]: E0117 12:15:28.993366 2606 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:29.002236 kubelet[2606]: I0117 12:15:29.002044 2606 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:15:29.002236 kubelet[2606]: I0117 12:15:29.002074 2606 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:15:29.007350 kubelet[2606]: W0117 12:15:29.007139 2606 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:29.007350 kubelet[2606]: E0117 12:15:29.007168 2606 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:29.007690 kubelet[2606]: I0117 12:15:29.007477 2606 
kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:15:29.023439 kubelet[2606]: I0117 12:15:29.023329 2606 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:15:29.031739 kubelet[2606]: W0117 12:15:29.031728 2606 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:15:29.032335 kubelet[2606]: I0117 12:15:29.032212 2606 server.go:1256] "Started kubelet" Jan 17 12:15:29.032335 kubelet[2606]: I0117 12:15:29.032290 2606 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:15:29.032996 kubelet[2606]: I0117 12:15:29.032980 2606 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:15:29.054506 kubelet[2606]: I0117 12:15:29.054485 2606 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:15:29.057854 kubelet[2606]: I0117 12:15:29.057842 2606 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:15:29.057893 kubelet[2606]: I0117 12:15:29.055701 2606 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:15:29.059035 kubelet[2606]: I0117 12:15:29.058302 2606 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:15:29.059035 kubelet[2606]: I0117 12:15:29.058535 2606 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:15:29.059035 kubelet[2606]: I0117 12:15:29.058590 2606 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:15:29.059035 kubelet[2606]: W0117 12:15:29.058874 2606 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection 
refused Jan 17 12:15:29.059035 kubelet[2606]: E0117 12:15:29.058897 2606 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:29.059035 kubelet[2606]: E0117 12:15:29.058935 2606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="200ms" Jan 17 12:15:29.064572 kubelet[2606]: I0117 12:15:29.064564 2606 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:15:29.064697 kubelet[2606]: I0117 12:15:29.064689 2606 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:15:29.070860 kubelet[2606]: E0117 12:15:29.070851 2606 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.104:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b79ea6c90c270 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 12:15:29.032192624 +0000 UTC m=+0.652648537,LastTimestamp:2025-01-17 12:15:29.032192624 +0000 UTC m=+0.652648537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 12:15:29.076891 
kubelet[2606]: I0117 12:15:29.076878 2606 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:15:29.095026 kubelet[2606]: E0117 12:15:29.095014 2606 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:15:29.108773 kubelet[2606]: I0117 12:15:29.108760 2606 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:15:29.108877 kubelet[2606]: I0117 12:15:29.108868 2606 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:15:29.108932 kubelet[2606]: I0117 12:15:29.108926 2606 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:15:29.110040 kubelet[2606]: I0117 12:15:29.110029 2606 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:15:29.110862 kubelet[2606]: I0117 12:15:29.110852 2606 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:15:29.116396 kubelet[2606]: I0117 12:15:29.116385 2606 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:15:29.119812 kubelet[2606]: I0117 12:15:29.116439 2606 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:15:29.119812 kubelet[2606]: E0117 12:15:29.116477 2606 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:15:29.120438 kubelet[2606]: I0117 12:15:29.120427 2606 policy_none.go:49] "None policy: Start" Jan 17 12:15:29.121123 kubelet[2606]: W0117 12:15:29.120874 2606 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:29.121123 kubelet[2606]: E0117 12:15:29.120896 2606 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:29.121253 kubelet[2606]: I0117 12:15:29.121244 2606 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:15:29.121366 kubelet[2606]: I0117 12:15:29.121359 2606 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:15:29.132299 kubelet[2606]: I0117 12:15:29.132269 2606 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:15:29.132513 kubelet[2606]: I0117 12:15:29.132483 2606 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:15:29.134855 kubelet[2606]: E0117 12:15:29.134837 2606 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 12:15:29.160407 kubelet[2606]: I0117 12:15:29.160034 2606 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:15:29.160407 kubelet[2606]: E0117 12:15:29.160347 2606 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Jan 17 12:15:29.216931 kubelet[2606]: I0117 12:15:29.216904 2606 topology_manager.go:215] "Topology Admit Handler" podUID="9fcc719183ffc688afb443e8c7b01ea4" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:15:29.218353 kubelet[2606]: I0117 12:15:29.218332 2606 topology_manager.go:215] "Topology Admit Handler" podUID="dd466de870bdf0e573d7965dbd759acf" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:15:29.220294 kubelet[2606]: I0117 12:15:29.220250 2606 topology_manager.go:215] "Topology Admit Handler" 
podUID="605dd245551545e29d4e79fb03fd341e" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:15:29.259721 kubelet[2606]: E0117 12:15:29.259641 2606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="400ms" Jan 17 12:15:29.261674 kubelet[2606]: I0117 12:15:29.261598 2606 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:15:29.261674 kubelet[2606]: I0117 12:15:29.261630 2606 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fcc719183ffc688afb443e8c7b01ea4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9fcc719183ffc688afb443e8c7b01ea4\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:15:29.261674 kubelet[2606]: I0117 12:15:29.261663 2606 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:15:29.261769 kubelet[2606]: I0117 12:15:29.261684 2606 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:15:29.261769 kubelet[2606]: I0117 12:15:29.261702 2606 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:15:29.261876 kubelet[2606]: I0117 12:15:29.261848 2606 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:15:29.261931 kubelet[2606]: I0117 12:15:29.261918 2606 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605dd245551545e29d4e79fb03fd341e-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"605dd245551545e29d4e79fb03fd341e\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:15:29.261992 kubelet[2606]: I0117 12:15:29.261953 2606 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fcc719183ffc688afb443e8c7b01ea4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9fcc719183ffc688afb443e8c7b01ea4\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:15:29.262019 kubelet[2606]: I0117 12:15:29.262008 2606 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fcc719183ffc688afb443e8c7b01ea4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9fcc719183ffc688afb443e8c7b01ea4\") " 
pod="kube-system/kube-apiserver-localhost" Jan 17 12:15:29.362297 kubelet[2606]: I0117 12:15:29.362023 2606 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:15:29.362372 kubelet[2606]: E0117 12:15:29.362313 2606 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Jan 17 12:15:29.525548 containerd[1664]: time="2025-01-17T12:15:29.525457488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9fcc719183ffc688afb443e8c7b01ea4,Namespace:kube-system,Attempt:0,}" Jan 17 12:15:29.531524 containerd[1664]: time="2025-01-17T12:15:29.531470517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:605dd245551545e29d4e79fb03fd341e,Namespace:kube-system,Attempt:0,}" Jan 17 12:15:29.531673 containerd[1664]: time="2025-01-17T12:15:29.531517145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd466de870bdf0e573d7965dbd759acf,Namespace:kube-system,Attempt:0,}" Jan 17 12:15:29.660606 kubelet[2606]: E0117 12:15:29.660582 2606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="800ms" Jan 17 12:15:29.763672 kubelet[2606]: I0117 12:15:29.763653 2606 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:15:29.763899 kubelet[2606]: E0117 12:15:29.763889 2606 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Jan 17 12:15:30.064650 kubelet[2606]: W0117 12:15:30.064597 2606 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:30.064650 kubelet[2606]: E0117 12:15:30.064636 2606 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:30.213716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2822953101.mount: Deactivated successfully. Jan 17 12:15:30.249560 containerd[1664]: time="2025-01-17T12:15:30.249523810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:15:30.261572 kubelet[2606]: W0117 12:15:30.261537 2606 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:30.261572 kubelet[2606]: E0117 12:15:30.261575 2606 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:30.268059 containerd[1664]: time="2025-01-17T12:15:30.268027085Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:15:30.272356 containerd[1664]: time="2025-01-17T12:15:30.272335763Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:15:30.277125 containerd[1664]: time="2025-01-17T12:15:30.277103107Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:15:30.285423 containerd[1664]: time="2025-01-17T12:15:30.285397410Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:15:30.289505 containerd[1664]: time="2025-01-17T12:15:30.289476585Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:15:30.292473 containerd[1664]: time="2025-01-17T12:15:30.292438692Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:15:30.294893 containerd[1664]: time="2025-01-17T12:15:30.294879952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:15:30.296187 containerd[1664]: time="2025-01-17T12:15:30.295258571Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 769.756233ms" Jan 17 12:15:30.296187 containerd[1664]: time="2025-01-17T12:15:30.296156180Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 764.480323ms" Jan 17 12:15:30.298253 containerd[1664]: time="2025-01-17T12:15:30.298240002Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 766.636587ms" Jan 17 12:15:30.323075 kubelet[2606]: W0117 12:15:30.322996 2606 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:30.323075 kubelet[2606]: E0117 12:15:30.323037 2606 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:30.389897 kubelet[2606]: W0117 12:15:30.389862 2606 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:30.389897 kubelet[2606]: E0117 12:15:30.389900 2606 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 
12:15:30.461021 kubelet[2606]: E0117 12:15:30.461000 2606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="1.6s" Jan 17 12:15:30.565409 kubelet[2606]: I0117 12:15:30.565272 2606 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:15:30.565556 kubelet[2606]: E0117 12:15:30.565547 2606 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Jan 17 12:15:30.592605 containerd[1664]: time="2025-01-17T12:15:30.592566600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:15:30.602343 containerd[1664]: time="2025-01-17T12:15:30.596067427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:15:30.602343 containerd[1664]: time="2025-01-17T12:15:30.596092852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:15:30.602343 containerd[1664]: time="2025-01-17T12:15:30.596099661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:30.602343 containerd[1664]: time="2025-01-17T12:15:30.596152976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:30.603594 containerd[1664]: time="2025-01-17T12:15:30.603350592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:15:30.603594 containerd[1664]: time="2025-01-17T12:15:30.603386396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:30.603594 containerd[1664]: time="2025-01-17T12:15:30.603503719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:15:30.603594 containerd[1664]: time="2025-01-17T12:15:30.603538754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:15:30.603594 containerd[1664]: time="2025-01-17T12:15:30.603557428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:30.604111 containerd[1664]: time="2025-01-17T12:15:30.603608610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:30.604863 containerd[1664]: time="2025-01-17T12:15:30.604845623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:30.672803 containerd[1664]: time="2025-01-17T12:15:30.672759075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9fcc719183ffc688afb443e8c7b01ea4,Namespace:kube-system,Attempt:0,} returns sandbox id \"002a6e9b0927c8eace24740abb524db6171b0b225c1246c7f6950fe42d6d7178\"" Jan 17 12:15:30.674103 containerd[1664]: time="2025-01-17T12:15:30.674054893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd466de870bdf0e573d7965dbd759acf,Namespace:kube-system,Attempt:0,} returns sandbox id \"269061448dc327fbd87704a8bd59b3539cc5b211d528b7b1278900947b02a762\"" Jan 17 12:15:30.676090 containerd[1664]: time="2025-01-17T12:15:30.676070954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:605dd245551545e29d4e79fb03fd341e,Namespace:kube-system,Attempt:0,} returns sandbox id \"999bb5da1835e8dc66ea594560f2a4668a5b1100b3d539f9da977bbddb7e4c38\"" Jan 17 12:15:30.686419 containerd[1664]: time="2025-01-17T12:15:30.686344285Z" level=info msg="CreateContainer within sandbox \"269061448dc327fbd87704a8bd59b3539cc5b211d528b7b1278900947b02a762\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:15:30.686506 containerd[1664]: time="2025-01-17T12:15:30.686402753Z" level=info msg="CreateContainer within sandbox \"002a6e9b0927c8eace24740abb524db6171b0b225c1246c7f6950fe42d6d7178\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:15:30.686699 containerd[1664]: time="2025-01-17T12:15:30.686347744Z" level=info msg="CreateContainer within sandbox \"999bb5da1835e8dc66ea594560f2a4668a5b1100b3d539f9da977bbddb7e4c38\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:15:31.131245 kubelet[2606]: E0117 12:15:31.131222 2606 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed 
certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.104:6443: connect: connection refused Jan 17 12:15:31.589122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4024768084.mount: Deactivated successfully. Jan 17 12:15:31.593172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2693901969.mount: Deactivated successfully. Jan 17 12:15:31.656530 containerd[1664]: time="2025-01-17T12:15:31.656303008Z" level=info msg="CreateContainer within sandbox \"269061448dc327fbd87704a8bd59b3539cc5b211d528b7b1278900947b02a762\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e8ff882d707435bba7242f1889e76a0e80f9767899bfcba8038e42c639ac87f3\"" Jan 17 12:15:31.656971 containerd[1664]: time="2025-01-17T12:15:31.656855284Z" level=info msg="StartContainer for \"e8ff882d707435bba7242f1889e76a0e80f9767899bfcba8038e42c639ac87f3\"" Jan 17 12:15:31.680190 containerd[1664]: time="2025-01-17T12:15:31.679698115Z" level=info msg="CreateContainer within sandbox \"999bb5da1835e8dc66ea594560f2a4668a5b1100b3d539f9da977bbddb7e4c38\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"55052b4440983b2de0e87b4de2cc13f0a697ace78c2aba3c7426e1902b8bfadf\"" Jan 17 12:15:31.680432 containerd[1664]: time="2025-01-17T12:15:31.680414467Z" level=info msg="StartContainer for \"55052b4440983b2de0e87b4de2cc13f0a697ace78c2aba3c7426e1902b8bfadf\"" Jan 17 12:15:31.685753 containerd[1664]: time="2025-01-17T12:15:31.685571546Z" level=info msg="CreateContainer within sandbox \"002a6e9b0927c8eace24740abb524db6171b0b225c1246c7f6950fe42d6d7178\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d2cb8297384bfe07b5256b85f91bb18eb16bb5456dfa077198b3708384a3d61d\"" Jan 17 12:15:31.686720 containerd[1664]: time="2025-01-17T12:15:31.686678759Z" level=info msg="StartContainer for 
\"d2cb8297384bfe07b5256b85f91bb18eb16bb5456dfa077198b3708384a3d61d\"" Jan 17 12:15:31.724115 containerd[1664]: time="2025-01-17T12:15:31.722707423Z" level=info msg="StartContainer for \"e8ff882d707435bba7242f1889e76a0e80f9767899bfcba8038e42c639ac87f3\" returns successfully" Jan 17 12:15:31.747581 containerd[1664]: time="2025-01-17T12:15:31.747494831Z" level=info msg="StartContainer for \"55052b4440983b2de0e87b4de2cc13f0a697ace78c2aba3c7426e1902b8bfadf\" returns successfully" Jan 17 12:15:31.766231 containerd[1664]: time="2025-01-17T12:15:31.766005190Z" level=info msg="StartContainer for \"d2cb8297384bfe07b5256b85f91bb18eb16bb5456dfa077198b3708384a3d61d\" returns successfully" Jan 17 12:15:32.061612 kubelet[2606]: E0117 12:15:32.061592 2606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="3.2s" Jan 17 12:15:32.168085 kubelet[2606]: I0117 12:15:32.168064 2606 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:15:32.168398 kubelet[2606]: E0117 12:15:32.168287 2606 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Jan 17 12:15:33.536826 kubelet[2606]: E0117 12:15:33.536801 2606 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 17 12:15:33.881942 kubelet[2606]: E0117 12:15:33.881917 2606 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 17 12:15:34.010128 kubelet[2606]: I0117 12:15:34.009058 2606 apiserver.go:52] "Watching apiserver" Jan 17 12:15:34.059278 
kubelet[2606]: I0117 12:15:34.059145 2606 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:15:34.309521 kubelet[2606]: E0117 12:15:34.309434 2606 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 17 12:15:35.243368 kubelet[2606]: E0117 12:15:35.243343 2606 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 17 12:15:35.264049 kubelet[2606]: E0117 12:15:35.264023 2606 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 12:15:35.369463 kubelet[2606]: I0117 12:15:35.369444 2606 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:15:35.376141 kubelet[2606]: I0117 12:15:35.375648 2606 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:15:36.044195 systemd[1]: Reloading requested from client PID 2878 ('systemctl') (unit session-9.scope)... Jan 17 12:15:36.044425 systemd[1]: Reloading... Jan 17 12:15:36.103353 zram_generator::config[2922]: No configuration found. Jan 17 12:15:36.163464 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 17 12:15:36.178448 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:15:36.222456 systemd[1]: Reloading finished in 177 ms. Jan 17 12:15:36.241876 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 12:15:36.242236 kubelet[2606]: I0117 12:15:36.242034 2606 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:15:36.256366 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:15:36.256550 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:15:36.261133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:15:36.424912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:15:36.427845 (kubelet)[2993]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:15:36.526378 kubelet[2993]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:15:36.526378 kubelet[2993]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:15:36.526378 kubelet[2993]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 12:15:36.532260 kubelet[2993]: I0117 12:15:36.532208 2993 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:15:36.537474 kubelet[2993]: I0117 12:15:36.537430 2993 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:15:36.537474 kubelet[2993]: I0117 12:15:36.537446 2993 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:15:36.537741 kubelet[2993]: I0117 12:15:36.537734 2993 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:15:36.539749 kubelet[2993]: I0117 12:15:36.539728 2993 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:15:36.545314 kubelet[2993]: I0117 12:15:36.545291 2993 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:15:36.554473 kubelet[2993]: I0117 12:15:36.554442 2993 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:15:36.554758 kubelet[2993]: I0117 12:15:36.554747 2993 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:15:36.554878 kubelet[2993]: I0117 12:15:36.554865 2993 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:15:36.554941 kubelet[2993]: I0117 12:15:36.554885 2993 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:15:36.554941 kubelet[2993]: I0117 12:15:36.554892 2993 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:15:36.554941 kubelet[2993]: 
I0117 12:15:36.554918 2993 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:15:36.555000 kubelet[2993]: I0117 12:15:36.554974 2993 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:15:36.555000 kubelet[2993]: I0117 12:15:36.554983 2993 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:15:36.555000 kubelet[2993]: I0117 12:15:36.554998 2993 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:15:36.555425 kubelet[2993]: I0117 12:15:36.555010 2993 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:15:36.555761 kubelet[2993]: I0117 12:15:36.555753 2993 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:15:36.555916 kubelet[2993]: I0117 12:15:36.555909 2993 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:15:36.556206 kubelet[2993]: I0117 12:15:36.556198 2993 server.go:1256] "Started kubelet" Jan 17 12:15:36.566978 kubelet[2993]: I0117 12:15:36.566964 2993 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:15:36.576626 kubelet[2993]: I0117 12:15:36.576610 2993 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:15:36.577616 kubelet[2993]: I0117 12:15:36.577264 2993 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:15:36.584610 kubelet[2993]: I0117 12:15:36.584586 2993 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:15:36.584715 kubelet[2993]: I0117 12:15:36.584705 2993 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:15:36.587886 kubelet[2993]: E0117 12:15:36.587872 2993 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:15:36.588801 kubelet[2993]: I0117 12:15:36.588773 2993 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:15:36.589309 kubelet[2993]: I0117 12:15:36.589053 2993 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:15:36.589309 kubelet[2993]: I0117 12:15:36.589149 2993 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:15:36.591433 kubelet[2993]: I0117 12:15:36.591421 2993 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:15:36.591990 kubelet[2993]: I0117 12:15:36.591847 2993 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:15:36.594778 kubelet[2993]: I0117 12:15:36.594671 2993 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:15:36.595806 kubelet[2993]: I0117 12:15:36.595189 2993 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:15:36.603408 kubelet[2993]: I0117 12:15:36.603391 2993 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:15:36.604369 kubelet[2993]: I0117 12:15:36.604346 2993 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:15:36.604409 kubelet[2993]: I0117 12:15:36.604377 2993 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:15:36.604432 kubelet[2993]: E0117 12:15:36.604421 2993 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:15:36.644761 kubelet[2993]: I0117 12:15:36.644741 2993 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:15:36.644918 kubelet[2993]: I0117 12:15:36.644912 2993 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:15:36.644970 kubelet[2993]: I0117 12:15:36.644966 2993 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:15:36.645116 kubelet[2993]: I0117 12:15:36.645110 2993 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:15:36.645161 kubelet[2993]: I0117 12:15:36.645157 2993 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:15:36.645203 kubelet[2993]: I0117 12:15:36.645197 2993 policy_none.go:49] "None policy: Start" Jan 17 12:15:36.645568 kubelet[2993]: I0117 12:15:36.645560 2993 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:15:36.645663 kubelet[2993]: I0117 12:15:36.645658 2993 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:15:36.645807 kubelet[2993]: I0117 12:15:36.645798 2993 state_mem.go:75] "Updated machine memory state" Jan 17 12:15:36.646581 kubelet[2993]: I0117 12:15:36.646574 2993 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:15:36.646783 kubelet[2993]: I0117 12:15:36.646777 2993 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:15:36.690672 kubelet[2993]: I0117 12:15:36.690533 2993 kubelet_node_status.go:73] "Attempting to register 
node" node="localhost" Jan 17 12:15:36.705658 kubelet[2993]: I0117 12:15:36.705028 2993 topology_manager.go:215] "Topology Admit Handler" podUID="dd466de870bdf0e573d7965dbd759acf" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:15:36.705658 kubelet[2993]: I0117 12:15:36.705103 2993 topology_manager.go:215] "Topology Admit Handler" podUID="605dd245551545e29d4e79fb03fd341e" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:15:36.705658 kubelet[2993]: I0117 12:15:36.705132 2993 topology_manager.go:215] "Topology Admit Handler" podUID="9fcc719183ffc688afb443e8c7b01ea4" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:15:36.707077 kubelet[2993]: I0117 12:15:36.707056 2993 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 17 12:15:36.707166 kubelet[2993]: I0117 12:15:36.707118 2993 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:15:36.889925 kubelet[2993]: I0117 12:15:36.889895 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605dd245551545e29d4e79fb03fd341e-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"605dd245551545e29d4e79fb03fd341e\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:15:36.889925 kubelet[2993]: I0117 12:15:36.889928 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fcc719183ffc688afb443e8c7b01ea4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9fcc719183ffc688afb443e8c7b01ea4\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:15:36.890071 kubelet[2993]: I0117 12:15:36.889944 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/9fcc719183ffc688afb443e8c7b01ea4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9fcc719183ffc688afb443e8c7b01ea4\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:15:36.890071 kubelet[2993]: I0117 12:15:36.889956 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:15:36.890071 kubelet[2993]: I0117 12:15:36.889968 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:15:36.890071 kubelet[2993]: I0117 12:15:36.889978 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:15:36.890071 kubelet[2993]: I0117 12:15:36.890011 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:15:36.890163 kubelet[2993]: I0117 12:15:36.890025 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fcc719183ffc688afb443e8c7b01ea4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9fcc719183ffc688afb443e8c7b01ea4\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:15:36.890163 kubelet[2993]: I0117 12:15:36.890036 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:15:37.555314 kubelet[2993]: I0117 12:15:37.555252 2993 apiserver.go:52] "Watching apiserver" Jan 17 12:15:37.589853 kubelet[2993]: I0117 12:15:37.589823 2993 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:15:37.676801 kubelet[2993]: E0117 12:15:37.676170 2993 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 12:15:37.701365 kubelet[2993]: I0117 12:15:37.700426 2993 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.697564753 podStartE2EDuration="1.697564753s" podCreationTimestamp="2025-01-17 12:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:15:37.693554841 +0000 UTC m=+1.206662866" watchObservedRunningTime="2025-01-17 12:15:37.697564753 +0000 UTC m=+1.210672774" Jan 17 12:15:37.712992 kubelet[2993]: I0117 12:15:37.712924 2993 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.71289765 podStartE2EDuration="1.71289765s" podCreationTimestamp="2025-01-17 12:15:36 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:15:37.708940036 +0000 UTC m=+1.222048065" watchObservedRunningTime="2025-01-17 12:15:37.71289765 +0000 UTC m=+1.226005672" Jan 17 12:15:40.571477 sudo[2011]: pam_unix(sudo:session): session closed for user root Jan 17 12:15:40.573378 sshd[2004]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:40.574917 systemd-logind[1630]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:15:40.575054 systemd[1]: sshd@6-139.178.70.104:22-147.75.109.163:39210.service: Deactivated successfully. Jan 17 12:15:40.576972 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:15:40.577751 systemd-logind[1630]: Removed session 9. Jan 17 12:15:41.344203 kubelet[2993]: I0117 12:15:41.344082 2993 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.344012624 podStartE2EDuration="5.344012624s" podCreationTimestamp="2025-01-17 12:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:15:37.713007062 +0000 UTC m=+1.226115090" watchObservedRunningTime="2025-01-17 12:15:41.344012624 +0000 UTC m=+4.857120653" Jan 17 12:15:50.692486 kubelet[2993]: I0117 12:15:50.692375 2993 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:15:50.699800 containerd[1664]: time="2025-01-17T12:15:50.699762767Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 17 12:15:50.700110 kubelet[2993]: I0117 12:15:50.699904 2993 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:15:51.532067 kubelet[2993]: I0117 12:15:51.531917 2993 topology_manager.go:215] "Topology Admit Handler" podUID="7bb697d9-0041-4d64-8efc-8b343cf51e60" podNamespace="kube-system" podName="kube-proxy-jj6nn" Jan 17 12:15:51.590113 kubelet[2993]: I0117 12:15:51.590065 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7bb697d9-0041-4d64-8efc-8b343cf51e60-kube-proxy\") pod \"kube-proxy-jj6nn\" (UID: \"7bb697d9-0041-4d64-8efc-8b343cf51e60\") " pod="kube-system/kube-proxy-jj6nn" Jan 17 12:15:51.590113 kubelet[2993]: I0117 12:15:51.590093 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bb697d9-0041-4d64-8efc-8b343cf51e60-xtables-lock\") pod \"kube-proxy-jj6nn\" (UID: \"7bb697d9-0041-4d64-8efc-8b343cf51e60\") " pod="kube-system/kube-proxy-jj6nn" Jan 17 12:15:51.590299 kubelet[2993]: I0117 12:15:51.590262 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgdcj\" (UniqueName: \"kubernetes.io/projected/7bb697d9-0041-4d64-8efc-8b343cf51e60-kube-api-access-fgdcj\") pod \"kube-proxy-jj6nn\" (UID: \"7bb697d9-0041-4d64-8efc-8b343cf51e60\") " pod="kube-system/kube-proxy-jj6nn" Jan 17 12:15:51.590299 kubelet[2993]: I0117 12:15:51.590280 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bb697d9-0041-4d64-8efc-8b343cf51e60-lib-modules\") pod \"kube-proxy-jj6nn\" (UID: \"7bb697d9-0041-4d64-8efc-8b343cf51e60\") " pod="kube-system/kube-proxy-jj6nn" Jan 17 12:15:51.621178 kubelet[2993]: I0117 12:15:51.619285 2993 topology_manager.go:215] "Topology 
Admit Handler" podUID="ada341c5-b680-4c1e-935d-ed92a2d1495f" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-f8f99" Jan 17 12:15:51.690432 kubelet[2993]: I0117 12:15:51.690407 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ada341c5-b680-4c1e-935d-ed92a2d1495f-var-lib-calico\") pod \"tigera-operator-c7ccbd65-f8f99\" (UID: \"ada341c5-b680-4c1e-935d-ed92a2d1495f\") " pod="tigera-operator/tigera-operator-c7ccbd65-f8f99" Jan 17 12:15:51.690557 kubelet[2993]: I0117 12:15:51.690550 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7s2m\" (UniqueName: \"kubernetes.io/projected/ada341c5-b680-4c1e-935d-ed92a2d1495f-kube-api-access-k7s2m\") pod \"tigera-operator-c7ccbd65-f8f99\" (UID: \"ada341c5-b680-4c1e-935d-ed92a2d1495f\") " pod="tigera-operator/tigera-operator-c7ccbd65-f8f99" Jan 17 12:15:51.862650 containerd[1664]: time="2025-01-17T12:15:51.862619856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jj6nn,Uid:7bb697d9-0041-4d64-8efc-8b343cf51e60,Namespace:kube-system,Attempt:0,}" Jan 17 12:15:51.921418 containerd[1664]: time="2025-01-17T12:15:51.921396166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-f8f99,Uid:ada341c5-b680-4c1e-935d-ed92a2d1495f,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:15:51.977449 containerd[1664]: time="2025-01-17T12:15:51.976883469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:15:51.977449 containerd[1664]: time="2025-01-17T12:15:51.976921050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:15:51.977449 containerd[1664]: time="2025-01-17T12:15:51.976934928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:51.977449 containerd[1664]: time="2025-01-17T12:15:51.977000897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:51.996498 containerd[1664]: time="2025-01-17T12:15:51.996451020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:15:51.996603 containerd[1664]: time="2025-01-17T12:15:51.996589259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:15:51.996650 containerd[1664]: time="2025-01-17T12:15:51.996639422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:51.996738 containerd[1664]: time="2025-01-17T12:15:51.996722791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:52.006396 containerd[1664]: time="2025-01-17T12:15:52.006361371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jj6nn,Uid:7bb697d9-0041-4d64-8efc-8b343cf51e60,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfb5f2bd6fd29763ec10e5e6d2361333f66ba7179f7ee942f1d41c43ea9eb9b7\"" Jan 17 12:15:52.009515 containerd[1664]: time="2025-01-17T12:15:52.009476812Z" level=info msg="CreateContainer within sandbox \"bfb5f2bd6fd29763ec10e5e6d2361333f66ba7179f7ee942f1d41c43ea9eb9b7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:15:52.024644 containerd[1664]: time="2025-01-17T12:15:52.024480160Z" level=info msg="CreateContainer within sandbox \"bfb5f2bd6fd29763ec10e5e6d2361333f66ba7179f7ee942f1d41c43ea9eb9b7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f924bb114d3013ad18adb781bf76984360a64472f6d46ec8a95364336af194ac\"" Jan 17 12:15:52.025376 containerd[1664]: time="2025-01-17T12:15:52.024960704Z" level=info msg="StartContainer for \"f924bb114d3013ad18adb781bf76984360a64472f6d46ec8a95364336af194ac\"" Jan 17 12:15:52.044890 containerd[1664]: time="2025-01-17T12:15:52.044864702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-f8f99,Uid:ada341c5-b680-4c1e-935d-ed92a2d1495f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2e7ad0c087863e26bd93c13b329f8c72e613368abd85ba66cc8aaeadedfe426d\"" Jan 17 12:15:52.046389 containerd[1664]: time="2025-01-17T12:15:52.045917357Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:15:52.064219 containerd[1664]: time="2025-01-17T12:15:52.064164327Z" level=info msg="StartContainer for \"f924bb114d3013ad18adb781bf76984360a64472f6d46ec8a95364336af194ac\" returns successfully" Jan 17 12:15:55.443736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4245846504.mount: Deactivated successfully. 
Jan 17 12:15:55.850607 containerd[1664]: time="2025-01-17T12:15:55.850570999Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:55.852896 containerd[1664]: time="2025-01-17T12:15:55.852868421Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764317" Jan 17 12:15:55.855738 containerd[1664]: time="2025-01-17T12:15:55.855716901Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:55.861346 containerd[1664]: time="2025-01-17T12:15:55.861324787Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:15:55.861950 containerd[1664]: time="2025-01-17T12:15:55.861641912Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.815709365s" Jan 17 12:15:55.861950 containerd[1664]: time="2025-01-17T12:15:55.861662382Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 17 12:15:55.862611 containerd[1664]: time="2025-01-17T12:15:55.862593853Z" level=info msg="CreateContainer within sandbox \"2e7ad0c087863e26bd93c13b329f8c72e613368abd85ba66cc8aaeadedfe426d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:15:55.887990 containerd[1664]: time="2025-01-17T12:15:55.887926264Z" level=info msg="CreateContainer within sandbox 
\"2e7ad0c087863e26bd93c13b329f8c72e613368abd85ba66cc8aaeadedfe426d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9065cacb2850f54e70667bdcf331269ab8e5d20d7dc47602d475b5a7b902565e\"" Jan 17 12:15:55.888345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175557096.mount: Deactivated successfully. Jan 17 12:15:55.889120 containerd[1664]: time="2025-01-17T12:15:55.888589870Z" level=info msg="StartContainer for \"9065cacb2850f54e70667bdcf331269ab8e5d20d7dc47602d475b5a7b902565e\"" Jan 17 12:15:55.945920 containerd[1664]: time="2025-01-17T12:15:55.945757911Z" level=info msg="StartContainer for \"9065cacb2850f54e70667bdcf331269ab8e5d20d7dc47602d475b5a7b902565e\" returns successfully" Jan 17 12:15:56.613938 kubelet[2993]: I0117 12:15:56.613748 2993 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jj6nn" podStartSLOduration=5.613719776 podStartE2EDuration="5.613719776s" podCreationTimestamp="2025-01-17 12:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:15:52.67655522 +0000 UTC m=+16.189663249" watchObservedRunningTime="2025-01-17 12:15:56.613719776 +0000 UTC m=+20.126827798" Jan 17 12:15:56.671672 kubelet[2993]: I0117 12:15:56.671592 2993 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-f8f99" podStartSLOduration=1.85525122 podStartE2EDuration="5.671568576s" podCreationTimestamp="2025-01-17 12:15:51 +0000 UTC" firstStartedPulling="2025-01-17 12:15:52.04554332 +0000 UTC m=+15.558651338" lastFinishedPulling="2025-01-17 12:15:55.861860675 +0000 UTC m=+19.374968694" observedRunningTime="2025-01-17 12:15:56.670951868 +0000 UTC m=+20.184059896" watchObservedRunningTime="2025-01-17 12:15:56.671568576 +0000 UTC m=+20.184676599" Jan 17 12:15:58.781759 kubelet[2993]: I0117 12:15:58.781734 2993 topology_manager.go:215] 
"Topology Admit Handler" podUID="da49f2e4-de84-4259-a305-4e98774f503b" podNamespace="calico-system" podName="calico-typha-7df6f999bf-h5pt9" Jan 17 12:15:58.831404 kubelet[2993]: I0117 12:15:58.831300 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da49f2e4-de84-4259-a305-4e98774f503b-tigera-ca-bundle\") pod \"calico-typha-7df6f999bf-h5pt9\" (UID: \"da49f2e4-de84-4259-a305-4e98774f503b\") " pod="calico-system/calico-typha-7df6f999bf-h5pt9" Jan 17 12:15:58.831404 kubelet[2993]: I0117 12:15:58.831326 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vm2c\" (UniqueName: \"kubernetes.io/projected/da49f2e4-de84-4259-a305-4e98774f503b-kube-api-access-4vm2c\") pod \"calico-typha-7df6f999bf-h5pt9\" (UID: \"da49f2e4-de84-4259-a305-4e98774f503b\") " pod="calico-system/calico-typha-7df6f999bf-h5pt9" Jan 17 12:15:58.831404 kubelet[2993]: I0117 12:15:58.831360 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/da49f2e4-de84-4259-a305-4e98774f503b-typha-certs\") pod \"calico-typha-7df6f999bf-h5pt9\" (UID: \"da49f2e4-de84-4259-a305-4e98774f503b\") " pod="calico-system/calico-typha-7df6f999bf-h5pt9" Jan 17 12:15:58.884935 kubelet[2993]: I0117 12:15:58.884911 2993 topology_manager.go:215] "Topology Admit Handler" podUID="af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0" podNamespace="calico-system" podName="calico-node-p4d85" Jan 17 12:15:58.932349 kubelet[2993]: I0117 12:15:58.932314 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0-flexvol-driver-host\") pod \"calico-node-p4d85\" (UID: \"af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0\") " pod="calico-system/calico-node-p4d85" 
Jan 17 12:15:58.933080 kubelet[2993]: I0117 12:15:58.932489 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmvpr\" (UniqueName: \"kubernetes.io/projected/af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0-kube-api-access-mmvpr\") pod \"calico-node-p4d85\" (UID: \"af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0\") " pod="calico-system/calico-node-p4d85" Jan 17 12:15:58.933080 kubelet[2993]: I0117 12:15:58.932522 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0-policysync\") pod \"calico-node-p4d85\" (UID: \"af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0\") " pod="calico-system/calico-node-p4d85" Jan 17 12:15:58.933395 kubelet[2993]: I0117 12:15:58.933387 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0-var-run-calico\") pod \"calico-node-p4d85\" (UID: \"af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0\") " pod="calico-system/calico-node-p4d85" Jan 17 12:15:58.933524 kubelet[2993]: I0117 12:15:58.933516 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0-cni-bin-dir\") pod \"calico-node-p4d85\" (UID: \"af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0\") " pod="calico-system/calico-node-p4d85" Jan 17 12:15:58.933675 kubelet[2993]: I0117 12:15:58.933667 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0-tigera-ca-bundle\") pod \"calico-node-p4d85\" (UID: \"af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0\") " pod="calico-system/calico-node-p4d85" Jan 17 12:15:58.933900 kubelet[2993]: I0117 12:15:58.933735 
2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0-cni-net-dir\") pod \"calico-node-p4d85\" (UID: \"af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0\") " pod="calico-system/calico-node-p4d85" Jan 17 12:15:58.936168 kubelet[2993]: I0117 12:15:58.936153 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0-cni-log-dir\") pod \"calico-node-p4d85\" (UID: \"af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0\") " pod="calico-system/calico-node-p4d85" Jan 17 12:15:58.936267 kubelet[2993]: I0117 12:15:58.936259 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0-xtables-lock\") pod \"calico-node-p4d85\" (UID: \"af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0\") " pod="calico-system/calico-node-p4d85" Jan 17 12:15:58.936316 kubelet[2993]: I0117 12:15:58.936311 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0-node-certs\") pod \"calico-node-p4d85\" (UID: \"af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0\") " pod="calico-system/calico-node-p4d85" Jan 17 12:15:58.936369 kubelet[2993]: I0117 12:15:58.936364 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0-lib-modules\") pod \"calico-node-p4d85\" (UID: \"af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0\") " pod="calico-system/calico-node-p4d85" Jan 17 12:15:58.936443 kubelet[2993]: I0117 12:15:58.936437 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0-var-lib-calico\") pod \"calico-node-p4d85\" (UID: \"af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0\") " pod="calico-system/calico-node-p4d85" Jan 17 12:15:59.056850 kubelet[2993]: E0117 12:15:59.056019 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.056850 kubelet[2993]: W0117 12:15:59.056480 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.056850 kubelet[2993]: E0117 12:15:59.056515 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.059177 kubelet[2993]: E0117 12:15:59.059159 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.059219 kubelet[2993]: W0117 12:15:59.059177 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.059219 kubelet[2993]: E0117 12:15:59.059197 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.068152 kubelet[2993]: I0117 12:15:59.068121 2993 topology_manager.go:215] "Topology Admit Handler" podUID="280d2533-efae-41ef-91aa-7697f939417d" podNamespace="calico-system" podName="csi-node-driver-5vq7j" Jan 17 12:15:59.068514 kubelet[2993]: E0117 12:15:59.068321 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vq7j" podUID="280d2533-efae-41ef-91aa-7697f939417d" Jan 17 12:15:59.086114 containerd[1664]: time="2025-01-17T12:15:59.086085442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7df6f999bf-h5pt9,Uid:da49f2e4-de84-4259-a305-4e98774f503b,Namespace:calico-system,Attempt:0,}" Jan 17 12:15:59.135405 kubelet[2993]: E0117 12:15:59.134898 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.135405 kubelet[2993]: W0117 12:15:59.134915 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.135405 kubelet[2993]: E0117 12:15:59.134931 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.136221 kubelet[2993]: E0117 12:15:59.136117 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.136221 kubelet[2993]: W0117 12:15:59.136126 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.136221 kubelet[2993]: E0117 12:15:59.136137 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.136495 kubelet[2993]: E0117 12:15:59.136232 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.136495 kubelet[2993]: W0117 12:15:59.136236 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.136495 kubelet[2993]: E0117 12:15:59.136243 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.136679 kubelet[2993]: E0117 12:15:59.136516 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.136679 kubelet[2993]: W0117 12:15:59.136524 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.136679 kubelet[2993]: E0117 12:15:59.136531 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.137628 kubelet[2993]: E0117 12:15:59.137253 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.137628 kubelet[2993]: W0117 12:15:59.137260 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.137628 kubelet[2993]: E0117 12:15:59.137269 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.137824 kubelet[2993]: E0117 12:15:59.137815 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.137824 kubelet[2993]: W0117 12:15:59.137822 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.137895 kubelet[2993]: E0117 12:15:59.137829 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.138805 kubelet[2993]: E0117 12:15:59.138745 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.138805 kubelet[2993]: W0117 12:15:59.138753 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.138805 kubelet[2993]: E0117 12:15:59.138762 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.138985 kubelet[2993]: E0117 12:15:59.138933 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.138985 kubelet[2993]: W0117 12:15:59.138938 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.138985 kubelet[2993]: E0117 12:15:59.138945 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.139592 kubelet[2993]: E0117 12:15:59.139579 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.139592 kubelet[2993]: W0117 12:15:59.139587 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.139730 kubelet[2993]: E0117 12:15:59.139597 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.140125 kubelet[2993]: E0117 12:15:59.140111 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.140125 kubelet[2993]: W0117 12:15:59.140120 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.140272 kubelet[2993]: E0117 12:15:59.140128 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.140736 kubelet[2993]: E0117 12:15:59.140654 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.140736 kubelet[2993]: W0117 12:15:59.140661 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.140736 kubelet[2993]: E0117 12:15:59.140673 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.141036 containerd[1664]: time="2025-01-17T12:15:59.140951489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:15:59.141180 kubelet[2993]: E0117 12:15:59.141031 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.141180 kubelet[2993]: W0117 12:15:59.141037 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.141180 kubelet[2993]: E0117 12:15:59.141044 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.141349 kubelet[2993]: E0117 12:15:59.141275 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.141349 kubelet[2993]: W0117 12:15:59.141281 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.141349 kubelet[2993]: E0117 12:15:59.141288 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.141511 containerd[1664]: time="2025-01-17T12:15:59.141001996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:15:59.141511 containerd[1664]: time="2025-01-17T12:15:59.141266406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:59.141650 kubelet[2993]: E0117 12:15:59.141401 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.141650 kubelet[2993]: W0117 12:15:59.141406 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.141650 kubelet[2993]: E0117 12:15:59.141413 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.141709 kubelet[2993]: E0117 12:15:59.141656 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.141709 kubelet[2993]: W0117 12:15:59.141661 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.141709 kubelet[2993]: E0117 12:15:59.141667 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.141766 kubelet[2993]: E0117 12:15:59.141755 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.141766 kubelet[2993]: W0117 12:15:59.141759 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.141766 kubelet[2993]: E0117 12:15:59.141765 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.142165 kubelet[2993]: E0117 12:15:59.141868 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.142165 kubelet[2993]: W0117 12:15:59.141873 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.142165 kubelet[2993]: E0117 12:15:59.141879 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.142165 kubelet[2993]: E0117 12:15:59.141967 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.142165 kubelet[2993]: W0117 12:15:59.141971 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.142165 kubelet[2993]: E0117 12:15:59.141976 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.142165 kubelet[2993]: E0117 12:15:59.142052 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.142165 kubelet[2993]: W0117 12:15:59.142058 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.142165 kubelet[2993]: E0117 12:15:59.142064 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.142165 kubelet[2993]: E0117 12:15:59.142156 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.142641 kubelet[2993]: W0117 12:15:59.142165 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.142641 kubelet[2993]: E0117 12:15:59.142173 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.142851 containerd[1664]: time="2025-01-17T12:15:59.142000457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:59.142881 kubelet[2993]: E0117 12:15:59.142673 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.142881 kubelet[2993]: W0117 12:15:59.142680 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.142881 kubelet[2993]: E0117 12:15:59.142690 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.142881 kubelet[2993]: I0117 12:15:59.142710 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/280d2533-efae-41ef-91aa-7697f939417d-socket-dir\") pod \"csi-node-driver-5vq7j\" (UID: \"280d2533-efae-41ef-91aa-7697f939417d\") " pod="calico-system/csi-node-driver-5vq7j" Jan 17 12:15:59.143337 kubelet[2993]: E0117 12:15:59.143090 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.143337 kubelet[2993]: W0117 12:15:59.143097 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.143337 kubelet[2993]: E0117 12:15:59.143120 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.143337 kubelet[2993]: I0117 12:15:59.143132 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5jrz\" (UniqueName: \"kubernetes.io/projected/280d2533-efae-41ef-91aa-7697f939417d-kube-api-access-v5jrz\") pod \"csi-node-driver-5vq7j\" (UID: \"280d2533-efae-41ef-91aa-7697f939417d\") " pod="calico-system/csi-node-driver-5vq7j" Jan 17 12:15:59.154902 kubelet[2993]: E0117 12:15:59.143451 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.154902 kubelet[2993]: W0117 12:15:59.143459 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.154902 kubelet[2993]: E0117 12:15:59.143468 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.154902 kubelet[2993]: I0117 12:15:59.143479 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/280d2533-efae-41ef-91aa-7697f939417d-varrun\") pod \"csi-node-driver-5vq7j\" (UID: \"280d2533-efae-41ef-91aa-7697f939417d\") " pod="calico-system/csi-node-driver-5vq7j" Jan 17 12:15:59.154902 kubelet[2993]: E0117 12:15:59.143913 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.154902 kubelet[2993]: W0117 12:15:59.143919 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.154902 kubelet[2993]: E0117 12:15:59.143929 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.154902 kubelet[2993]: I0117 12:15:59.143941 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/280d2533-efae-41ef-91aa-7697f939417d-kubelet-dir\") pod \"csi-node-driver-5vq7j\" (UID: \"280d2533-efae-41ef-91aa-7697f939417d\") " pod="calico-system/csi-node-driver-5vq7j" Jan 17 12:15:59.154902 kubelet[2993]: E0117 12:15:59.144047 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.155162 kubelet[2993]: W0117 12:15:59.144052 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.155162 kubelet[2993]: E0117 12:15:59.144059 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.155162 kubelet[2993]: I0117 12:15:59.144097 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/280d2533-efae-41ef-91aa-7697f939417d-registration-dir\") pod \"csi-node-driver-5vq7j\" (UID: \"280d2533-efae-41ef-91aa-7697f939417d\") " pod="calico-system/csi-node-driver-5vq7j" Jan 17 12:15:59.155162 kubelet[2993]: E0117 12:15:59.144212 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.155162 kubelet[2993]: W0117 12:15:59.144218 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.155162 kubelet[2993]: E0117 12:15:59.144272 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.155162 kubelet[2993]: E0117 12:15:59.144348 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.155162 kubelet[2993]: W0117 12:15:59.144353 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.155162 kubelet[2993]: E0117 12:15:59.144409 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.156631 kubelet[2993]: E0117 12:15:59.144462 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.156631 kubelet[2993]: W0117 12:15:59.144466 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.156631 kubelet[2993]: E0117 12:15:59.144484 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.156631 kubelet[2993]: E0117 12:15:59.144574 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.156631 kubelet[2993]: W0117 12:15:59.144579 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.156631 kubelet[2993]: E0117 12:15:59.144587 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.156631 kubelet[2993]: E0117 12:15:59.144779 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.156631 kubelet[2993]: W0117 12:15:59.144799 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.156631 kubelet[2993]: E0117 12:15:59.144809 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.156631 kubelet[2993]: E0117 12:15:59.144904 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.156827 kubelet[2993]: W0117 12:15:59.144909 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.156827 kubelet[2993]: E0117 12:15:59.144917 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.156827 kubelet[2993]: E0117 12:15:59.145009 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.156827 kubelet[2993]: W0117 12:15:59.145014 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.156827 kubelet[2993]: E0117 12:15:59.145029 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.156827 kubelet[2993]: E0117 12:15:59.145125 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.156827 kubelet[2993]: W0117 12:15:59.145129 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.156827 kubelet[2993]: E0117 12:15:59.145135 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.156827 kubelet[2993]: E0117 12:15:59.145229 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.156827 kubelet[2993]: W0117 12:15:59.145233 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.156974 kubelet[2993]: E0117 12:15:59.145238 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.156974 kubelet[2993]: E0117 12:15:59.145370 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.156974 kubelet[2993]: W0117 12:15:59.145374 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.156974 kubelet[2993]: E0117 12:15:59.145381 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.193176 containerd[1664]: time="2025-01-17T12:15:59.193148362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p4d85,Uid:af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0,Namespace:calico-system,Attempt:0,}" Jan 17 12:15:59.224170 containerd[1664]: time="2025-01-17T12:15:59.224057784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:15:59.224170 containerd[1664]: time="2025-01-17T12:15:59.224107223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:15:59.224170 containerd[1664]: time="2025-01-17T12:15:59.224125762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:59.225307 containerd[1664]: time="2025-01-17T12:15:59.225192723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:59.231188 containerd[1664]: time="2025-01-17T12:15:59.231064502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7df6f999bf-h5pt9,Uid:da49f2e4-de84-4259-a305-4e98774f503b,Namespace:calico-system,Attempt:0,} returns sandbox id \"9710df95517f4bfa7cd56d7b245a0bba4296f221f6b9ab1d5f24161b9ba6f701\"" Jan 17 12:15:59.241646 containerd[1664]: time="2025-01-17T12:15:59.241620700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:15:59.245193 kubelet[2993]: E0117 12:15:59.245156 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.245193 kubelet[2993]: W0117 12:15:59.245170 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.245193 kubelet[2993]: E0117 12:15:59.245185 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.247980 kubelet[2993]: E0117 12:15:59.247968 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.247980 kubelet[2993]: W0117 12:15:59.247978 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.248055 kubelet[2993]: E0117 12:15:59.247993 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.248164 kubelet[2993]: E0117 12:15:59.248154 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.248194 kubelet[2993]: W0117 12:15:59.248165 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.248194 kubelet[2993]: E0117 12:15:59.248177 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.248308 kubelet[2993]: E0117 12:15:59.248305 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.248328 kubelet[2993]: W0117 12:15:59.248310 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.248328 kubelet[2993]: E0117 12:15:59.248318 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.248541 kubelet[2993]: E0117 12:15:59.248532 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.248567 kubelet[2993]: W0117 12:15:59.248539 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.248567 kubelet[2993]: E0117 12:15:59.248556 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.248685 kubelet[2993]: E0117 12:15:59.248677 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.248685 kubelet[2993]: W0117 12:15:59.248684 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.248808 kubelet[2993]: E0117 12:15:59.248797 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.248932 kubelet[2993]: E0117 12:15:59.248920 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.248932 kubelet[2993]: W0117 12:15:59.248928 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.248987 kubelet[2993]: E0117 12:15:59.248937 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.249104 kubelet[2993]: E0117 12:15:59.249093 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.249104 kubelet[2993]: W0117 12:15:59.249102 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.249227 kubelet[2993]: E0117 12:15:59.249153 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.249346 kubelet[2993]: E0117 12:15:59.249333 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.249346 kubelet[2993]: W0117 12:15:59.249342 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.249483 kubelet[2993]: E0117 12:15:59.249443 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.249594 kubelet[2993]: E0117 12:15:59.249584 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.249594 kubelet[2993]: W0117 12:15:59.249593 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.249638 kubelet[2993]: E0117 12:15:59.249607 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.249750 kubelet[2993]: E0117 12:15:59.249741 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.249750 kubelet[2993]: W0117 12:15:59.249748 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.249844 kubelet[2993]: E0117 12:15:59.249832 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.250104 kubelet[2993]: E0117 12:15:59.250086 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.250104 kubelet[2993]: W0117 12:15:59.250100 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.250172 kubelet[2993]: E0117 12:15:59.250118 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.250281 kubelet[2993]: E0117 12:15:59.250268 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.250281 kubelet[2993]: W0117 12:15:59.250277 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.250379 kubelet[2993]: E0117 12:15:59.250334 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.250407 kubelet[2993]: E0117 12:15:59.250402 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.250425 kubelet[2993]: W0117 12:15:59.250407 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.250425 kubelet[2993]: E0117 12:15:59.250419 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.250530 kubelet[2993]: E0117 12:15:59.250517 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.250530 kubelet[2993]: W0117 12:15:59.250527 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.250635 kubelet[2993]: E0117 12:15:59.250537 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.250655 kubelet[2993]: E0117 12:15:59.250637 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.250655 kubelet[2993]: W0117 12:15:59.250642 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.250655 kubelet[2993]: E0117 12:15:59.250648 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.250974 kubelet[2993]: E0117 12:15:59.250964 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.250974 kubelet[2993]: W0117 12:15:59.250972 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.251633 kubelet[2993]: E0117 12:15:59.251118 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.251633 kubelet[2993]: W0117 12:15:59.251125 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.251633 kubelet[2993]: E0117 12:15:59.251180 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.251633 kubelet[2993]: E0117 12:15:59.251317 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.252852 kubelet[2993]: E0117 12:15:59.252837 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.252852 kubelet[2993]: W0117 12:15:59.252851 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.252952 kubelet[2993]: E0117 12:15:59.252876 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.255201 kubelet[2993]: E0117 12:15:59.254745 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.255201 kubelet[2993]: W0117 12:15:59.254755 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.255201 kubelet[2993]: E0117 12:15:59.255140 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.255201 kubelet[2993]: W0117 12:15:59.255146 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.255828 kubelet[2993]: E0117 12:15:59.255591 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Jan 17 12:15:59.255828 kubelet[2993]: W0117 12:15:59.255597 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.255828 kubelet[2993]: E0117 12:15:59.255606 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.256669 kubelet[2993]: E0117 12:15:59.256355 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.256669 kubelet[2993]: W0117 12:15:59.256364 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.256669 kubelet[2993]: E0117 12:15:59.256374 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.256669 kubelet[2993]: E0117 12:15:59.256393 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.257365 kubelet[2993]: E0117 12:15:59.256776 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.257365 kubelet[2993]: W0117 12:15:59.256782 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.257365 kubelet[2993]: E0117 12:15:59.256838 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.258080 kubelet[2993]: E0117 12:15:59.257998 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.258080 kubelet[2993]: W0117 12:15:59.258007 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.258080 kubelet[2993]: E0117 12:15:59.258019 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:15:59.258080 kubelet[2993]: E0117 12:15:59.258039 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:15:59.262882 containerd[1664]: time="2025-01-17T12:15:59.262512460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p4d85,Uid:af1e99a5-7d95-47a1-ad4e-bd7de4b4a5c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"95ebc0f01e4a04dd7831a8a36c8493ec3df6462eef10ff75898b3fb3c15551eb\"" Jan 17 12:15:59.264085 kubelet[2993]: E0117 12:15:59.264069 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:15:59.264085 kubelet[2993]: W0117 12:15:59.264080 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:15:59.264167 kubelet[2993]: E0117 12:15:59.264091 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:16:00.610827 kubelet[2993]: E0117 12:16:00.610798 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vq7j" podUID="280d2533-efae-41ef-91aa-7697f939417d" Jan 17 12:16:00.932467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount971469581.mount: Deactivated successfully. 
Jan 17 12:16:02.273553 containerd[1664]: time="2025-01-17T12:16:02.273517860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:02.274096 containerd[1664]: time="2025-01-17T12:16:02.274002031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 17 12:16:02.274726 containerd[1664]: time="2025-01-17T12:16:02.274361548Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:02.275364 containerd[1664]: time="2025-01-17T12:16:02.275343155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:02.276005 containerd[1664]: time="2025-01-17T12:16:02.275742167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.034095837s" Jan 17 12:16:02.276005 containerd[1664]: time="2025-01-17T12:16:02.275760232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 17 12:16:02.282869 containerd[1664]: time="2025-01-17T12:16:02.281572829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:16:02.290045 containerd[1664]: time="2025-01-17T12:16:02.290024269Z" level=info msg="CreateContainer within sandbox \"9710df95517f4bfa7cd56d7b245a0bba4296f221f6b9ab1d5f24161b9ba6f701\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:16:02.301408 containerd[1664]: time="2025-01-17T12:16:02.301379424Z" level=info msg="CreateContainer within sandbox \"9710df95517f4bfa7cd56d7b245a0bba4296f221f6b9ab1d5f24161b9ba6f701\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bfd2ee3a589f41a6f792a3fc8a2bb93e6e15da9ddd6bdb5f571755d5c6677022\"" Jan 17 12:16:02.301698 containerd[1664]: time="2025-01-17T12:16:02.301685052Z" level=info msg="StartContainer for \"bfd2ee3a589f41a6f792a3fc8a2bb93e6e15da9ddd6bdb5f571755d5c6677022\"" Jan 17 12:16:02.363085 containerd[1664]: time="2025-01-17T12:16:02.363058407Z" level=info msg="StartContainer for \"bfd2ee3a589f41a6f792a3fc8a2bb93e6e15da9ddd6bdb5f571755d5c6677022\" returns successfully" Jan 17 12:16:02.605160 kubelet[2993]: E0117 12:16:02.604887 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vq7j" podUID="280d2533-efae-41ef-91aa-7697f939417d" Jan 17 12:16:02.720599 kubelet[2993]: I0117 12:16:02.720552 2993 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7df6f999bf-h5pt9" podStartSLOduration=1.6796662329999998 podStartE2EDuration="4.720526958s" podCreationTimestamp="2025-01-17 12:15:58 +0000 UTC" firstStartedPulling="2025-01-17 12:15:59.235491599 +0000 UTC m=+22.748599618" lastFinishedPulling="2025-01-17 12:16:02.276352324 +0000 UTC m=+25.789460343" observedRunningTime="2025-01-17 12:16:02.71865075 +0000 UTC m=+26.231758780" watchObservedRunningTime="2025-01-17 12:16:02.720526958 +0000 UTC m=+26.233634981" Jan 17 12:16:02.793386 kubelet[2993]: E0117 12:16:02.793365 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:16:02.793577 
kubelet[2993]: W0117 12:16:02.793497 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.793577 kubelet[2993]: E0117 12:16:02.793518 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.793815 kubelet[2993]: E0117 12:16:02.793730 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.793815 kubelet[2993]: W0117 12:16:02.793739 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.793815 kubelet[2993]: E0117 12:16:02.793748 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.794031 kubelet[2993]: E0117 12:16:02.793961 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.794031 kubelet[2993]: W0117 12:16:02.793969 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.794031 kubelet[2993]: E0117 12:16:02.793977 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.794263 kubelet[2993]: E0117 12:16:02.794188 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.794263 kubelet[2993]: W0117 12:16:02.794196 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.794263 kubelet[2993]: E0117 12:16:02.794206 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.794388 kubelet[2993]: E0117 12:16:02.794380 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.794437 kubelet[2993]: W0117 12:16:02.794430 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.794528 kubelet[2993]: E0117 12:16:02.794475 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.794672 kubelet[2993]: E0117 12:16:02.794607 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.794672 kubelet[2993]: W0117 12:16:02.794616 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.794672 kubelet[2993]: E0117 12:16:02.794624 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.794964 kubelet[2993]: E0117 12:16:02.794877 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.794964 kubelet[2993]: W0117 12:16:02.794886 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.794964 kubelet[2993]: E0117 12:16:02.794894 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.795093 kubelet[2993]: E0117 12:16:02.795085 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.795207 kubelet[2993]: W0117 12:16:02.795132 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.795207 kubelet[2993]: E0117 12:16:02.795144 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.795298 kubelet[2993]: E0117 12:16:02.795291 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.795345 kubelet[2993]: W0117 12:16:02.795338 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.795388 kubelet[2993]: E0117 12:16:02.795382 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.795613 kubelet[2993]: E0117 12:16:02.795546 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.795613 kubelet[2993]: W0117 12:16:02.795554 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.795613 kubelet[2993]: E0117 12:16:02.795562 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.795731 kubelet[2993]: E0117 12:16:02.795724 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.795810 kubelet[2993]: W0117 12:16:02.795783 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.795914 kubelet[2993]: E0117 12:16:02.795858 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.796036 kubelet[2993]: E0117 12:16:02.795991 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.796036 kubelet[2993]: W0117 12:16:02.795999 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.796036 kubelet[2993]: E0117 12:16:02.796008 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.796312 kubelet[2993]: E0117 12:16:02.796245 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.796312 kubelet[2993]: W0117 12:16:02.796253 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.796312 kubelet[2993]: E0117 12:16:02.796261 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.796575 kubelet[2993]: E0117 12:16:02.796503 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.796575 kubelet[2993]: W0117 12:16:02.796511 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.796575 kubelet[2993]: E0117 12:16:02.796521 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.796695 kubelet[2993]: E0117 12:16:02.796688 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.796743 kubelet[2993]: W0117 12:16:02.796736 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.796836 kubelet[2993]: E0117 12:16:02.796782 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.872199 kubelet[2993]: E0117 12:16:02.872040 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.872199 kubelet[2993]: W0117 12:16:02.872095 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.872199 kubelet[2993]: E0117 12:16:02.872111 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.872511 kubelet[2993]: E0117 12:16:02.872464 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.872511 kubelet[2993]: W0117 12:16:02.872471 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.872511 kubelet[2993]: E0117 12:16:02.872484 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.873068 kubelet[2993]: E0117 12:16:02.872609 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.873068 kubelet[2993]: W0117 12:16:02.872618 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.873068 kubelet[2993]: E0117 12:16:02.872631 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.873068 kubelet[2993]: E0117 12:16:02.872896 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.873068 kubelet[2993]: W0117 12:16:02.872901 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.873068 kubelet[2993]: E0117 12:16:02.872914 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.873850 kubelet[2993]: E0117 12:16:02.873543 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.873850 kubelet[2993]: W0117 12:16:02.873550 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.873850 kubelet[2993]: E0117 12:16:02.873561 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.873850 kubelet[2993]: E0117 12:16:02.873732 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.873850 kubelet[2993]: W0117 12:16:02.873738 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.873850 kubelet[2993]: E0117 12:16:02.873766 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.874132 kubelet[2993]: E0117 12:16:02.874002 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.874132 kubelet[2993]: W0117 12:16:02.874008 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.874132 kubelet[2993]: E0117 12:16:02.874064 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.874375 kubelet[2993]: E0117 12:16:02.874250 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.874375 kubelet[2993]: W0117 12:16:02.874256 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.874375 kubelet[2993]: E0117 12:16:02.874316 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.874567 kubelet[2993]: E0117 12:16:02.874496 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.874567 kubelet[2993]: W0117 12:16:02.874502 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.874567 kubelet[2993]: E0117 12:16:02.874512 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.874782 kubelet[2993]: E0117 12:16:02.874750 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.874782 kubelet[2993]: W0117 12:16:02.874756 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.874782 kubelet[2993]: E0117 12:16:02.874765 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.875024 kubelet[2993]: E0117 12:16:02.875015 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.875055 kubelet[2993]: W0117 12:16:02.875025 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.875055 kubelet[2993]: E0117 12:16:02.875039 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.875291 kubelet[2993]: E0117 12:16:02.875282 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.875291 kubelet[2993]: W0117 12:16:02.875289 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.875373 kubelet[2993]: E0117 12:16:02.875298 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.875525 kubelet[2993]: E0117 12:16:02.875516 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.875525 kubelet[2993]: W0117 12:16:02.875522 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.875638 kubelet[2993]: E0117 12:16:02.875532 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.875760 kubelet[2993]: E0117 12:16:02.875692 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.875760 kubelet[2993]: W0117 12:16:02.875700 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.875760 kubelet[2993]: E0117 12:16:02.875717 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.876011 kubelet[2993]: E0117 12:16:02.875959 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.876011 kubelet[2993]: W0117 12:16:02.875967 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.876011 kubelet[2993]: E0117 12:16:02.875979 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.876231 kubelet[2993]: E0117 12:16:02.876184 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.876231 kubelet[2993]: W0117 12:16:02.876193 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.876231 kubelet[2993]: E0117 12:16:02.876206 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.876447 kubelet[2993]: E0117 12:16:02.876291 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.876447 kubelet[2993]: W0117 12:16:02.876296 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.876447 kubelet[2993]: E0117 12:16:02.876304 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:02.876554 kubelet[2993]: E0117 12:16:02.876547 2993 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:16:02.876592 kubelet[2993]: W0117 12:16:02.876586 2993 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:16:02.876628 kubelet[2993]: E0117 12:16:02.876623 2993 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:16:03.558520 containerd[1664]: time="2025-01-17T12:16:03.558482930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:16:03.559076 containerd[1664]: time="2025-01-17T12:16:03.559016053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Jan 17 12:16:03.559401 containerd[1664]: time="2025-01-17T12:16:03.559278434Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:16:03.561354 containerd[1664]: time="2025-01-17T12:16:03.561323964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:16:03.563534 containerd[1664]: time="2025-01-17T12:16:03.563504634Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.281902426s"
Jan 17 12:16:03.564864 containerd[1664]: time="2025-01-17T12:16:03.563532559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 17 12:16:03.575942 containerd[1664]: time="2025-01-17T12:16:03.575857114Z" level=info msg="CreateContainer within sandbox \"95ebc0f01e4a04dd7831a8a36c8493ec3df6462eef10ff75898b3fb3c15551eb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 12:16:03.589241 containerd[1664]: time="2025-01-17T12:16:03.589209954Z" level=info msg="CreateContainer within sandbox \"95ebc0f01e4a04dd7831a8a36c8493ec3df6462eef10ff75898b3fb3c15551eb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4443c0421905f17b28ed200610cfb60d0ffa7a53b5908ee7a23d21219ddb4671\""
Jan 17 12:16:03.589890 containerd[1664]: time="2025-01-17T12:16:03.589668155Z" level=info msg="StartContainer for \"4443c0421905f17b28ed200610cfb60d0ffa7a53b5908ee7a23d21219ddb4671\""
Jan 17 12:16:03.689208 containerd[1664]: time="2025-01-17T12:16:03.689177167Z" level=info msg="StartContainer for \"4443c0421905f17b28ed200610cfb60d0ffa7a53b5908ee7a23d21219ddb4671\" returns successfully"
Jan 17 12:16:03.702304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4443c0421905f17b28ed200610cfb60d0ffa7a53b5908ee7a23d21219ddb4671-rootfs.mount: Deactivated successfully.
Jan 17 12:16:03.707366 kubelet[2993]: I0117 12:16:03.707348 2993 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 12:16:03.989241 containerd[1664]: time="2025-01-17T12:16:03.984948732Z" level=info msg="shim disconnected" id=4443c0421905f17b28ed200610cfb60d0ffa7a53b5908ee7a23d21219ddb4671 namespace=k8s.io
Jan 17 12:16:03.989241 containerd[1664]: time="2025-01-17T12:16:03.989163381Z" level=warning msg="cleaning up after shim disconnected" id=4443c0421905f17b28ed200610cfb60d0ffa7a53b5908ee7a23d21219ddb4671 namespace=k8s.io
Jan 17 12:16:03.989241 containerd[1664]: time="2025-01-17T12:16:03.989174328Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:16:04.606261 kubelet[2993]: E0117 12:16:04.605937 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vq7j" podUID="280d2533-efae-41ef-91aa-7697f939417d"
Jan 17 12:16:04.710944 containerd[1664]: time="2025-01-17T12:16:04.710526584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 17 12:16:06.613496 kubelet[2993]: E0117 12:16:06.613428 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vq7j" podUID="280d2533-efae-41ef-91aa-7697f939417d"
Jan 17 12:16:07.985052 containerd[1664]: time="2025-01-17T12:16:07.984590301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:16:07.985692 containerd[1664]: time="2025-01-17T12:16:07.985673234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 17 12:16:07.986175 containerd[1664]: time="2025-01-17T12:16:07.986162397Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:16:07.988968 containerd[1664]: time="2025-01-17T12:16:07.988947507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:16:07.990067 containerd[1664]: time="2025-01-17T12:16:07.989209373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.278629715s"
Jan 17 12:16:07.990067 containerd[1664]: time="2025-01-17T12:16:07.989227332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 17 12:16:07.991627 containerd[1664]: time="2025-01-17T12:16:07.991604343Z" level=info msg="CreateContainer within sandbox \"95ebc0f01e4a04dd7831a8a36c8493ec3df6462eef10ff75898b3fb3c15551eb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 17 12:16:08.007086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3662465278.mount: Deactivated successfully.
Jan 17 12:16:08.009973 containerd[1664]: time="2025-01-17T12:16:08.009905290Z" level=info msg="CreateContainer within sandbox \"95ebc0f01e4a04dd7831a8a36c8493ec3df6462eef10ff75898b3fb3c15551eb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5e19aa3fe0d0794c22fbd084612aa04ec9f2350682f0af76926f757d128d268e\""
Jan 17 12:16:08.010678 containerd[1664]: time="2025-01-17T12:16:08.010390537Z" level=info msg="StartContainer for \"5e19aa3fe0d0794c22fbd084612aa04ec9f2350682f0af76926f757d128d268e\""
Jan 17 12:16:08.062495 containerd[1664]: time="2025-01-17T12:16:08.062459732Z" level=info msg="StartContainer for \"5e19aa3fe0d0794c22fbd084612aa04ec9f2350682f0af76926f757d128d268e\" returns successfully"
Jan 17 12:16:08.605512 kubelet[2993]: E0117 12:16:08.605305 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vq7j" podUID="280d2533-efae-41ef-91aa-7697f939417d"
Jan 17 12:16:09.904209 kubelet[2993]: I0117 12:16:09.904035 2993 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 17 12:16:09.915593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e19aa3fe0d0794c22fbd084612aa04ec9f2350682f0af76926f757d128d268e-rootfs.mount: Deactivated successfully.
Jan 17 12:16:09.920684 containerd[1664]: time="2025-01-17T12:16:09.920563646Z" level=info msg="shim disconnected" id=5e19aa3fe0d0794c22fbd084612aa04ec9f2350682f0af76926f757d128d268e namespace=k8s.io
Jan 17 12:16:09.921044 containerd[1664]: time="2025-01-17T12:16:09.920676743Z" level=warning msg="cleaning up after shim disconnected" id=5e19aa3fe0d0794c22fbd084612aa04ec9f2350682f0af76926f757d128d268e namespace=k8s.io
Jan 17 12:16:09.921044 containerd[1664]: time="2025-01-17T12:16:09.920698383Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:16:09.929385 containerd[1664]: time="2025-01-17T12:16:09.929350579Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:16:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 12:16:09.941114 kubelet[2993]: I0117 12:16:09.941085 2993 topology_manager.go:215] "Topology Admit Handler" podUID="f48483d2-723e-4d57-bb35-8d26869637fa" podNamespace="kube-system" podName="coredns-76f75df574-f6xfs"
Jan 17 12:16:09.983197 kubelet[2993]: I0117 12:16:09.983156 2993 topology_manager.go:215] "Topology Admit Handler" podUID="a1e52c8a-911e-4407-972d-34d4d2855eaf" podNamespace="kube-system" podName="coredns-76f75df574-d76r9"
Jan 17 12:16:09.983319 kubelet[2993]: I0117 12:16:09.983289 2993 topology_manager.go:215] "Topology Admit Handler" podUID="55894bf0-6b41-4650-8279-8e544ed121c2" podNamespace="calico-system" podName="calico-kube-controllers-664db8f47c-wdfrk"
Jan 17 12:16:09.983459 kubelet[2993]: I0117 12:16:09.983368 2993 topology_manager.go:215] "Topology Admit Handler" podUID="1f0e3b7e-83db-40ec-a4a9-0422da13ad74" podNamespace="calico-apiserver" podName="calico-apiserver-677457d5b4-vpvf6"
Jan 17 12:16:09.992714 kubelet[2993]: I0117 12:16:09.992657 2993 topology_manager.go:215] "Topology Admit Handler" podUID="df8c74b1-88c2-467d-afc5-74596a44f7fb" podNamespace="calico-apiserver" podName="calico-apiserver-677457d5b4-6qjzb"
Jan 17 12:16:10.169462 kubelet[2993]: I0117 12:16:10.169379 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptcf4\" (UniqueName: \"kubernetes.io/projected/f48483d2-723e-4d57-bb35-8d26869637fa-kube-api-access-ptcf4\") pod \"coredns-76f75df574-f6xfs\" (UID: \"f48483d2-723e-4d57-bb35-8d26869637fa\") " pod="kube-system/coredns-76f75df574-f6xfs"
Jan 17 12:16:10.174496 kubelet[2993]: I0117 12:16:10.174274 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1e52c8a-911e-4407-972d-34d4d2855eaf-config-volume\") pod \"coredns-76f75df574-d76r9\" (UID: \"a1e52c8a-911e-4407-972d-34d4d2855eaf\") " pod="kube-system/coredns-76f75df574-d76r9"
Jan 17 12:16:10.174496 kubelet[2993]: I0117 12:16:10.174333 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqtgk\" (UniqueName: \"kubernetes.io/projected/a1e52c8a-911e-4407-972d-34d4d2855eaf-kube-api-access-kqtgk\") pod \"coredns-76f75df574-d76r9\" (UID: \"a1e52c8a-911e-4407-972d-34d4d2855eaf\") " pod="kube-system/coredns-76f75df574-d76r9"
Jan 17 12:16:10.174496 kubelet[2993]: I0117 12:16:10.174351 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5ln8\" (UniqueName: \"kubernetes.io/projected/df8c74b1-88c2-467d-afc5-74596a44f7fb-kube-api-access-r5ln8\") pod \"calico-apiserver-677457d5b4-6qjzb\" (UID: \"df8c74b1-88c2-467d-afc5-74596a44f7fb\") " pod="calico-apiserver/calico-apiserver-677457d5b4-6qjzb"
Jan 17 12:16:10.174496 kubelet[2993]: I0117 12:16:10.174366 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f48483d2-723e-4d57-bb35-8d26869637fa-config-volume\") pod \"coredns-76f75df574-f6xfs\" (UID: \"f48483d2-723e-4d57-bb35-8d26869637fa\") " pod="kube-system/coredns-76f75df574-f6xfs"
Jan 17 12:16:10.174496 kubelet[2993]: I0117 12:16:10.174379 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1f0e3b7e-83db-40ec-a4a9-0422da13ad74-calico-apiserver-certs\") pod \"calico-apiserver-677457d5b4-vpvf6\" (UID: \"1f0e3b7e-83db-40ec-a4a9-0422da13ad74\") " pod="calico-apiserver/calico-apiserver-677457d5b4-vpvf6"
Jan 17 12:16:10.175178 kubelet[2993]: I0117 12:16:10.174395 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s2hl\" (UniqueName: \"kubernetes.io/projected/55894bf0-6b41-4650-8279-8e544ed121c2-kube-api-access-9s2hl\") pod \"calico-kube-controllers-664db8f47c-wdfrk\" (UID: \"55894bf0-6b41-4650-8279-8e544ed121c2\") " pod="calico-system/calico-kube-controllers-664db8f47c-wdfrk"
Jan 17 12:16:10.175178 kubelet[2993]: I0117 12:16:10.174407 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/df8c74b1-88c2-467d-afc5-74596a44f7fb-calico-apiserver-certs\") pod \"calico-apiserver-677457d5b4-6qjzb\" (UID: \"df8c74b1-88c2-467d-afc5-74596a44f7fb\") " pod="calico-apiserver/calico-apiserver-677457d5b4-6qjzb"
Jan 17 12:16:10.175178 kubelet[2993]: I0117 12:16:10.174423 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh88h\" (UniqueName: \"kubernetes.io/projected/1f0e3b7e-83db-40ec-a4a9-0422da13ad74-kube-api-access-vh88h\") pod \"calico-apiserver-677457d5b4-vpvf6\" (UID: \"1f0e3b7e-83db-40ec-a4a9-0422da13ad74\") " pod="calico-apiserver/calico-apiserver-677457d5b4-vpvf6"
Jan 17 12:16:10.175178 kubelet[2993]: I0117 12:16:10.174437 2993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55894bf0-6b41-4650-8279-8e544ed121c2-tigera-ca-bundle\") pod \"calico-kube-controllers-664db8f47c-wdfrk\" (UID: \"55894bf0-6b41-4650-8279-8e544ed121c2\") " pod="calico-system/calico-kube-controllers-664db8f47c-wdfrk"
Jan 17 12:16:10.230996 systemd-resolved[1545]: Under memory pressure, flushing caches.
Jan 17 12:16:10.233613 systemd-journald[1202]: Under memory pressure, flushing caches.
Jan 17 12:16:10.231054 systemd-resolved[1545]: Flushed all caches.
Jan 17 12:16:10.361286 containerd[1664]: time="2025-01-17T12:16:10.360951813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d76r9,Uid:a1e52c8a-911e-4407-972d-34d4d2855eaf,Namespace:kube-system,Attempt:0,}"
Jan 17 12:16:10.361478 containerd[1664]: time="2025-01-17T12:16:10.361452974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f6xfs,Uid:f48483d2-723e-4d57-bb35-8d26869637fa,Namespace:kube-system,Attempt:0,}"
Jan 17 12:16:10.363276 containerd[1664]: time="2025-01-17T12:16:10.362949287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664db8f47c-wdfrk,Uid:55894bf0-6b41-4650-8279-8e544ed121c2,Namespace:calico-system,Attempt:0,}"
Jan 17 12:16:10.363276 containerd[1664]: time="2025-01-17T12:16:10.363057131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677457d5b4-6qjzb,Uid:df8c74b1-88c2-467d-afc5-74596a44f7fb,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 12:16:10.363276 containerd[1664]: time="2025-01-17T12:16:10.363138066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677457d5b4-vpvf6,Uid:1f0e3b7e-83db-40ec-a4a9-0422da13ad74,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 12:16:10.611080
containerd[1664]: time="2025-01-17T12:16:10.611046571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5vq7j,Uid:280d2533-efae-41ef-91aa-7697f939417d,Namespace:calico-system,Attempt:0,}" Jan 17 12:16:10.620312 containerd[1664]: time="2025-01-17T12:16:10.620278987Z" level=error msg="Failed to destroy network for sandbox \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.620665 containerd[1664]: time="2025-01-17T12:16:10.620594979Z" level=error msg="encountered an error cleaning up failed sandbox \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.624834 containerd[1664]: time="2025-01-17T12:16:10.624776354Z" level=error msg="Failed to destroy network for sandbox \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.625213 containerd[1664]: time="2025-01-17T12:16:10.625037530Z" level=error msg="encountered an error cleaning up failed sandbox \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.625213 containerd[1664]: time="2025-01-17T12:16:10.625068229Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-677457d5b4-vpvf6,Uid:1f0e3b7e-83db-40ec-a4a9-0422da13ad74,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.628206 containerd[1664]: time="2025-01-17T12:16:10.628189330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d76r9,Uid:a1e52c8a-911e-4407-972d-34d4d2855eaf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.631005 containerd[1664]: time="2025-01-17T12:16:10.630981310Z" level=error msg="Failed to destroy network for sandbox \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.631258 containerd[1664]: time="2025-01-17T12:16:10.631245124Z" level=error msg="encountered an error cleaning up failed sandbox \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.631345 containerd[1664]: time="2025-01-17T12:16:10.631306772Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-f6xfs,Uid:f48483d2-723e-4d57-bb35-8d26869637fa,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.636123 containerd[1664]: time="2025-01-17T12:16:10.635928686Z" level=error msg="Failed to destroy network for sandbox \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.636331 containerd[1664]: time="2025-01-17T12:16:10.636316882Z" level=error msg="encountered an error cleaning up failed sandbox \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.636407 containerd[1664]: time="2025-01-17T12:16:10.636394186Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677457d5b4-6qjzb,Uid:df8c74b1-88c2-467d-afc5-74596a44f7fb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.637143 containerd[1664]: time="2025-01-17T12:16:10.637128869Z" level=error msg="Failed to destroy network for sandbox 
\"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.637437 containerd[1664]: time="2025-01-17T12:16:10.637337523Z" level=error msg="encountered an error cleaning up failed sandbox \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.637437 containerd[1664]: time="2025-01-17T12:16:10.637358330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664db8f47c-wdfrk,Uid:55894bf0-6b41-4650-8279-8e544ed121c2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.639194 kubelet[2993]: E0117 12:16:10.638830 2993 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.639194 kubelet[2993]: E0117 12:16:10.638906 2993 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-d76r9" Jan 17 12:16:10.639194 kubelet[2993]: E0117 12:16:10.638927 2993 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-d76r9" Jan 17 12:16:10.639279 kubelet[2993]: E0117 12:16:10.638977 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-d76r9_kube-system(a1e52c8a-911e-4407-972d-34d4d2855eaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-d76r9_kube-system(a1e52c8a-911e-4407-972d-34d4d2855eaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-d76r9" podUID="a1e52c8a-911e-4407-972d-34d4d2855eaf" Jan 17 12:16:10.639912 kubelet[2993]: E0117 12:16:10.639430 2993 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.639912 kubelet[2993]: E0117 
12:16:10.639458 2993 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-677457d5b4-vpvf6" Jan 17 12:16:10.639912 kubelet[2993]: E0117 12:16:10.639472 2993 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-677457d5b4-vpvf6" Jan 17 12:16:10.640025 kubelet[2993]: E0117 12:16:10.639522 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-677457d5b4-vpvf6_calico-apiserver(1f0e3b7e-83db-40ec-a4a9-0422da13ad74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-677457d5b4-vpvf6_calico-apiserver(1f0e3b7e-83db-40ec-a4a9-0422da13ad74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-677457d5b4-vpvf6" podUID="1f0e3b7e-83db-40ec-a4a9-0422da13ad74" Jan 17 12:16:10.640025 kubelet[2993]: E0117 12:16:10.639550 2993 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.640025 kubelet[2993]: E0117 12:16:10.639564 2993 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f6xfs" Jan 17 12:16:10.640135 kubelet[2993]: E0117 12:16:10.639583 2993 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f6xfs" Jan 17 12:16:10.640135 kubelet[2993]: E0117 12:16:10.639603 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-f6xfs_kube-system(f48483d2-723e-4d57-bb35-8d26869637fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-f6xfs_kube-system(f48483d2-723e-4d57-bb35-8d26869637fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-f6xfs" 
podUID="f48483d2-723e-4d57-bb35-8d26869637fa" Jan 17 12:16:10.640135 kubelet[2993]: E0117 12:16:10.639628 2993 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.640223 kubelet[2993]: E0117 12:16:10.639638 2993 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-677457d5b4-6qjzb" Jan 17 12:16:10.640223 kubelet[2993]: E0117 12:16:10.639657 2993 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-677457d5b4-6qjzb" Jan 17 12:16:10.640223 kubelet[2993]: E0117 12:16:10.639675 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-677457d5b4-6qjzb_calico-apiserver(df8c74b1-88c2-467d-afc5-74596a44f7fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-677457d5b4-6qjzb_calico-apiserver(df8c74b1-88c2-467d-afc5-74596a44f7fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-677457d5b4-6qjzb" podUID="df8c74b1-88c2-467d-afc5-74596a44f7fb" Jan 17 12:16:10.641207 kubelet[2993]: E0117 12:16:10.640287 2993 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.641207 kubelet[2993]: E0117 12:16:10.640305 2993 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-664db8f47c-wdfrk" Jan 17 12:16:10.642639 kubelet[2993]: E0117 12:16:10.642623 2993 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-664db8f47c-wdfrk" Jan 17 12:16:10.642838 kubelet[2993]: E0117 12:16:10.642665 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-664db8f47c-wdfrk_calico-system(55894bf0-6b41-4650-8279-8e544ed121c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-664db8f47c-wdfrk_calico-system(55894bf0-6b41-4650-8279-8e544ed121c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-664db8f47c-wdfrk" podUID="55894bf0-6b41-4650-8279-8e544ed121c2" Jan 17 12:16:10.674586 containerd[1664]: time="2025-01-17T12:16:10.674514364Z" level=error msg="Failed to destroy network for sandbox \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.674930 containerd[1664]: time="2025-01-17T12:16:10.674826025Z" level=error msg="encountered an error cleaning up failed sandbox \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.674930 containerd[1664]: time="2025-01-17T12:16:10.674903461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5vq7j,Uid:280d2533-efae-41ef-91aa-7697f939417d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.675085 kubelet[2993]: E0117 12:16:10.675070 2993 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.675121 kubelet[2993]: E0117 12:16:10.675108 2993 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5vq7j" Jan 17 12:16:10.675147 kubelet[2993]: E0117 12:16:10.675122 2993 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5vq7j" Jan 17 12:16:10.675170 kubelet[2993]: E0117 12:16:10.675157 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5vq7j_calico-system(280d2533-efae-41ef-91aa-7697f939417d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5vq7j_calico-system(280d2533-efae-41ef-91aa-7697f939417d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5vq7j" podUID="280d2533-efae-41ef-91aa-7697f939417d" Jan 17 12:16:10.721019 kubelet[2993]: I0117 12:16:10.720466 2993 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Jan 17 12:16:10.728912 containerd[1664]: time="2025-01-17T12:16:10.726843692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:16:10.730183 kubelet[2993]: I0117 12:16:10.729279 2993 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Jan 17 12:16:10.747818 kubelet[2993]: I0117 12:16:10.747078 2993 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Jan 17 12:16:10.749173 kubelet[2993]: I0117 12:16:10.748897 2993 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Jan 17 12:16:10.750540 kubelet[2993]: I0117 12:16:10.750526 2993 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Jan 17 12:16:10.752457 kubelet[2993]: I0117 12:16:10.752446 2993 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Jan 17 12:16:10.772906 containerd[1664]: time="2025-01-17T12:16:10.772370660Z" level=info msg="StopPodSandbox for \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\"" Jan 17 12:16:10.773001 
containerd[1664]: time="2025-01-17T12:16:10.772988748Z" level=info msg="StopPodSandbox for \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\"" Jan 17 12:16:10.773429 containerd[1664]: time="2025-01-17T12:16:10.773417258Z" level=info msg="Ensure that sandbox f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c in task-service has been cleanup successfully" Jan 17 12:16:10.773553 containerd[1664]: time="2025-01-17T12:16:10.773530088Z" level=info msg="StopPodSandbox for \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\"" Jan 17 12:16:10.773636 containerd[1664]: time="2025-01-17T12:16:10.773617572Z" level=info msg="Ensure that sandbox 61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3 in task-service has been cleanup successfully" Jan 17 12:16:10.773977 containerd[1664]: time="2025-01-17T12:16:10.773416048Z" level=info msg="Ensure that sandbox fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0 in task-service has been cleanup successfully" Jan 17 12:16:10.774605 containerd[1664]: time="2025-01-17T12:16:10.773434516Z" level=info msg="StopPodSandbox for \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\"" Jan 17 12:16:10.774679 containerd[1664]: time="2025-01-17T12:16:10.774664505Z" level=info msg="Ensure that sandbox cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a in task-service has been cleanup successfully" Jan 17 12:16:10.775182 containerd[1664]: time="2025-01-17T12:16:10.773453070Z" level=info msg="StopPodSandbox for \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\"" Jan 17 12:16:10.775249 containerd[1664]: time="2025-01-17T12:16:10.775237910Z" level=info msg="Ensure that sandbox 6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a in task-service has been cleanup successfully" Jan 17 12:16:10.775491 containerd[1664]: time="2025-01-17T12:16:10.773467285Z" level=info msg="StopPodSandbox for 
\"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\"" Jan 17 12:16:10.776160 containerd[1664]: time="2025-01-17T12:16:10.776148566Z" level=info msg="Ensure that sandbox 41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458 in task-service has been cleanup successfully" Jan 17 12:16:10.808005 containerd[1664]: time="2025-01-17T12:16:10.807972599Z" level=error msg="StopPodSandbox for \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\" failed" error="failed to destroy network for sandbox \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.808442 kubelet[2993]: E0117 12:16:10.808138 2993 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Jan 17 12:16:10.818529 kubelet[2993]: E0117 12:16:10.818502 2993 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3"} Jan 17 12:16:10.818641 kubelet[2993]: E0117 12:16:10.818547 2993 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"df8c74b1-88c2-467d-afc5-74596a44f7fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:16:10.818641 kubelet[2993]: E0117 12:16:10.818570 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"df8c74b1-88c2-467d-afc5-74596a44f7fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-677457d5b4-6qjzb" podUID="df8c74b1-88c2-467d-afc5-74596a44f7fb" Jan 17 12:16:10.826762 containerd[1664]: time="2025-01-17T12:16:10.826589250Z" level=error msg="StopPodSandbox for \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\" failed" error="failed to destroy network for sandbox \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.826860 kubelet[2993]: E0117 12:16:10.826742 2993 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Jan 17 12:16:10.826860 kubelet[2993]: E0117 12:16:10.826772 2993 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458"} Jan 17 12:16:10.826860 kubelet[2993]: E0117 12:16:10.826819 2993 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1f0e3b7e-83db-40ec-a4a9-0422da13ad74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:16:10.826860 kubelet[2993]: E0117 12:16:10.826840 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1f0e3b7e-83db-40ec-a4a9-0422da13ad74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-677457d5b4-vpvf6" podUID="1f0e3b7e-83db-40ec-a4a9-0422da13ad74" Jan 17 12:16:10.830298 containerd[1664]: time="2025-01-17T12:16:10.830128402Z" level=error msg="StopPodSandbox for \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\" failed" error="failed to destroy network for sandbox \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.830298 containerd[1664]: time="2025-01-17T12:16:10.830225882Z" level=error msg="StopPodSandbox for \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\" 
failed" error="failed to destroy network for sandbox \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.830805 kubelet[2993]: E0117 12:16:10.830774 2993 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Jan 17 12:16:10.830805 kubelet[2993]: E0117 12:16:10.830806 2993 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c"} Jan 17 12:16:10.830917 kubelet[2993]: E0117 12:16:10.830828 2993 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a1e52c8a-911e-4407-972d-34d4d2855eaf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:16:10.830917 kubelet[2993]: E0117 12:16:10.830847 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a1e52c8a-911e-4407-972d-34d4d2855eaf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-d76r9" podUID="a1e52c8a-911e-4407-972d-34d4d2855eaf" Jan 17 12:16:10.831011 kubelet[2993]: E0117 12:16:10.830956 2993 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Jan 17 12:16:10.831011 kubelet[2993]: E0117 12:16:10.830966 2993 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0"} Jan 17 12:16:10.831011 kubelet[2993]: E0117 12:16:10.830983 2993 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55894bf0-6b41-4650-8279-8e544ed121c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:16:10.831011 kubelet[2993]: E0117 12:16:10.830998 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55894bf0-6b41-4650-8279-8e544ed121c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-664db8f47c-wdfrk" podUID="55894bf0-6b41-4650-8279-8e544ed121c2" Jan 17 12:16:10.831131 containerd[1664]: time="2025-01-17T12:16:10.830777356Z" level=error msg="StopPodSandbox for \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\" failed" error="failed to destroy network for sandbox \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.831131 containerd[1664]: time="2025-01-17T12:16:10.831052661Z" level=error msg="StopPodSandbox for \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\" failed" error="failed to destroy network for sandbox \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:16:10.831174 kubelet[2993]: E0117 12:16:10.831123 2993 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Jan 17 12:16:10.831174 kubelet[2993]: E0117 12:16:10.831135 2993 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a"} Jan 17 12:16:10.831174 kubelet[2993]: E0117 12:16:10.831151 2993 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"280d2533-efae-41ef-91aa-7697f939417d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:16:10.831174 kubelet[2993]: E0117 12:16:10.831164 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"280d2533-efae-41ef-91aa-7697f939417d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5vq7j" podUID="280d2533-efae-41ef-91aa-7697f939417d" Jan 17 12:16:10.834115 kubelet[2993]: E0117 12:16:10.834100 2993 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Jan 17 12:16:10.834157 kubelet[2993]: E0117 12:16:10.834121 2993 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a"} Jan 17 12:16:10.834157 kubelet[2993]: E0117 12:16:10.834139 2993 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f48483d2-723e-4d57-bb35-8d26869637fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:16:10.834157 kubelet[2993]: E0117 12:16:10.834154 2993 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f48483d2-723e-4d57-bb35-8d26869637fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-f6xfs" podUID="f48483d2-723e-4d57-bb35-8d26869637fa" Jan 17 12:16:15.835447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858412332.mount: Deactivated successfully. 
Jan 17 12:16:16.069383 containerd[1664]: time="2025-01-17T12:16:16.069342744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:16.074517 containerd[1664]: time="2025-01-17T12:16:16.074310053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:16:16.108174 containerd[1664]: time="2025-01-17T12:16:16.108116925Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:16.125265 containerd[1664]: time="2025-01-17T12:16:16.125223582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:16.126015 containerd[1664]: time="2025-01-17T12:16:16.125633658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.398759583s" Jan 17 12:16:16.126015 containerd[1664]: time="2025-01-17T12:16:16.125675440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:16:16.188773 containerd[1664]: time="2025-01-17T12:16:16.188741729Z" level=info msg="CreateContainer within sandbox \"95ebc0f01e4a04dd7831a8a36c8493ec3df6462eef10ff75898b3fb3c15551eb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:16:16.245889 systemd-resolved[1545]: Under memory pressure, flushing caches. 
Jan 17 12:16:16.245921 systemd-resolved[1545]: Flushed all caches. Jan 17 12:16:16.247811 systemd-journald[1202]: Under memory pressure, flushing caches. Jan 17 12:16:16.312092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1749312353.mount: Deactivated successfully. Jan 17 12:16:16.490854 containerd[1664]: time="2025-01-17T12:16:16.490757867Z" level=info msg="CreateContainer within sandbox \"95ebc0f01e4a04dd7831a8a36c8493ec3df6462eef10ff75898b3fb3c15551eb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e6d3d9edcf7466f2eda8868e32138f94c9f2d2a974bd89f124bf50e767bf895b\"" Jan 17 12:16:16.512983 containerd[1664]: time="2025-01-17T12:16:16.512969214Z" level=info msg="StartContainer for \"e6d3d9edcf7466f2eda8868e32138f94c9f2d2a974bd89f124bf50e767bf895b\"" Jan 17 12:16:16.651453 containerd[1664]: time="2025-01-17T12:16:16.651427585Z" level=info msg="StartContainer for \"e6d3d9edcf7466f2eda8868e32138f94c9f2d2a974bd89f124bf50e767bf895b\" returns successfully" Jan 17 12:16:17.652603 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:16:17.652727 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 17 12:16:18.294927 systemd-resolved[1545]: Under memory pressure, flushing caches. Jan 17 12:16:18.294932 systemd-resolved[1545]: Flushed all caches. Jan 17 12:16:18.296827 systemd-journald[1202]: Under memory pressure, flushing caches. 
Jan 17 12:16:18.784269 kubelet[2993]: I0117 12:16:18.784190 2993 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:16:18.875915 kubelet[2993]: I0117 12:16:18.875653 2993 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-p4d85" podStartSLOduration=3.984339192 podStartE2EDuration="20.846238115s" podCreationTimestamp="2025-01-17 12:15:58 +0000 UTC" firstStartedPulling="2025-01-17 12:15:59.264024155 +0000 UTC m=+22.777132174" lastFinishedPulling="2025-01-17 12:16:16.125923072 +0000 UTC m=+39.639031097" observedRunningTime="2025-01-17 12:16:16.991035735 +0000 UTC m=+40.504143779" watchObservedRunningTime="2025-01-17 12:16:18.846238115 +0000 UTC m=+42.359346143" Jan 17 12:16:19.315808 kernel: bpftool[4267]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:16:19.519277 systemd-networkd[1293]: vxlan.calico: Link UP Jan 17 12:16:19.519282 systemd-networkd[1293]: vxlan.calico: Gained carrier Jan 17 12:16:20.981918 systemd-networkd[1293]: vxlan.calico: Gained IPv6LL Jan 17 12:16:21.606163 containerd[1664]: time="2025-01-17T12:16:21.605908984Z" level=info msg="StopPodSandbox for \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\"" Jan 17 12:16:21.606163 containerd[1664]: time="2025-01-17T12:16:21.605947622Z" level=info msg="StopPodSandbox for \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\"" Jan 17 12:16:21.607371 containerd[1664]: time="2025-01-17T12:16:21.605916949Z" level=info msg="StopPodSandbox for \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\"" Jan 17 12:16:22.764809 containerd[1664]: 2025-01-17 12:16:22.036 [INFO][4399] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Jan 17 12:16:22.764809 containerd[1664]: 2025-01-17 12:16:22.043 [INFO][4399] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" iface="eth0" netns="/var/run/netns/cni-f82f051d-a331-1d10-60ac-c03206cea886" Jan 17 12:16:22.764809 containerd[1664]: 2025-01-17 12:16:22.044 [INFO][4399] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" iface="eth0" netns="/var/run/netns/cni-f82f051d-a331-1d10-60ac-c03206cea886" Jan 17 12:16:22.764809 containerd[1664]: 2025-01-17 12:16:22.048 [INFO][4399] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" iface="eth0" netns="/var/run/netns/cni-f82f051d-a331-1d10-60ac-c03206cea886" Jan 17 12:16:22.764809 containerd[1664]: 2025-01-17 12:16:22.048 [INFO][4399] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Jan 17 12:16:22.764809 containerd[1664]: 2025-01-17 12:16:22.049 [INFO][4399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Jan 17 12:16:22.764809 containerd[1664]: 2025-01-17 12:16:22.667 [INFO][4418] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" HandleID="k8s-pod-network.61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Workload="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:22.764809 containerd[1664]: 2025-01-17 12:16:22.670 [INFO][4418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:22.764809 containerd[1664]: 2025-01-17 12:16:22.670 [INFO][4418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:22.764809 containerd[1664]: 2025-01-17 12:16:22.756 [WARNING][4418] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" HandleID="k8s-pod-network.61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Workload="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:22.764809 containerd[1664]: 2025-01-17 12:16:22.756 [INFO][4418] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" HandleID="k8s-pod-network.61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Workload="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:22.764809 containerd[1664]: 2025-01-17 12:16:22.757 [INFO][4418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:22.764809 containerd[1664]: 2025-01-17 12:16:22.759 [INFO][4399] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Jan 17 12:16:22.771694 containerd[1664]: time="2025-01-17T12:16:22.765818899Z" level=info msg="TearDown network for sandbox \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\" successfully" Jan 17 12:16:22.771694 containerd[1664]: time="2025-01-17T12:16:22.765869131Z" level=info msg="StopPodSandbox for \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\" returns successfully" Jan 17 12:16:22.771694 containerd[1664]: time="2025-01-17T12:16:22.767777591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677457d5b4-6qjzb,Uid:df8c74b1-88c2-467d-afc5-74596a44f7fb,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:16:22.767278 systemd[1]: run-netns-cni\x2df82f051d\x2da331\x2d1d10\x2d60ac\x2dc03206cea886.mount: Deactivated successfully. 
Jan 17 12:16:22.780430 containerd[1664]: 2025-01-17 12:16:22.046 [INFO][4400] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Jan 17 12:16:22.780430 containerd[1664]: 2025-01-17 12:16:22.046 [INFO][4400] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" iface="eth0" netns="/var/run/netns/cni-027dafc6-c385-d2df-3c4e-a3864f2e4213" Jan 17 12:16:22.780430 containerd[1664]: 2025-01-17 12:16:22.046 [INFO][4400] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" iface="eth0" netns="/var/run/netns/cni-027dafc6-c385-d2df-3c4e-a3864f2e4213" Jan 17 12:16:22.780430 containerd[1664]: 2025-01-17 12:16:22.049 [INFO][4400] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" iface="eth0" netns="/var/run/netns/cni-027dafc6-c385-d2df-3c4e-a3864f2e4213" Jan 17 12:16:22.780430 containerd[1664]: 2025-01-17 12:16:22.049 [INFO][4400] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Jan 17 12:16:22.780430 containerd[1664]: 2025-01-17 12:16:22.049 [INFO][4400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Jan 17 12:16:22.780430 containerd[1664]: 2025-01-17 12:16:22.664 [INFO][4419] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" HandleID="k8s-pod-network.fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Workload="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" Jan 17 12:16:22.780430 containerd[1664]: 2025-01-17 12:16:22.670 [INFO][4419] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:22.780430 containerd[1664]: 2025-01-17 12:16:22.757 [INFO][4419] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:22.780430 containerd[1664]: 2025-01-17 12:16:22.768 [WARNING][4419] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" HandleID="k8s-pod-network.fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Workload="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" Jan 17 12:16:22.780430 containerd[1664]: 2025-01-17 12:16:22.768 [INFO][4419] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" HandleID="k8s-pod-network.fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Workload="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" Jan 17 12:16:22.780430 containerd[1664]: 2025-01-17 12:16:22.769 [INFO][4419] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:22.780430 containerd[1664]: 2025-01-17 12:16:22.774 [INFO][4400] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Jan 17 12:16:22.780430 containerd[1664]: time="2025-01-17T12:16:22.777441254Z" level=info msg="TearDown network for sandbox \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\" successfully" Jan 17 12:16:22.780430 containerd[1664]: time="2025-01-17T12:16:22.777460217Z" level=info msg="StopPodSandbox for \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\" returns successfully" Jan 17 12:16:22.780430 containerd[1664]: time="2025-01-17T12:16:22.778536998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664db8f47c-wdfrk,Uid:55894bf0-6b41-4650-8279-8e544ed121c2,Namespace:calico-system,Attempt:1,}" Jan 17 12:16:22.778654 systemd[1]: run-netns-cni\x2d027dafc6\x2dc385\x2dd2df\x2d3c4e\x2da3864f2e4213.mount: Deactivated successfully. Jan 17 12:16:22.784640 containerd[1664]: 2025-01-17 12:16:22.045 [INFO][4398] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Jan 17 12:16:22.784640 containerd[1664]: 2025-01-17 12:16:22.045 [INFO][4398] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" iface="eth0" netns="/var/run/netns/cni-ab032612-d0eb-80a9-ea18-f94154fbfeb4" Jan 17 12:16:22.784640 containerd[1664]: 2025-01-17 12:16:22.045 [INFO][4398] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" iface="eth0" netns="/var/run/netns/cni-ab032612-d0eb-80a9-ea18-f94154fbfeb4" Jan 17 12:16:22.784640 containerd[1664]: 2025-01-17 12:16:22.048 [INFO][4398] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" iface="eth0" netns="/var/run/netns/cni-ab032612-d0eb-80a9-ea18-f94154fbfeb4" Jan 17 12:16:22.784640 containerd[1664]: 2025-01-17 12:16:22.048 [INFO][4398] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Jan 17 12:16:22.784640 containerd[1664]: 2025-01-17 12:16:22.048 [INFO][4398] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Jan 17 12:16:22.784640 containerd[1664]: 2025-01-17 12:16:22.664 [INFO][4417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" HandleID="k8s-pod-network.41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Workload="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:22.784640 containerd[1664]: 2025-01-17 12:16:22.670 [INFO][4417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:22.784640 containerd[1664]: 2025-01-17 12:16:22.769 [INFO][4417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:22.784640 containerd[1664]: 2025-01-17 12:16:22.777 [WARNING][4417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" HandleID="k8s-pod-network.41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Workload="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:22.784640 containerd[1664]: 2025-01-17 12:16:22.777 [INFO][4417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" HandleID="k8s-pod-network.41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Workload="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:22.784640 containerd[1664]: 2025-01-17 12:16:22.780 [INFO][4417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:22.784640 containerd[1664]: 2025-01-17 12:16:22.782 [INFO][4398] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Jan 17 12:16:22.789505 containerd[1664]: time="2025-01-17T12:16:22.785251690Z" level=info msg="TearDown network for sandbox \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\" successfully" Jan 17 12:16:22.789505 containerd[1664]: time="2025-01-17T12:16:22.785273008Z" level=info msg="StopPodSandbox for \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\" returns successfully" Jan 17 12:16:22.789505 containerd[1664]: time="2025-01-17T12:16:22.785895081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677457d5b4-vpvf6,Uid:1f0e3b7e-83db-40ec-a4a9-0422da13ad74,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:16:22.787477 systemd[1]: run-netns-cni\x2dab032612\x2dd0eb\x2d80a9\x2dea18\x2df94154fbfeb4.mount: Deactivated successfully. 
Jan 17 12:16:23.033672 systemd-networkd[1293]: cali89f7a6e8440: Link UP Jan 17 12:16:23.036506 systemd-networkd[1293]: cali89f7a6e8440: Gained carrier Jan 17 12:16:23.061139 systemd-networkd[1293]: cali1eee4e4e211: Link UP Jan 17 12:16:23.061879 systemd-networkd[1293]: cali1eee4e4e211: Gained carrier Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:22.914 [INFO][4449] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0 calico-apiserver-677457d5b4- calico-apiserver 1f0e3b7e-83db-40ec-a4a9-0422da13ad74 745 0 2025-01-17 12:15:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:677457d5b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-677457d5b4-vpvf6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali89f7a6e8440 [] []}} ContainerID="8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-vpvf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-" Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:22.914 [INFO][4449] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-vpvf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:22.943 [INFO][4476] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" HandleID="k8s-pod-network.8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" 
Workload="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:22.963 [INFO][4476] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" HandleID="k8s-pod-network.8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" Workload="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-677457d5b4-vpvf6", "timestamp":"2025-01-17 12:16:22.943479525 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:22.963 [INFO][4476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:22.963 [INFO][4476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:22.963 [INFO][4476] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:22.964 [INFO][4476] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" host="localhost" Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:23.007 [INFO][4476] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:23.010 [INFO][4476] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:23.011 [INFO][4476] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:23.013 [INFO][4476] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:23.013 [INFO][4476] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" host="localhost" Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:23.014 [INFO][4476] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5 Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:23.017 [INFO][4476] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" host="localhost" Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:23.021 [INFO][4476] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" host="localhost" Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:23.021 [INFO][4476] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" host="localhost" Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:23.021 [INFO][4476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:23.074313 containerd[1664]: 2025-01-17 12:16:23.021 [INFO][4476] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" HandleID="k8s-pod-network.8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" Workload="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:23.075017 containerd[1664]: 2025-01-17 12:16:23.023 [INFO][4449] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-vpvf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0", GenerateName:"calico-apiserver-677457d5b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f0e3b7e-83db-40ec-a4a9-0422da13ad74", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677457d5b4", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-677457d5b4-vpvf6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89f7a6e8440", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:23.075017 containerd[1664]: 2025-01-17 12:16:23.023 [INFO][4449] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-vpvf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:23.075017 containerd[1664]: 2025-01-17 12:16:23.023 [INFO][4449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89f7a6e8440 ContainerID="8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-vpvf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:23.075017 containerd[1664]: 2025-01-17 12:16:23.046 [INFO][4449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-vpvf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:23.075017 containerd[1664]: 2025-01-17 12:16:23.047 [INFO][4449] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-vpvf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0", GenerateName:"calico-apiserver-677457d5b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f0e3b7e-83db-40ec-a4a9-0422da13ad74", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677457d5b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5", Pod:"calico-apiserver-677457d5b4-vpvf6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89f7a6e8440", MAC:"16:6f:28:67:bd:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:23.075017 containerd[1664]: 2025-01-17 12:16:23.071 [INFO][4449] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-vpvf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:22.872 [INFO][4439] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0 calico-kube-controllers-664db8f47c- calico-system 55894bf0-6b41-4650-8279-8e544ed121c2 746 0 2025-01-17 12:15:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:664db8f47c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-664db8f47c-wdfrk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1eee4e4e211 [] []}} ContainerID="e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" Namespace="calico-system" Pod="calico-kube-controllers-664db8f47c-wdfrk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-" Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:22.872 [INFO][4439] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" Namespace="calico-system" Pod="calico-kube-controllers-664db8f47c-wdfrk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:22.944 [INFO][4471] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" HandleID="k8s-pod-network.e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" 
Workload="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:22.964 [INFO][4471] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" HandleID="k8s-pod-network.e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" Workload="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319960), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-664db8f47c-wdfrk", "timestamp":"2025-01-17 12:16:22.944511524 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:22.964 [INFO][4471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:23.021 [INFO][4471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:23.021 [INFO][4471] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:23.022 [INFO][4471] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" host="localhost" Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:23.027 [INFO][4471] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:23.040 [INFO][4471] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:23.042 [INFO][4471] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:23.043 [INFO][4471] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:23.043 [INFO][4471] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" host="localhost" Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:23.044 [INFO][4471] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:23.051 [INFO][4471] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" host="localhost" Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:23.055 [INFO][4471] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" host="localhost" Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:23.055 [INFO][4471] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" host="localhost" Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:23.055 [INFO][4471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:23.114647 containerd[1664]: 2025-01-17 12:16:23.055 [INFO][4471] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" HandleID="k8s-pod-network.e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" Workload="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" Jan 17 12:16:23.124329 containerd[1664]: 2025-01-17 12:16:23.057 [INFO][4439] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" Namespace="calico-system" Pod="calico-kube-controllers-664db8f47c-wdfrk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0", GenerateName:"calico-kube-controllers-664db8f47c-", Namespace:"calico-system", SelfLink:"", UID:"55894bf0-6b41-4650-8279-8e544ed121c2", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"664db8f47c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-664db8f47c-wdfrk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1eee4e4e211", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:23.124329 containerd[1664]: 2025-01-17 12:16:23.057 [INFO][4439] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" Namespace="calico-system" Pod="calico-kube-controllers-664db8f47c-wdfrk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" Jan 17 12:16:23.124329 containerd[1664]: 2025-01-17 12:16:23.057 [INFO][4439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1eee4e4e211 ContainerID="e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" Namespace="calico-system" Pod="calico-kube-controllers-664db8f47c-wdfrk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" Jan 17 12:16:23.124329 containerd[1664]: 2025-01-17 12:16:23.062 [INFO][4439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" Namespace="calico-system" Pod="calico-kube-controllers-664db8f47c-wdfrk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" Jan 17 12:16:23.124329 containerd[1664]: 2025-01-17 12:16:23.062 [INFO][4439] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" Namespace="calico-system" Pod="calico-kube-controllers-664db8f47c-wdfrk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0", GenerateName:"calico-kube-controllers-664db8f47c-", Namespace:"calico-system", SelfLink:"", UID:"55894bf0-6b41-4650-8279-8e544ed121c2", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"664db8f47c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf", Pod:"calico-kube-controllers-664db8f47c-wdfrk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1eee4e4e211", MAC:"2e:f3:1e:63:53:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:23.124329 containerd[1664]: 2025-01-17 12:16:23.109 [INFO][4439] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf" Namespace="calico-system" Pod="calico-kube-controllers-664db8f47c-wdfrk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" Jan 17 12:16:23.132148 systemd-networkd[1293]: cali35746c1d484: Link UP Jan 17 12:16:23.135935 systemd-networkd[1293]: cali35746c1d484: Gained carrier Jan 17 12:16:23.143499 containerd[1664]: time="2025-01-17T12:16:23.143371745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:16:23.143499 containerd[1664]: time="2025-01-17T12:16:23.143428159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:16:23.143499 containerd[1664]: time="2025-01-17T12:16:23.143442902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:23.143740 containerd[1664]: time="2025-01-17T12:16:23.143674895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:22.935 [INFO][4460] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0 calico-apiserver-677457d5b4- calico-apiserver df8c74b1-88c2-467d-afc5-74596a44f7fb 744 0 2025-01-17 12:15:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:677457d5b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-677457d5b4-6qjzb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali35746c1d484 [] []}} ContainerID="a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-6qjzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-" Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:22.935 [INFO][4460] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-6qjzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:22.963 [INFO][4485] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" HandleID="k8s-pod-network.a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" Workload="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:22.968 [INFO][4485] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" HandleID="k8s-pod-network.a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" Workload="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-677457d5b4-6qjzb", "timestamp":"2025-01-17 12:16:22.963239882 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:22.968 [INFO][4485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:23.055 [INFO][4485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:23.055 [INFO][4485] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:23.058 [INFO][4485] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" host="localhost" Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:23.071 [INFO][4485] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:23.075 [INFO][4485] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:23.076 [INFO][4485] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:23.080 [INFO][4485] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:23.080 [INFO][4485] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" host="localhost" Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:23.107 [INFO][4485] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:23.116 [INFO][4485] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" host="localhost" Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:23.122 [INFO][4485] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" host="localhost" Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:23.122 [INFO][4485] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" host="localhost" Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:23.122 [INFO][4485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:16:23.166105 containerd[1664]: 2025-01-17 12:16:23.122 [INFO][4485] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" HandleID="k8s-pod-network.a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" Workload="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:23.170333 containerd[1664]: 2025-01-17 12:16:23.126 [INFO][4460] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-6qjzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0", GenerateName:"calico-apiserver-677457d5b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"df8c74b1-88c2-467d-afc5-74596a44f7fb", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677457d5b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-677457d5b4-6qjzb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35746c1d484", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:23.170333 containerd[1664]: 2025-01-17 12:16:23.126 [INFO][4460] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-6qjzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:23.170333 containerd[1664]: 2025-01-17 12:16:23.126 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35746c1d484 ContainerID="a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-6qjzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:23.170333 containerd[1664]: 2025-01-17 12:16:23.139 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-6qjzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:23.170333 containerd[1664]: 2025-01-17 12:16:23.142 [INFO][4460] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-6qjzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0", GenerateName:"calico-apiserver-677457d5b4-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"df8c74b1-88c2-467d-afc5-74596a44f7fb", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677457d5b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da", Pod:"calico-apiserver-677457d5b4-6qjzb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35746c1d484", MAC:"12:55:a4:a9:58:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:23.170333 containerd[1664]: 2025-01-17 12:16:23.161 [INFO][4460] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da" Namespace="calico-apiserver" Pod="calico-apiserver-677457d5b4-6qjzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:23.170333 containerd[1664]: time="2025-01-17T12:16:23.169404894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:16:23.170333 containerd[1664]: time="2025-01-17T12:16:23.169446288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:16:23.170333 containerd[1664]: time="2025-01-17T12:16:23.169462610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:23.170333 containerd[1664]: time="2025-01-17T12:16:23.169529391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:23.185662 systemd-resolved[1545]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:16:23.192423 containerd[1664]: time="2025-01-17T12:16:23.192360498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:16:23.192423 containerd[1664]: time="2025-01-17T12:16:23.192399866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:16:23.193034 containerd[1664]: time="2025-01-17T12:16:23.192711404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:23.193271 containerd[1664]: time="2025-01-17T12:16:23.193239066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:23.197484 systemd-resolved[1545]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:16:23.220092 systemd-resolved[1545]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:16:23.247889 containerd[1664]: time="2025-01-17T12:16:23.247854242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677457d5b4-vpvf6,Uid:1f0e3b7e-83db-40ec-a4a9-0422da13ad74,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5\"" Jan 17 12:16:23.250272 containerd[1664]: time="2025-01-17T12:16:23.250227626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:16:23.257906 containerd[1664]: time="2025-01-17T12:16:23.257848388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664db8f47c-wdfrk,Uid:55894bf0-6b41-4650-8279-8e544ed121c2,Namespace:calico-system,Attempt:1,} returns sandbox id \"e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf\"" Jan 17 12:16:23.263896 containerd[1664]: time="2025-01-17T12:16:23.263870723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677457d5b4-6qjzb,Uid:df8c74b1-88c2-467d-afc5-74596a44f7fb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da\"" Jan 17 12:16:23.610330 containerd[1664]: time="2025-01-17T12:16:23.610298851Z" level=info msg="StopPodSandbox for \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\"" Jan 17 12:16:23.672274 containerd[1664]: 2025-01-17 12:16:23.648 [INFO][4666] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Jan 17 12:16:23.672274 containerd[1664]: 2025-01-17 12:16:23.648 [INFO][4666] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" iface="eth0" netns="/var/run/netns/cni-f4e03652-b984-44e9-467c-cee8e958bc68" Jan 17 12:16:23.672274 containerd[1664]: 2025-01-17 12:16:23.648 [INFO][4666] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" iface="eth0" netns="/var/run/netns/cni-f4e03652-b984-44e9-467c-cee8e958bc68" Jan 17 12:16:23.672274 containerd[1664]: 2025-01-17 12:16:23.648 [INFO][4666] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" iface="eth0" netns="/var/run/netns/cni-f4e03652-b984-44e9-467c-cee8e958bc68" Jan 17 12:16:23.672274 containerd[1664]: 2025-01-17 12:16:23.648 [INFO][4666] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Jan 17 12:16:23.672274 containerd[1664]: 2025-01-17 12:16:23.648 [INFO][4666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Jan 17 12:16:23.672274 containerd[1664]: 2025-01-17 12:16:23.664 [INFO][4672] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" HandleID="k8s-pod-network.6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Workload="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:23.672274 containerd[1664]: 2025-01-17 12:16:23.664 [INFO][4672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:23.672274 containerd[1664]: 2025-01-17 12:16:23.664 [INFO][4672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:16:23.672274 containerd[1664]: 2025-01-17 12:16:23.669 [WARNING][4672] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" HandleID="k8s-pod-network.6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Workload="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:23.672274 containerd[1664]: 2025-01-17 12:16:23.669 [INFO][4672] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" HandleID="k8s-pod-network.6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Workload="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:23.672274 containerd[1664]: 2025-01-17 12:16:23.669 [INFO][4672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:23.672274 containerd[1664]: 2025-01-17 12:16:23.671 [INFO][4666] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Jan 17 12:16:23.672976 containerd[1664]: time="2025-01-17T12:16:23.672382047Z" level=info msg="TearDown network for sandbox \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\" successfully" Jan 17 12:16:23.672976 containerd[1664]: time="2025-01-17T12:16:23.672402458Z" level=info msg="StopPodSandbox for \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\" returns successfully" Jan 17 12:16:23.678517 containerd[1664]: time="2025-01-17T12:16:23.673197900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f6xfs,Uid:f48483d2-723e-4d57-bb35-8d26869637fa,Namespace:kube-system,Attempt:1,}" Jan 17 12:16:23.768551 systemd[1]: run-netns-cni\x2df4e03652\x2db984\x2d44e9\x2d467c\x2dcee8e958bc68.mount: Deactivated successfully. 
Jan 17 12:16:23.798177 systemd-networkd[1293]: cali2fabc9f08ab: Link UP Jan 17 12:16:23.798777 systemd-networkd[1293]: cali2fabc9f08ab: Gained carrier Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.727 [INFO][4679] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--f6xfs-eth0 coredns-76f75df574- kube-system f48483d2-723e-4d57-bb35-8d26869637fa 763 0 2025-01-17 12:15:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-f6xfs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2fabc9f08ab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" Namespace="kube-system" Pod="coredns-76f75df574-f6xfs" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f6xfs-" Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.727 [INFO][4679] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" Namespace="kube-system" Pod="coredns-76f75df574-f6xfs" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.761 [INFO][4689] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" HandleID="k8s-pod-network.330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" Workload="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.771 [INFO][4689] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" 
HandleID="k8s-pod-network.330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" Workload="localhost-k8s-coredns--76f75df574--f6xfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334030), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-f6xfs", "timestamp":"2025-01-17 12:16:23.761750825 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.771 [INFO][4689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.771 [INFO][4689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.771 [INFO][4689] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.772 [INFO][4689] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" host="localhost" Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.774 [INFO][4689] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.777 [INFO][4689] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.777 [INFO][4689] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.778 [INFO][4689] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.778 
[INFO][4689] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" host="localhost" Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.779 [INFO][4689] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4 Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.789 [INFO][4689] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" host="localhost" Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.794 [INFO][4689] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" host="localhost" Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.794 [INFO][4689] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" host="localhost" Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.794 [INFO][4689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:16:23.812969 containerd[1664]: 2025-01-17 12:16:23.794 [INFO][4689] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" HandleID="k8s-pod-network.330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" Workload="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:23.815895 containerd[1664]: 2025-01-17 12:16:23.796 [INFO][4679] cni-plugin/k8s.go 386: Populated endpoint ContainerID="330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" Namespace="kube-system" Pod="coredns-76f75df574-f6xfs" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f6xfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--f6xfs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f48483d2-723e-4d57-bb35-8d26869637fa", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-f6xfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fabc9f08ab", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:23.815895 containerd[1664]: 2025-01-17 12:16:23.796 [INFO][4679] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" Namespace="kube-system" Pod="coredns-76f75df574-f6xfs" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:23.815895 containerd[1664]: 2025-01-17 12:16:23.796 [INFO][4679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fabc9f08ab ContainerID="330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" Namespace="kube-system" Pod="coredns-76f75df574-f6xfs" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:23.815895 containerd[1664]: 2025-01-17 12:16:23.798 [INFO][4679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" Namespace="kube-system" Pod="coredns-76f75df574-f6xfs" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:23.815895 containerd[1664]: 2025-01-17 12:16:23.799 [INFO][4679] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" Namespace="kube-system" Pod="coredns-76f75df574-f6xfs" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f6xfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--f6xfs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f48483d2-723e-4d57-bb35-8d26869637fa", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4", Pod:"coredns-76f75df574-f6xfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fabc9f08ab", MAC:"46:a1:1c:13:25:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:23.815895 containerd[1664]: 2025-01-17 12:16:23.810 [INFO][4679] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4" Namespace="kube-system" 
Pod="coredns-76f75df574-f6xfs" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:23.832038 containerd[1664]: time="2025-01-17T12:16:23.831572928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:16:23.832130 containerd[1664]: time="2025-01-17T12:16:23.832014999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:16:23.832130 containerd[1664]: time="2025-01-17T12:16:23.832028487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:23.832130 containerd[1664]: time="2025-01-17T12:16:23.832091283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:23.855387 systemd[1]: run-containerd-runc-k8s.io-330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4-runc.kVAmCv.mount: Deactivated successfully. 
Jan 17 12:16:23.862464 systemd-resolved[1545]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:16:23.885596 containerd[1664]: time="2025-01-17T12:16:23.885562284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f6xfs,Uid:f48483d2-723e-4d57-bb35-8d26869637fa,Namespace:kube-system,Attempt:1,} returns sandbox id \"330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4\"" Jan 17 12:16:23.888247 containerd[1664]: time="2025-01-17T12:16:23.888225822Z" level=info msg="CreateContainer within sandbox \"330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:16:24.008739 containerd[1664]: time="2025-01-17T12:16:24.008703171Z" level=info msg="CreateContainer within sandbox \"330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b1d691a693d92d080c78f74c83fe9397e774f267776383a077b6eaf90b0c9327\"" Jan 17 12:16:24.009387 containerd[1664]: time="2025-01-17T12:16:24.009332854Z" level=info msg="StartContainer for \"b1d691a693d92d080c78f74c83fe9397e774f267776383a077b6eaf90b0c9327\"" Jan 17 12:16:24.053959 containerd[1664]: time="2025-01-17T12:16:24.053936031Z" level=info msg="StartContainer for \"b1d691a693d92d080c78f74c83fe9397e774f267776383a077b6eaf90b0c9327\" returns successfully" Jan 17 12:16:24.117940 systemd-networkd[1293]: cali89f7a6e8440: Gained IPv6LL Jan 17 12:16:24.182886 systemd-journald[1202]: Under memory pressure, flushing caches. Jan 17 12:16:24.181879 systemd-resolved[1545]: Under memory pressure, flushing caches. Jan 17 12:16:24.181895 systemd-resolved[1545]: Flushed all caches. 
Jan 17 12:16:24.501977 systemd-networkd[1293]: cali35746c1d484: Gained IPv6LL Jan 17 12:16:24.630899 systemd-networkd[1293]: cali1eee4e4e211: Gained IPv6LL Jan 17 12:16:24.766536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1653792975.mount: Deactivated successfully. Jan 17 12:16:24.971779 kubelet[2993]: I0117 12:16:24.971746 2993 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-f6xfs" podStartSLOduration=33.971703511 podStartE2EDuration="33.971703511s" podCreationTimestamp="2025-01-17 12:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:16:24.958382866 +0000 UTC m=+48.471490903" watchObservedRunningTime="2025-01-17 12:16:24.971703511 +0000 UTC m=+48.484811547" Jan 17 12:16:25.606975 containerd[1664]: time="2025-01-17T12:16:25.606946809Z" level=info msg="StopPodSandbox for \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\"" Jan 17 12:16:25.653972 systemd-networkd[1293]: cali2fabc9f08ab: Gained IPv6LL Jan 17 12:16:25.760117 containerd[1664]: 2025-01-17 12:16:25.673 [INFO][4811] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Jan 17 12:16:25.760117 containerd[1664]: 2025-01-17 12:16:25.675 [INFO][4811] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" iface="eth0" netns="/var/run/netns/cni-7cc753d1-4dc1-2924-7b48-ef76c80457d9" Jan 17 12:16:25.760117 containerd[1664]: 2025-01-17 12:16:25.676 [INFO][4811] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" iface="eth0" netns="/var/run/netns/cni-7cc753d1-4dc1-2924-7b48-ef76c80457d9" Jan 17 12:16:25.760117 containerd[1664]: 2025-01-17 12:16:25.676 [INFO][4811] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" iface="eth0" netns="/var/run/netns/cni-7cc753d1-4dc1-2924-7b48-ef76c80457d9" Jan 17 12:16:25.760117 containerd[1664]: 2025-01-17 12:16:25.676 [INFO][4811] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Jan 17 12:16:25.760117 containerd[1664]: 2025-01-17 12:16:25.676 [INFO][4811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Jan 17 12:16:25.760117 containerd[1664]: 2025-01-17 12:16:25.745 [INFO][4817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" HandleID="k8s-pod-network.f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Workload="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:25.760117 containerd[1664]: 2025-01-17 12:16:25.745 [INFO][4817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:25.760117 containerd[1664]: 2025-01-17 12:16:25.745 [INFO][4817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:25.760117 containerd[1664]: 2025-01-17 12:16:25.752 [WARNING][4817] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" HandleID="k8s-pod-network.f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Workload="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:25.760117 containerd[1664]: 2025-01-17 12:16:25.752 [INFO][4817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" HandleID="k8s-pod-network.f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Workload="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:25.760117 containerd[1664]: 2025-01-17 12:16:25.753 [INFO][4817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:25.760117 containerd[1664]: 2025-01-17 12:16:25.756 [INFO][4811] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Jan 17 12:16:25.777613 containerd[1664]: time="2025-01-17T12:16:25.760873269Z" level=info msg="TearDown network for sandbox \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\" successfully" Jan 17 12:16:25.777613 containerd[1664]: time="2025-01-17T12:16:25.760891667Z" level=info msg="StopPodSandbox for \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\" returns successfully" Jan 17 12:16:25.777613 containerd[1664]: time="2025-01-17T12:16:25.762819848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d76r9,Uid:a1e52c8a-911e-4407-972d-34d4d2855eaf,Namespace:kube-system,Attempt:1,}" Jan 17 12:16:25.763482 systemd[1]: run-netns-cni\x2d7cc753d1\x2d4dc1\x2d2924\x2d7b48\x2def76c80457d9.mount: Deactivated successfully. 
Jan 17 12:16:26.354891 systemd-networkd[1293]: cali3616e33832e: Link UP Jan 17 12:16:26.355478 systemd-networkd[1293]: cali3616e33832e: Gained carrier Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.224 [INFO][4827] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--d76r9-eth0 coredns-76f75df574- kube-system a1e52c8a-911e-4407-972d-34d4d2855eaf 784 0 2025-01-17 12:15:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-d76r9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3616e33832e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" Namespace="kube-system" Pod="coredns-76f75df574-d76r9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--d76r9-" Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.224 [INFO][4827] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" Namespace="kube-system" Pod="coredns-76f75df574-d76r9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.288 [INFO][4842] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" HandleID="k8s-pod-network.dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" Workload="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.297 [INFO][4842] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" 
HandleID="k8s-pod-network.dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" Workload="localhost-k8s-coredns--76f75df574--d76r9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003184e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-d76r9", "timestamp":"2025-01-17 12:16:26.288815897 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.297 [INFO][4842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.297 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.297 [INFO][4842] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.300 [INFO][4842] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" host="localhost" Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.304 [INFO][4842] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.312 [INFO][4842] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.317 [INFO][4842] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.320 [INFO][4842] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.320 
[INFO][4842] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" host="localhost" Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.324 [INFO][4842] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188 Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.333 [INFO][4842] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" host="localhost" Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.349 [INFO][4842] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" host="localhost" Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.349 [INFO][4842] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" host="localhost" Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.349 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:16:26.382486 containerd[1664]: 2025-01-17 12:16:26.349 [INFO][4842] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" HandleID="k8s-pod-network.dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" Workload="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:26.404961 containerd[1664]: 2025-01-17 12:16:26.353 [INFO][4827] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" Namespace="kube-system" Pod="coredns-76f75df574-d76r9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--d76r9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--d76r9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a1e52c8a-911e-4407-972d-34d4d2855eaf", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-d76r9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3616e33832e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:26.404961 containerd[1664]: 2025-01-17 12:16:26.353 [INFO][4827] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" Namespace="kube-system" Pod="coredns-76f75df574-d76r9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:26.404961 containerd[1664]: 2025-01-17 12:16:26.353 [INFO][4827] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3616e33832e ContainerID="dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" Namespace="kube-system" Pod="coredns-76f75df574-d76r9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:26.404961 containerd[1664]: 2025-01-17 12:16:26.355 [INFO][4827] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" Namespace="kube-system" Pod="coredns-76f75df574-d76r9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:26.404961 containerd[1664]: 2025-01-17 12:16:26.356 [INFO][4827] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" Namespace="kube-system" Pod="coredns-76f75df574-d76r9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--d76r9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--d76r9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a1e52c8a-911e-4407-972d-34d4d2855eaf", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188", Pod:"coredns-76f75df574-d76r9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3616e33832e", MAC:"82:4c:cf:f4:59:9e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:26.404961 containerd[1664]: 2025-01-17 12:16:26.374 [INFO][4827] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188" Namespace="kube-system" 
Pod="coredns-76f75df574-d76r9" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:26.448310 containerd[1664]: time="2025-01-17T12:16:26.448115433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:16:26.448310 containerd[1664]: time="2025-01-17T12:16:26.448203283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:16:26.448505 containerd[1664]: time="2025-01-17T12:16:26.448435812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:26.449072 containerd[1664]: time="2025-01-17T12:16:26.449033249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:26.475910 systemd-resolved[1545]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:16:26.519945 containerd[1664]: time="2025-01-17T12:16:26.519743613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d76r9,Uid:a1e52c8a-911e-4407-972d-34d4d2855eaf,Namespace:kube-system,Attempt:1,} returns sandbox id \"dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188\"" Jan 17 12:16:26.523000 containerd[1664]: time="2025-01-17T12:16:26.522910021Z" level=info msg="CreateContainer within sandbox \"dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:16:26.717420 containerd[1664]: time="2025-01-17T12:16:26.674756446Z" level=info msg="StopPodSandbox for \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\"" Jan 17 12:16:26.738061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4054921608.mount: Deactivated successfully. 
Jan 17 12:16:26.795525 containerd[1664]: time="2025-01-17T12:16:26.795500138Z" level=info msg="CreateContainer within sandbox \"dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1222136ea93f5dc09f4ddb5ef70a5d98cfa798a35aa70460e90df4ca142c9299\"" Jan 17 12:16:26.796803 containerd[1664]: time="2025-01-17T12:16:26.796728336Z" level=info msg="StartContainer for \"1222136ea93f5dc09f4ddb5ef70a5d98cfa798a35aa70460e90df4ca142c9299\"" Jan 17 12:16:26.842839 containerd[1664]: 2025-01-17 12:16:26.792 [INFO][4917] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Jan 17 12:16:26.842839 containerd[1664]: 2025-01-17 12:16:26.793 [INFO][4917] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" iface="eth0" netns="/var/run/netns/cni-fe3b8581-30b3-a907-1676-1d06044821e4" Jan 17 12:16:26.842839 containerd[1664]: 2025-01-17 12:16:26.793 [INFO][4917] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" iface="eth0" netns="/var/run/netns/cni-fe3b8581-30b3-a907-1676-1d06044821e4" Jan 17 12:16:26.842839 containerd[1664]: 2025-01-17 12:16:26.793 [INFO][4917] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" iface="eth0" netns="/var/run/netns/cni-fe3b8581-30b3-a907-1676-1d06044821e4" Jan 17 12:16:26.842839 containerd[1664]: 2025-01-17 12:16:26.793 [INFO][4917] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Jan 17 12:16:26.842839 containerd[1664]: 2025-01-17 12:16:26.793 [INFO][4917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Jan 17 12:16:26.842839 containerd[1664]: 2025-01-17 12:16:26.832 [INFO][4925] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" HandleID="k8s-pod-network.cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Workload="localhost-k8s-csi--node--driver--5vq7j-eth0" Jan 17 12:16:26.842839 containerd[1664]: 2025-01-17 12:16:26.832 [INFO][4925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:26.842839 containerd[1664]: 2025-01-17 12:16:26.832 [INFO][4925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:26.842839 containerd[1664]: 2025-01-17 12:16:26.836 [WARNING][4925] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" HandleID="k8s-pod-network.cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Workload="localhost-k8s-csi--node--driver--5vq7j-eth0" Jan 17 12:16:26.842839 containerd[1664]: 2025-01-17 12:16:26.836 [INFO][4925] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" HandleID="k8s-pod-network.cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Workload="localhost-k8s-csi--node--driver--5vq7j-eth0" Jan 17 12:16:26.842839 containerd[1664]: 2025-01-17 12:16:26.837 [INFO][4925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:26.842839 containerd[1664]: 2025-01-17 12:16:26.839 [INFO][4917] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Jan 17 12:16:26.853757 containerd[1664]: time="2025-01-17T12:16:26.843191207Z" level=info msg="TearDown network for sandbox \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\" successfully" Jan 17 12:16:26.853757 containerd[1664]: time="2025-01-17T12:16:26.843215484Z" level=info msg="StopPodSandbox for \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\" returns successfully" Jan 17 12:16:26.853757 containerd[1664]: time="2025-01-17T12:16:26.843708012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5vq7j,Uid:280d2533-efae-41ef-91aa-7697f939417d,Namespace:calico-system,Attempt:1,}" Jan 17 12:16:26.927587 containerd[1664]: time="2025-01-17T12:16:26.927563406Z" level=info msg="StartContainer for \"1222136ea93f5dc09f4ddb5ef70a5d98cfa798a35aa70460e90df4ca142c9299\" returns successfully" Jan 17 12:16:27.184612 systemd[1]: run-netns-cni\x2dfe3b8581\x2d30b3\x2da907\x2d1676\x2d1d06044821e4.mount: Deactivated successfully. 
Jan 17 12:16:27.426629 containerd[1664]: time="2025-01-17T12:16:27.425805828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:27.426629 containerd[1664]: time="2025-01-17T12:16:27.426367794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:16:27.426629 containerd[1664]: time="2025-01-17T12:16:27.426582089Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:27.429170 containerd[1664]: time="2025-01-17T12:16:27.429146682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:27.430928 containerd[1664]: time="2025-01-17T12:16:27.430907719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.1806548s" Jan 17 12:16:27.430997 containerd[1664]: time="2025-01-17T12:16:27.430985714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:16:27.431937 containerd[1664]: time="2025-01-17T12:16:27.431915992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:16:27.436117 containerd[1664]: time="2025-01-17T12:16:27.436034619Z" level=info msg="CreateContainer within sandbox 
\"8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:16:27.451470 containerd[1664]: time="2025-01-17T12:16:27.451446866Z" level=info msg="CreateContainer within sandbox \"8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"16c89bdac2b87f5b5f15904da728e25b47f11445e69f1745df07a9f9369f54e8\"" Jan 17 12:16:27.452794 containerd[1664]: time="2025-01-17T12:16:27.452766959Z" level=info msg="StartContainer for \"16c89bdac2b87f5b5f15904da728e25b47f11445e69f1745df07a9f9369f54e8\"" Jan 17 12:16:27.570313 systemd-networkd[1293]: cali4b4bab6ac29: Link UP Jan 17 12:16:27.572859 systemd-networkd[1293]: cali4b4bab6ac29: Gained carrier Jan 17 12:16:27.580903 containerd[1664]: time="2025-01-17T12:16:27.577712855Z" level=info msg="StartContainer for \"16c89bdac2b87f5b5f15904da728e25b47f11445e69f1745df07a9f9369f54e8\" returns successfully" Jan 17 12:16:27.599700 kubelet[2993]: I0117 12:16:27.599666 2993 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-d76r9" podStartSLOduration=36.599635051 podStartE2EDuration="36.599635051s" podCreationTimestamp="2025-01-17 12:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:16:26.964199455 +0000 UTC m=+50.477307494" watchObservedRunningTime="2025-01-17 12:16:27.599635051 +0000 UTC m=+51.112743074" Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.464 [INFO][4970] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5vq7j-eth0 csi-node-driver- calico-system 280d2533-efae-41ef-91aa-7697f939417d 792 0 2025-01-17 12:15:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 
k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5vq7j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4b4bab6ac29 [] []}} ContainerID="73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" Namespace="calico-system" Pod="csi-node-driver-5vq7j" WorkloadEndpoint="localhost-k8s-csi--node--driver--5vq7j-" Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.464 [INFO][4970] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" Namespace="calico-system" Pod="csi-node-driver-5vq7j" WorkloadEndpoint="localhost-k8s-csi--node--driver--5vq7j-eth0" Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.520 [INFO][5009] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" HandleID="k8s-pod-network.73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" Workload="localhost-k8s-csi--node--driver--5vq7j-eth0" Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.526 [INFO][5009] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" HandleID="k8s-pod-network.73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" Workload="localhost-k8s-csi--node--driver--5vq7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291800), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5vq7j", "timestamp":"2025-01-17 12:16:27.520839203 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.526 [INFO][5009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.526 [INFO][5009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.526 [INFO][5009] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.532 [INFO][5009] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" host="localhost" Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.535 [INFO][5009] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.537 [INFO][5009] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.538 [INFO][5009] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.539 [INFO][5009] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.539 [INFO][5009] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" host="localhost" Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.540 [INFO][5009] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.553 [INFO][5009] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" host="localhost" Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.560 [INFO][5009] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" host="localhost" Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.560 [INFO][5009] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" host="localhost" Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.560 [INFO][5009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:27.617094 containerd[1664]: 2025-01-17 12:16:27.560 [INFO][5009] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" HandleID="k8s-pod-network.73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" Workload="localhost-k8s-csi--node--driver--5vq7j-eth0" Jan 17 12:16:27.617968 containerd[1664]: 2025-01-17 12:16:27.563 [INFO][4970] cni-plugin/k8s.go 386: Populated endpoint ContainerID="73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" Namespace="calico-system" Pod="csi-node-driver-5vq7j" WorkloadEndpoint="localhost-k8s-csi--node--driver--5vq7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5vq7j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"280d2533-efae-41ef-91aa-7697f939417d", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5vq7j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b4bab6ac29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:27.617968 containerd[1664]: 2025-01-17 12:16:27.563 [INFO][4970] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" Namespace="calico-system" Pod="csi-node-driver-5vq7j" WorkloadEndpoint="localhost-k8s-csi--node--driver--5vq7j-eth0" Jan 17 12:16:27.617968 containerd[1664]: 2025-01-17 12:16:27.563 [INFO][4970] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b4bab6ac29 ContainerID="73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" Namespace="calico-system" Pod="csi-node-driver-5vq7j" WorkloadEndpoint="localhost-k8s-csi--node--driver--5vq7j-eth0" Jan 17 12:16:27.617968 containerd[1664]: 2025-01-17 12:16:27.571 [INFO][4970] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" Namespace="calico-system" Pod="csi-node-driver-5vq7j" WorkloadEndpoint="localhost-k8s-csi--node--driver--5vq7j-eth0" 
Jan 17 12:16:27.617968 containerd[1664]: 2025-01-17 12:16:27.575 [INFO][4970] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" Namespace="calico-system" Pod="csi-node-driver-5vq7j" WorkloadEndpoint="localhost-k8s-csi--node--driver--5vq7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5vq7j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"280d2533-efae-41ef-91aa-7697f939417d", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f", Pod:"csi-node-driver-5vq7j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b4bab6ac29", MAC:"1e:4f:15:71:35:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:27.617968 containerd[1664]: 2025-01-17 12:16:27.615 [INFO][4970] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f" Namespace="calico-system" Pod="csi-node-driver-5vq7j" WorkloadEndpoint="localhost-k8s-csi--node--driver--5vq7j-eth0" Jan 17 12:16:27.677281 containerd[1664]: time="2025-01-17T12:16:27.676922552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:16:27.677281 containerd[1664]: time="2025-01-17T12:16:27.676999909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:16:27.677281 containerd[1664]: time="2025-01-17T12:16:27.677022997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:27.678936 containerd[1664]: time="2025-01-17T12:16:27.677668924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:27.701768 systemd-resolved[1545]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:16:27.712810 containerd[1664]: time="2025-01-17T12:16:27.711713447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5vq7j,Uid:280d2533-efae-41ef-91aa-7697f939417d,Namespace:calico-system,Attempt:1,} returns sandbox id \"73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f\"" Jan 17 12:16:27.957930 systemd-networkd[1293]: cali3616e33832e: Gained IPv6LL Jan 17 12:16:27.993719 kubelet[2993]: I0117 12:16:27.993321 2993 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-677457d5b4-vpvf6" podStartSLOduration=25.810426787 podStartE2EDuration="29.993292713s" podCreationTimestamp="2025-01-17 12:15:58 +0000 UTC" firstStartedPulling="2025-01-17 12:16:23.248627593 +0000 UTC m=+46.761735612" 
lastFinishedPulling="2025-01-17 12:16:27.43149351 +0000 UTC m=+50.944601538" observedRunningTime="2025-01-17 12:16:27.964602159 +0000 UTC m=+51.477710196" watchObservedRunningTime="2025-01-17 12:16:27.993292713 +0000 UTC m=+51.506400731" Jan 17 12:16:28.219042 systemd-journald[1202]: Under memory pressure, flushing caches. Jan 17 12:16:28.213863 systemd-resolved[1545]: Under memory pressure, flushing caches. Jan 17 12:16:28.213887 systemd-resolved[1545]: Flushed all caches. Jan 17 12:16:28.663278 systemd-networkd[1293]: cali4b4bab6ac29: Gained IPv6LL Jan 17 12:16:28.935845 kubelet[2993]: I0117 12:16:28.935760 2993 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:16:30.261948 systemd-resolved[1545]: Under memory pressure, flushing caches. Jan 17 12:16:30.283406 systemd-journald[1202]: Under memory pressure, flushing caches. Jan 17 12:16:30.261965 systemd-resolved[1545]: Flushed all caches. Jan 17 12:16:30.324024 containerd[1664]: time="2025-01-17T12:16:30.323993723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:30.324724 containerd[1664]: time="2025-01-17T12:16:30.324659161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:16:30.324968 containerd[1664]: time="2025-01-17T12:16:30.324954254Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:30.326088 containerd[1664]: time="2025-01-17T12:16:30.326072779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:30.326727 containerd[1664]: time="2025-01-17T12:16:30.326709625Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.89399152s" Jan 17 12:16:30.326751 containerd[1664]: time="2025-01-17T12:16:30.326727584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:16:30.327132 containerd[1664]: time="2025-01-17T12:16:30.327050409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:16:30.354222 containerd[1664]: time="2025-01-17T12:16:30.354197944Z" level=info msg="CreateContainer within sandbox \"e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:16:30.360317 containerd[1664]: time="2025-01-17T12:16:30.359847176Z" level=info msg="CreateContainer within sandbox \"e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1c1ef13eb156620346392b7dedffe32425adecc7a0b5cbbae59a6ddddf28b7b8\"" Jan 17 12:16:30.360731 containerd[1664]: time="2025-01-17T12:16:30.360716184Z" level=info msg="StartContainer for \"1c1ef13eb156620346392b7dedffe32425adecc7a0b5cbbae59a6ddddf28b7b8\"" Jan 17 12:16:30.453654 containerd[1664]: time="2025-01-17T12:16:30.453631558Z" level=info msg="StartContainer for \"1c1ef13eb156620346392b7dedffe32425adecc7a0b5cbbae59a6ddddf28b7b8\" returns successfully" Jan 17 12:16:30.701367 containerd[1664]: time="2025-01-17T12:16:30.701336403Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:30.703996 containerd[1664]: time="2025-01-17T12:16:30.701885695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:16:30.705317 containerd[1664]: time="2025-01-17T12:16:30.705264135Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 378.197617ms" Jan 17 12:16:30.705317 containerd[1664]: time="2025-01-17T12:16:30.705284113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:16:30.705717 containerd[1664]: time="2025-01-17T12:16:30.705700600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:16:30.706742 containerd[1664]: time="2025-01-17T12:16:30.706663832Z" level=info msg="CreateContainer within sandbox \"a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:16:30.725489 containerd[1664]: time="2025-01-17T12:16:30.725430828Z" level=info msg="CreateContainer within sandbox \"a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"735b931ae5b8c3c1752fd1b89ceac80720f296f72e0ae1fab4543b069f5d7f59\"" Jan 17 12:16:30.726150 containerd[1664]: time="2025-01-17T12:16:30.725937106Z" level=info msg="StartContainer for \"735b931ae5b8c3c1752fd1b89ceac80720f296f72e0ae1fab4543b069f5d7f59\"" Jan 17 12:16:30.777704 containerd[1664]: time="2025-01-17T12:16:30.777678694Z" level=info 
msg="StartContainer for \"735b931ae5b8c3c1752fd1b89ceac80720f296f72e0ae1fab4543b069f5d7f59\" returns successfully" Jan 17 12:16:30.954149 kubelet[2993]: I0117 12:16:30.953971 2993 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-664db8f47c-wdfrk" podStartSLOduration=24.886369594 podStartE2EDuration="31.953847739s" podCreationTimestamp="2025-01-17 12:15:59 +0000 UTC" firstStartedPulling="2025-01-17 12:16:23.259477729 +0000 UTC m=+46.772585748" lastFinishedPulling="2025-01-17 12:16:30.326955873 +0000 UTC m=+53.840063893" observedRunningTime="2025-01-17 12:16:30.949716618 +0000 UTC m=+54.462824641" watchObservedRunningTime="2025-01-17 12:16:30.953847739 +0000 UTC m=+54.466955762" Jan 17 12:16:31.014044 kubelet[2993]: I0117 12:16:31.013962 2993 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-677457d5b4-6qjzb" podStartSLOduration=25.573463713 podStartE2EDuration="33.013926085s" podCreationTimestamp="2025-01-17 12:15:58 +0000 UTC" firstStartedPulling="2025-01-17 12:16:23.265053866 +0000 UTC m=+46.778161885" lastFinishedPulling="2025-01-17 12:16:30.705516227 +0000 UTC m=+54.218624257" observedRunningTime="2025-01-17 12:16:30.960504508 +0000 UTC m=+54.473612536" watchObservedRunningTime="2025-01-17 12:16:31.013926085 +0000 UTC m=+54.527034113" Jan 17 12:16:31.943358 kubelet[2993]: I0117 12:16:31.943111 2993 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:16:32.775024 containerd[1664]: time="2025-01-17T12:16:32.774989960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:32.775601 containerd[1664]: time="2025-01-17T12:16:32.775571313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:16:32.776425 containerd[1664]: 
time="2025-01-17T12:16:32.775843988Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:32.777669 containerd[1664]: time="2025-01-17T12:16:32.777636265Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.071915392s" Jan 17 12:16:32.777728 containerd[1664]: time="2025-01-17T12:16:32.777671436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:16:32.777890 containerd[1664]: time="2025-01-17T12:16:32.777872004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:32.780464 containerd[1664]: time="2025-01-17T12:16:32.780440918Z" level=info msg="CreateContainer within sandbox \"73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:16:32.793824 containerd[1664]: time="2025-01-17T12:16:32.792126811Z" level=info msg="CreateContainer within sandbox \"73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"439e94973d1d4abf2e31287661684d92a92250f0723eb24d1fc399996fa70eda\"" Jan 17 12:16:32.794275 containerd[1664]: time="2025-01-17T12:16:32.794242190Z" level=info msg="StartContainer for \"439e94973d1d4abf2e31287661684d92a92250f0723eb24d1fc399996fa70eda\"" Jan 17 12:16:32.837826 containerd[1664]: 
time="2025-01-17T12:16:32.837366511Z" level=info msg="StartContainer for \"439e94973d1d4abf2e31287661684d92a92250f0723eb24d1fc399996fa70eda\" returns successfully" Jan 17 12:16:32.838247 containerd[1664]: time="2025-01-17T12:16:32.838228319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:16:34.544640 containerd[1664]: time="2025-01-17T12:16:34.542260311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:34.544640 containerd[1664]: time="2025-01-17T12:16:34.542899837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:16:34.544640 containerd[1664]: time="2025-01-17T12:16:34.543266618Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:34.545740 containerd[1664]: time="2025-01-17T12:16:34.545376943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:34.545740 containerd[1664]: time="2025-01-17T12:16:34.545689006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.707439908s" Jan 17 12:16:34.545740 containerd[1664]: time="2025-01-17T12:16:34.545711896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image 
reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:16:34.548226 containerd[1664]: time="2025-01-17T12:16:34.548097201Z" level=info msg="CreateContainer within sandbox \"73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:16:34.581067 containerd[1664]: time="2025-01-17T12:16:34.581030874Z" level=info msg="CreateContainer within sandbox \"73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ed50f71f7256e3cd99ced0838a9520e807d99c11e64fac8c3a019365b176f8f4\"" Jan 17 12:16:34.581845 containerd[1664]: time="2025-01-17T12:16:34.581427782Z" level=info msg="StartContainer for \"ed50f71f7256e3cd99ced0838a9520e807d99c11e64fac8c3a019365b176f8f4\"" Jan 17 12:16:34.647230 containerd[1664]: time="2025-01-17T12:16:34.647203673Z" level=info msg="StartContainer for \"ed50f71f7256e3cd99ced0838a9520e807d99c11e64fac8c3a019365b176f8f4\" returns successfully" Jan 17 12:16:35.078017 kubelet[2993]: I0117 12:16:35.077917 2993 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:16:35.082657 kubelet[2993]: I0117 12:16:35.082580 2993 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:16:36.695643 containerd[1664]: time="2025-01-17T12:16:36.695614669Z" level=info msg="StopPodSandbox for \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\"" Jan 17 12:16:36.926676 containerd[1664]: 2025-01-17 12:16:36.888 [WARNING][5281] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5vq7j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"280d2533-efae-41ef-91aa-7697f939417d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f", Pod:"csi-node-driver-5vq7j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b4bab6ac29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:36.926676 containerd[1664]: 2025-01-17 12:16:36.889 [INFO][5281] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Jan 17 12:16:36.926676 containerd[1664]: 2025-01-17 12:16:36.889 [INFO][5281] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" iface="eth0" netns="" Jan 17 12:16:36.926676 containerd[1664]: 2025-01-17 12:16:36.889 [INFO][5281] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Jan 17 12:16:36.926676 containerd[1664]: 2025-01-17 12:16:36.889 [INFO][5281] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Jan 17 12:16:36.926676 containerd[1664]: 2025-01-17 12:16:36.915 [INFO][5287] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" HandleID="k8s-pod-network.cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Workload="localhost-k8s-csi--node--driver--5vq7j-eth0" Jan 17 12:16:36.926676 containerd[1664]: 2025-01-17 12:16:36.915 [INFO][5287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:36.926676 containerd[1664]: 2025-01-17 12:16:36.915 [INFO][5287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:36.926676 containerd[1664]: 2025-01-17 12:16:36.918 [WARNING][5287] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" HandleID="k8s-pod-network.cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Workload="localhost-k8s-csi--node--driver--5vq7j-eth0" Jan 17 12:16:36.926676 containerd[1664]: 2025-01-17 12:16:36.918 [INFO][5287] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" HandleID="k8s-pod-network.cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Workload="localhost-k8s-csi--node--driver--5vq7j-eth0" Jan 17 12:16:36.926676 containerd[1664]: 2025-01-17 12:16:36.919 [INFO][5287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:36.926676 containerd[1664]: 2025-01-17 12:16:36.925 [INFO][5281] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Jan 17 12:16:36.926676 containerd[1664]: time="2025-01-17T12:16:36.926534548Z" level=info msg="TearDown network for sandbox \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\" successfully" Jan 17 12:16:36.926676 containerd[1664]: time="2025-01-17T12:16:36.926551440Z" level=info msg="StopPodSandbox for \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\" returns successfully" Jan 17 12:16:36.971657 containerd[1664]: time="2025-01-17T12:16:36.971520108Z" level=info msg="RemovePodSandbox for \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\"" Jan 17 12:16:36.972877 containerd[1664]: time="2025-01-17T12:16:36.972856403Z" level=info msg="Forcibly stopping sandbox \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\"" Jan 17 12:16:37.087779 containerd[1664]: 2025-01-17 12:16:37.057 [WARNING][5306] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5vq7j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"280d2533-efae-41ef-91aa-7697f939417d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73a9dcffc40871a0e3c23e916761a556db96d6b0c428efda9891ef85743f915f", Pod:"csi-node-driver-5vq7j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b4bab6ac29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:37.087779 containerd[1664]: 2025-01-17 12:16:37.058 [INFO][5306] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Jan 17 12:16:37.087779 containerd[1664]: 2025-01-17 12:16:37.058 [INFO][5306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" iface="eth0" netns="" Jan 17 12:16:37.087779 containerd[1664]: 2025-01-17 12:16:37.058 [INFO][5306] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Jan 17 12:16:37.087779 containerd[1664]: 2025-01-17 12:16:37.058 [INFO][5306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Jan 17 12:16:37.087779 containerd[1664]: 2025-01-17 12:16:37.074 [INFO][5312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" HandleID="k8s-pod-network.cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Workload="localhost-k8s-csi--node--driver--5vq7j-eth0" Jan 17 12:16:37.087779 containerd[1664]: 2025-01-17 12:16:37.074 [INFO][5312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:37.087779 containerd[1664]: 2025-01-17 12:16:37.074 [INFO][5312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:37.087779 containerd[1664]: 2025-01-17 12:16:37.082 [WARNING][5312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" HandleID="k8s-pod-network.cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Workload="localhost-k8s-csi--node--driver--5vq7j-eth0" Jan 17 12:16:37.087779 containerd[1664]: 2025-01-17 12:16:37.082 [INFO][5312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" HandleID="k8s-pod-network.cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Workload="localhost-k8s-csi--node--driver--5vq7j-eth0" Jan 17 12:16:37.087779 containerd[1664]: 2025-01-17 12:16:37.083 [INFO][5312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:37.087779 containerd[1664]: 2025-01-17 12:16:37.085 [INFO][5306] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a" Jan 17 12:16:37.104312 containerd[1664]: time="2025-01-17T12:16:37.087826166Z" level=info msg="TearDown network for sandbox \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\" successfully" Jan 17 12:16:37.115804 containerd[1664]: time="2025-01-17T12:16:37.115733426Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:16:37.146758 containerd[1664]: time="2025-01-17T12:16:37.146708147Z" level=info msg="RemovePodSandbox \"cad1055e5af48ebd0d32a08c81460380bdc2786891fe7e48cf127cdfa254918a\" returns successfully" Jan 17 12:16:37.153457 containerd[1664]: time="2025-01-17T12:16:37.153255889Z" level=info msg="StopPodSandbox for \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\"" Jan 17 12:16:37.224022 containerd[1664]: 2025-01-17 12:16:37.198 [WARNING][5332] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0", GenerateName:"calico-apiserver-677457d5b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f0e3b7e-83db-40ec-a4a9-0422da13ad74", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677457d5b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5", Pod:"calico-apiserver-677457d5b4-vpvf6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89f7a6e8440", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:37.224022 containerd[1664]: 2025-01-17 12:16:37.198 [INFO][5332] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Jan 17 12:16:37.224022 containerd[1664]: 2025-01-17 12:16:37.198 [INFO][5332] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" iface="eth0" netns="" Jan 17 12:16:37.224022 containerd[1664]: 2025-01-17 12:16:37.198 [INFO][5332] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Jan 17 12:16:37.224022 containerd[1664]: 2025-01-17 12:16:37.198 [INFO][5332] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Jan 17 12:16:37.224022 containerd[1664]: 2025-01-17 12:16:37.216 [INFO][5338] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" HandleID="k8s-pod-network.41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Workload="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:37.224022 containerd[1664]: 2025-01-17 12:16:37.216 [INFO][5338] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:37.224022 containerd[1664]: 2025-01-17 12:16:37.216 [INFO][5338] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:37.224022 containerd[1664]: 2025-01-17 12:16:37.220 [WARNING][5338] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" HandleID="k8s-pod-network.41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Workload="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:37.224022 containerd[1664]: 2025-01-17 12:16:37.220 [INFO][5338] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" HandleID="k8s-pod-network.41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Workload="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:37.224022 containerd[1664]: 2025-01-17 12:16:37.221 [INFO][5338] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:37.224022 containerd[1664]: 2025-01-17 12:16:37.222 [INFO][5332] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Jan 17 12:16:37.224022 containerd[1664]: time="2025-01-17T12:16:37.223925238Z" level=info msg="TearDown network for sandbox \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\" successfully" Jan 17 12:16:37.224022 containerd[1664]: time="2025-01-17T12:16:37.223942385Z" level=info msg="StopPodSandbox for \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\" returns successfully" Jan 17 12:16:37.235048 containerd[1664]: time="2025-01-17T12:16:37.224287436Z" level=info msg="RemovePodSandbox for \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\"" Jan 17 12:16:37.235048 containerd[1664]: time="2025-01-17T12:16:37.224304956Z" level=info msg="Forcibly stopping sandbox \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\"" Jan 17 12:16:37.290107 containerd[1664]: 2025-01-17 12:16:37.266 [WARNING][5357] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0", GenerateName:"calico-apiserver-677457d5b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f0e3b7e-83db-40ec-a4a9-0422da13ad74", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677457d5b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8efc6cf636ef11584aa2c90389015191324cdb2d36276e4b42f294a46d685cc5", Pod:"calico-apiserver-677457d5b4-vpvf6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89f7a6e8440", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:37.290107 containerd[1664]: 2025-01-17 12:16:37.266 [INFO][5357] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Jan 17 12:16:37.290107 containerd[1664]: 2025-01-17 12:16:37.266 [INFO][5357] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" iface="eth0" netns="" Jan 17 12:16:37.290107 containerd[1664]: 2025-01-17 12:16:37.266 [INFO][5357] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Jan 17 12:16:37.290107 containerd[1664]: 2025-01-17 12:16:37.266 [INFO][5357] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Jan 17 12:16:37.290107 containerd[1664]: 2025-01-17 12:16:37.282 [INFO][5363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" HandleID="k8s-pod-network.41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Workload="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:37.290107 containerd[1664]: 2025-01-17 12:16:37.282 [INFO][5363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:37.290107 containerd[1664]: 2025-01-17 12:16:37.282 [INFO][5363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:37.290107 containerd[1664]: 2025-01-17 12:16:37.287 [WARNING][5363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" HandleID="k8s-pod-network.41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Workload="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:37.290107 containerd[1664]: 2025-01-17 12:16:37.287 [INFO][5363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" HandleID="k8s-pod-network.41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Workload="localhost-k8s-calico--apiserver--677457d5b4--vpvf6-eth0" Jan 17 12:16:37.290107 containerd[1664]: 2025-01-17 12:16:37.288 [INFO][5363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:37.290107 containerd[1664]: 2025-01-17 12:16:37.289 [INFO][5357] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458" Jan 17 12:16:37.293431 containerd[1664]: time="2025-01-17T12:16:37.290121335Z" level=info msg="TearDown network for sandbox \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\" successfully" Jan 17 12:16:37.313470 containerd[1664]: time="2025-01-17T12:16:37.313438692Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:16:37.313579 containerd[1664]: time="2025-01-17T12:16:37.313486886Z" level=info msg="RemovePodSandbox \"41c4b8cf024abfbeb588c834907fa32fff7d93f01387f6445e6dddbbd8ddb458\" returns successfully" Jan 17 12:16:37.313974 containerd[1664]: time="2025-01-17T12:16:37.313883123Z" level=info msg="StopPodSandbox for \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\"" Jan 17 12:16:37.367010 containerd[1664]: 2025-01-17 12:16:37.343 [WARNING][5381] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--d76r9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a1e52c8a-911e-4407-972d-34d4d2855eaf", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188", Pod:"coredns-76f75df574-d76r9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3616e33832e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:37.367010 containerd[1664]: 2025-01-17 12:16:37.343 [INFO][5381] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Jan 17 12:16:37.367010 containerd[1664]: 2025-01-17 12:16:37.343 [INFO][5381] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" iface="eth0" netns="" Jan 17 12:16:37.367010 containerd[1664]: 2025-01-17 12:16:37.343 [INFO][5381] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Jan 17 12:16:37.367010 containerd[1664]: 2025-01-17 12:16:37.343 [INFO][5381] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Jan 17 12:16:37.367010 containerd[1664]: 2025-01-17 12:16:37.359 [INFO][5387] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" HandleID="k8s-pod-network.f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Workload="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:37.367010 containerd[1664]: 2025-01-17 12:16:37.359 [INFO][5387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:16:37.367010 containerd[1664]: 2025-01-17 12:16:37.359 [INFO][5387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:37.367010 containerd[1664]: 2025-01-17 12:16:37.363 [WARNING][5387] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" HandleID="k8s-pod-network.f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Workload="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:37.367010 containerd[1664]: 2025-01-17 12:16:37.363 [INFO][5387] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" HandleID="k8s-pod-network.f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Workload="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:37.367010 containerd[1664]: 2025-01-17 12:16:37.364 [INFO][5387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:37.367010 containerd[1664]: 2025-01-17 12:16:37.366 [INFO][5381] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Jan 17 12:16:37.367916 containerd[1664]: time="2025-01-17T12:16:37.367049046Z" level=info msg="TearDown network for sandbox \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\" successfully" Jan 17 12:16:37.367916 containerd[1664]: time="2025-01-17T12:16:37.367084631Z" level=info msg="StopPodSandbox for \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\" returns successfully" Jan 17 12:16:37.367916 containerd[1664]: time="2025-01-17T12:16:37.367370543Z" level=info msg="RemovePodSandbox for \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\"" Jan 17 12:16:37.367916 containerd[1664]: time="2025-01-17T12:16:37.367385559Z" level=info msg="Forcibly stopping sandbox \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\"" Jan 17 12:16:37.418338 containerd[1664]: 2025-01-17 12:16:37.397 [WARNING][5405] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--d76r9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a1e52c8a-911e-4407-972d-34d4d2855eaf", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dcce1c60402b5ad3296633919899b490e67295694d3597a3d49ee4379ce01188", Pod:"coredns-76f75df574-d76r9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3616e33832e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:37.418338 containerd[1664]: 2025-01-17 12:16:37.397 [INFO][5405] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Jan 17 12:16:37.418338 containerd[1664]: 2025-01-17 12:16:37.397 [INFO][5405] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" iface="eth0" netns="" Jan 17 12:16:37.418338 containerd[1664]: 2025-01-17 12:16:37.397 [INFO][5405] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Jan 17 12:16:37.418338 containerd[1664]: 2025-01-17 12:16:37.397 [INFO][5405] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Jan 17 12:16:37.418338 containerd[1664]: 2025-01-17 12:16:37.411 [INFO][5411] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" HandleID="k8s-pod-network.f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Workload="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:37.418338 containerd[1664]: 2025-01-17 12:16:37.411 [INFO][5411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:37.418338 containerd[1664]: 2025-01-17 12:16:37.411 [INFO][5411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:37.418338 containerd[1664]: 2025-01-17 12:16:37.415 [WARNING][5411] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" HandleID="k8s-pod-network.f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Workload="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:37.418338 containerd[1664]: 2025-01-17 12:16:37.415 [INFO][5411] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" HandleID="k8s-pod-network.f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Workload="localhost-k8s-coredns--76f75df574--d76r9-eth0" Jan 17 12:16:37.418338 containerd[1664]: 2025-01-17 12:16:37.416 [INFO][5411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:37.418338 containerd[1664]: 2025-01-17 12:16:37.417 [INFO][5405] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c" Jan 17 12:16:37.418679 containerd[1664]: time="2025-01-17T12:16:37.418367983Z" level=info msg="TearDown network for sandbox \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\" successfully" Jan 17 12:16:37.444426 containerd[1664]: time="2025-01-17T12:16:37.444391939Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:16:37.447161 containerd[1664]: time="2025-01-17T12:16:37.444443358Z" level=info msg="RemovePodSandbox \"f1a7374f09ee253c667eb3d3b76aa12ff4415371ae194101f46e780a7514695c\" returns successfully" Jan 17 12:16:37.447161 containerd[1664]: time="2025-01-17T12:16:37.444871731Z" level=info msg="StopPodSandbox for \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\"" Jan 17 12:16:37.490336 containerd[1664]: 2025-01-17 12:16:37.471 [WARNING][5429] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--f6xfs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f48483d2-723e-4d57-bb35-8d26869637fa", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4", Pod:"coredns-76f75df574-f6xfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fabc9f08ab", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:37.490336 containerd[1664]: 2025-01-17 12:16:37.471 [INFO][5429] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Jan 17 12:16:37.490336 containerd[1664]: 2025-01-17 12:16:37.471 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" iface="eth0" netns="" Jan 17 12:16:37.490336 containerd[1664]: 2025-01-17 12:16:37.471 [INFO][5429] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Jan 17 12:16:37.490336 containerd[1664]: 2025-01-17 12:16:37.471 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Jan 17 12:16:37.490336 containerd[1664]: 2025-01-17 12:16:37.484 [INFO][5436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" HandleID="k8s-pod-network.6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Workload="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:37.490336 containerd[1664]: 2025-01-17 12:16:37.484 [INFO][5436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:16:37.490336 containerd[1664]: 2025-01-17 12:16:37.484 [INFO][5436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:37.490336 containerd[1664]: 2025-01-17 12:16:37.487 [WARNING][5436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" HandleID="k8s-pod-network.6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Workload="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:37.490336 containerd[1664]: 2025-01-17 12:16:37.487 [INFO][5436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" HandleID="k8s-pod-network.6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Workload="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:37.490336 containerd[1664]: 2025-01-17 12:16:37.488 [INFO][5436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:37.490336 containerd[1664]: 2025-01-17 12:16:37.489 [INFO][5429] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Jan 17 12:16:37.490926 containerd[1664]: time="2025-01-17T12:16:37.490549106Z" level=info msg="TearDown network for sandbox \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\" successfully" Jan 17 12:16:37.490926 containerd[1664]: time="2025-01-17T12:16:37.490570152Z" level=info msg="StopPodSandbox for \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\" returns successfully" Jan 17 12:16:37.491542 containerd[1664]: time="2025-01-17T12:16:37.491524752Z" level=info msg="RemovePodSandbox for \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\"" Jan 17 12:16:37.491577 containerd[1664]: time="2025-01-17T12:16:37.491544690Z" level=info msg="Forcibly stopping sandbox \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\"" Jan 17 12:16:37.531324 containerd[1664]: 2025-01-17 12:16:37.511 [WARNING][5454] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--f6xfs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f48483d2-723e-4d57-bb35-8d26869637fa", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"330fe67ffcec108cbc8b40f1cfa84b61c79b763ed2ef5281196cecb43a4961d4", Pod:"coredns-76f75df574-f6xfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fabc9f08ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:37.531324 containerd[1664]: 2025-01-17 12:16:37.511 [INFO][5454] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Jan 17 12:16:37.531324 containerd[1664]: 2025-01-17 12:16:37.511 [INFO][5454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" iface="eth0" netns="" Jan 17 12:16:37.531324 containerd[1664]: 2025-01-17 12:16:37.511 [INFO][5454] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Jan 17 12:16:37.531324 containerd[1664]: 2025-01-17 12:16:37.511 [INFO][5454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Jan 17 12:16:37.531324 containerd[1664]: 2025-01-17 12:16:37.524 [INFO][5460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" HandleID="k8s-pod-network.6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Workload="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:37.531324 containerd[1664]: 2025-01-17 12:16:37.524 [INFO][5460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:37.531324 containerd[1664]: 2025-01-17 12:16:37.524 [INFO][5460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:37.531324 containerd[1664]: 2025-01-17 12:16:37.528 [WARNING][5460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" HandleID="k8s-pod-network.6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Workload="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:37.531324 containerd[1664]: 2025-01-17 12:16:37.528 [INFO][5460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" HandleID="k8s-pod-network.6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Workload="localhost-k8s-coredns--76f75df574--f6xfs-eth0" Jan 17 12:16:37.531324 containerd[1664]: 2025-01-17 12:16:37.529 [INFO][5460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:37.531324 containerd[1664]: 2025-01-17 12:16:37.530 [INFO][5454] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a" Jan 17 12:16:37.531656 containerd[1664]: time="2025-01-17T12:16:37.531378516Z" level=info msg="TearDown network for sandbox \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\" successfully" Jan 17 12:16:37.546884 containerd[1664]: time="2025-01-17T12:16:37.546866158Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:16:37.546945 containerd[1664]: time="2025-01-17T12:16:37.546898376Z" level=info msg="RemovePodSandbox \"6775cb3f13dc4027a0b6eec7e908280481357fa2c718189298495a0349d6156a\" returns successfully" Jan 17 12:16:37.547243 containerd[1664]: time="2025-01-17T12:16:37.547201932Z" level=info msg="StopPodSandbox for \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\"" Jan 17 12:16:37.592107 containerd[1664]: 2025-01-17 12:16:37.573 [WARNING][5478] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0", GenerateName:"calico-apiserver-677457d5b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"df8c74b1-88c2-467d-afc5-74596a44f7fb", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677457d5b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da", Pod:"calico-apiserver-677457d5b4-6qjzb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35746c1d484", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:37.592107 containerd[1664]: 2025-01-17 12:16:37.573 [INFO][5478] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Jan 17 12:16:37.592107 containerd[1664]: 2025-01-17 12:16:37.573 [INFO][5478] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" iface="eth0" netns="" Jan 17 12:16:37.592107 containerd[1664]: 2025-01-17 12:16:37.573 [INFO][5478] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Jan 17 12:16:37.592107 containerd[1664]: 2025-01-17 12:16:37.573 [INFO][5478] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Jan 17 12:16:37.592107 containerd[1664]: 2025-01-17 12:16:37.585 [INFO][5484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" HandleID="k8s-pod-network.61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Workload="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:37.592107 containerd[1664]: 2025-01-17 12:16:37.585 [INFO][5484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:37.592107 containerd[1664]: 2025-01-17 12:16:37.585 [INFO][5484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:37.592107 containerd[1664]: 2025-01-17 12:16:37.589 [WARNING][5484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" HandleID="k8s-pod-network.61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Workload="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:37.592107 containerd[1664]: 2025-01-17 12:16:37.589 [INFO][5484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" HandleID="k8s-pod-network.61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Workload="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:37.592107 containerd[1664]: 2025-01-17 12:16:37.590 [INFO][5484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:37.592107 containerd[1664]: 2025-01-17 12:16:37.590 [INFO][5478] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Jan 17 12:16:37.596014 containerd[1664]: time="2025-01-17T12:16:37.592107720Z" level=info msg="TearDown network for sandbox \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\" successfully" Jan 17 12:16:37.596014 containerd[1664]: time="2025-01-17T12:16:37.592125622Z" level=info msg="StopPodSandbox for \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\" returns successfully" Jan 17 12:16:37.596014 containerd[1664]: time="2025-01-17T12:16:37.592700637Z" level=info msg="RemovePodSandbox for \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\"" Jan 17 12:16:37.596014 containerd[1664]: time="2025-01-17T12:16:37.592720206Z" level=info msg="Forcibly stopping sandbox \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\"" Jan 17 12:16:37.633218 containerd[1664]: 2025-01-17 12:16:37.613 [WARNING][5502] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0", GenerateName:"calico-apiserver-677457d5b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"df8c74b1-88c2-467d-afc5-74596a44f7fb", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677457d5b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1a17f8b939651aa99a069b1d6a27227776785101c2f56f5e78201d2f0b840da", Pod:"calico-apiserver-677457d5b4-6qjzb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35746c1d484", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:37.633218 containerd[1664]: 2025-01-17 12:16:37.613 [INFO][5502] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Jan 17 12:16:37.633218 containerd[1664]: 2025-01-17 12:16:37.613 [INFO][5502] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" iface="eth0" netns="" Jan 17 12:16:37.633218 containerd[1664]: 2025-01-17 12:16:37.613 [INFO][5502] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Jan 17 12:16:37.633218 containerd[1664]: 2025-01-17 12:16:37.613 [INFO][5502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Jan 17 12:16:37.633218 containerd[1664]: 2025-01-17 12:16:37.627 [INFO][5508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" HandleID="k8s-pod-network.61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Workload="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:37.633218 containerd[1664]: 2025-01-17 12:16:37.627 [INFO][5508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:37.633218 containerd[1664]: 2025-01-17 12:16:37.627 [INFO][5508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:37.633218 containerd[1664]: 2025-01-17 12:16:37.630 [WARNING][5508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" HandleID="k8s-pod-network.61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Workload="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:37.633218 containerd[1664]: 2025-01-17 12:16:37.630 [INFO][5508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" HandleID="k8s-pod-network.61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Workload="localhost-k8s-calico--apiserver--677457d5b4--6qjzb-eth0" Jan 17 12:16:37.633218 containerd[1664]: 2025-01-17 12:16:37.631 [INFO][5508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:37.633218 containerd[1664]: 2025-01-17 12:16:37.632 [INFO][5502] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3" Jan 17 12:16:37.633609 containerd[1664]: time="2025-01-17T12:16:37.633565753Z" level=info msg="TearDown network for sandbox \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\" successfully" Jan 17 12:16:37.660276 containerd[1664]: time="2025-01-17T12:16:37.660251490Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:16:37.660335 containerd[1664]: time="2025-01-17T12:16:37.660292438Z" level=info msg="RemovePodSandbox \"61417b5dfce047701b0c7be08afecfaced7020455239867c67696111e326efc3\" returns successfully" Jan 17 12:16:37.660812 containerd[1664]: time="2025-01-17T12:16:37.660621218Z" level=info msg="StopPodSandbox for \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\"" Jan 17 12:16:37.745568 containerd[1664]: 2025-01-17 12:16:37.701 [WARNING][5526] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0", GenerateName:"calico-kube-controllers-664db8f47c-", Namespace:"calico-system", SelfLink:"", UID:"55894bf0-6b41-4650-8279-8e544ed121c2", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"664db8f47c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf", Pod:"calico-kube-controllers-664db8f47c-wdfrk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1eee4e4e211", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:16:37.745568 containerd[1664]: 2025-01-17 12:16:37.702 [INFO][5526] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Jan 17 12:16:37.745568 containerd[1664]: 2025-01-17 12:16:37.702 [INFO][5526] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" iface="eth0" netns="" Jan 17 12:16:37.745568 containerd[1664]: 2025-01-17 12:16:37.702 [INFO][5526] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Jan 17 12:16:37.745568 containerd[1664]: 2025-01-17 12:16:37.702 [INFO][5526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Jan 17 12:16:37.745568 containerd[1664]: 2025-01-17 12:16:37.727 [INFO][5532] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" HandleID="k8s-pod-network.fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Workload="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" Jan 17 12:16:37.745568 containerd[1664]: 2025-01-17 12:16:37.727 [INFO][5532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:16:37.745568 containerd[1664]: 2025-01-17 12:16:37.727 [INFO][5532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:16:37.745568 containerd[1664]: 2025-01-17 12:16:37.742 [WARNING][5532] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" HandleID="k8s-pod-network.fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Workload="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" Jan 17 12:16:37.745568 containerd[1664]: 2025-01-17 12:16:37.742 [INFO][5532] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" HandleID="k8s-pod-network.fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Workload="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0" Jan 17 12:16:37.745568 containerd[1664]: 2025-01-17 12:16:37.743 [INFO][5532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:16:37.745568 containerd[1664]: 2025-01-17 12:16:37.744 [INFO][5526] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Jan 17 12:16:37.746648 containerd[1664]: time="2025-01-17T12:16:37.746141838Z" level=info msg="TearDown network for sandbox \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\" successfully" Jan 17 12:16:37.746648 containerd[1664]: time="2025-01-17T12:16:37.746326015Z" level=info msg="StopPodSandbox for \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\" returns successfully" Jan 17 12:16:37.746966 containerd[1664]: time="2025-01-17T12:16:37.746758698Z" level=info msg="RemovePodSandbox for \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\"" Jan 17 12:16:37.746966 containerd[1664]: time="2025-01-17T12:16:37.746773933Z" level=info msg="Forcibly stopping sandbox \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\"" Jan 17 12:16:37.804963 containerd[1664]: 2025-01-17 12:16:37.780 [WARNING][5551] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0", GenerateName:"calico-kube-controllers-664db8f47c-", Namespace:"calico-system", SelfLink:"", UID:"55894bf0-6b41-4650-8279-8e544ed121c2", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 15, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"664db8f47c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e36d7e21b9108943f3beb9e686caa26a474c9b06c9f977ef56a53fed8d6b0bcf", Pod:"calico-kube-controllers-664db8f47c-wdfrk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1eee4e4e211", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:16:37.804963 containerd[1664]: 2025-01-17 12:16:37.780 [INFO][5551] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0"
Jan 17 12:16:37.804963 containerd[1664]: 2025-01-17 12:16:37.780 [INFO][5551] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" iface="eth0" netns=""
Jan 17 12:16:37.804963 containerd[1664]: 2025-01-17 12:16:37.780 [INFO][5551] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0"
Jan 17 12:16:37.804963 containerd[1664]: 2025-01-17 12:16:37.780 [INFO][5551] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0"
Jan 17 12:16:37.804963 containerd[1664]: 2025-01-17 12:16:37.799 [INFO][5557] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" HandleID="k8s-pod-network.fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Workload="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0"
Jan 17 12:16:37.804963 containerd[1664]: 2025-01-17 12:16:37.799 [INFO][5557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:16:37.804963 containerd[1664]: 2025-01-17 12:16:37.799 [INFO][5557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:16:37.804963 containerd[1664]: 2025-01-17 12:16:37.802 [WARNING][5557] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" HandleID="k8s-pod-network.fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Workload="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0"
Jan 17 12:16:37.804963 containerd[1664]: 2025-01-17 12:16:37.802 [INFO][5557] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" HandleID="k8s-pod-network.fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0" Workload="localhost-k8s-calico--kube--controllers--664db8f47c--wdfrk-eth0"
Jan 17 12:16:37.804963 containerd[1664]: 2025-01-17 12:16:37.803 [INFO][5557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:16:37.804963 containerd[1664]: 2025-01-17 12:16:37.804 [INFO][5551] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0"
Jan 17 12:16:37.809800 containerd[1664]: time="2025-01-17T12:16:37.804984985Z" level=info msg="TearDown network for sandbox \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\" successfully"
Jan 17 12:16:37.815474 containerd[1664]: time="2025-01-17T12:16:37.815457652Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 12:16:37.815516 containerd[1664]: time="2025-01-17T12:16:37.815490389Z" level=info msg="RemovePodSandbox \"fb164a7ac21201f7dbc8e4d561aec797244cf5042bd16915250e6ddc0aadbce0\" returns successfully"
Jan 17 12:16:41.286467 kubelet[2993]: I0117 12:16:41.286217 2993 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 12:16:41.369745 kubelet[2993]: I0117 12:16:41.369232 2993 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-5vq7j" podStartSLOduration=35.535388892 podStartE2EDuration="42.369197515s" podCreationTimestamp="2025-01-17 12:15:59 +0000 UTC" firstStartedPulling="2025-01-17 12:16:27.712444999 +0000 UTC m=+51.225553019" lastFinishedPulling="2025-01-17 12:16:34.546253614 +0000 UTC m=+58.059361642" observedRunningTime="2025-01-17 12:16:34.986309787 +0000 UTC m=+58.499417815" watchObservedRunningTime="2025-01-17 12:16:41.369197515 +0000 UTC m=+64.882305535"
Jan 17 12:16:47.549376 kubelet[2993]: I0117 12:16:47.549033 2993 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 12:16:51.427271 systemd[1]: Started sshd@7-139.178.70.104:22-147.75.109.163:51602.service - OpenSSH per-connection server daemon (147.75.109.163:51602).
Jan 17 12:16:51.666476 sshd[5620]: Accepted publickey for core from 147.75.109.163 port 51602 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc
Jan 17 12:16:51.668410 sshd[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:16:51.677057 systemd-logind[1630]: New session 10 of user core.
Jan 17 12:16:51.681104 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 17 12:16:52.334914 sshd[5620]: pam_unix(sshd:session): session closed for user core
Jan 17 12:16:52.337335 systemd[1]: sshd@7-139.178.70.104:22-147.75.109.163:51602.service: Deactivated successfully.
Jan 17 12:16:52.344512 systemd-logind[1630]: Session 10 logged out. Waiting for processes to exit.
Jan 17 12:16:52.344728 systemd[1]: session-10.scope: Deactivated successfully.
Jan 17 12:16:52.345663 systemd-logind[1630]: Removed session 10.
Jan 17 12:16:57.345050 systemd[1]: Started sshd@8-139.178.70.104:22-147.75.109.163:35254.service - OpenSSH per-connection server daemon (147.75.109.163:35254).
Jan 17 12:16:57.373075 sshd[5643]: Accepted publickey for core from 147.75.109.163 port 35254 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc
Jan 17 12:16:57.374054 sshd[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:16:57.377162 systemd-logind[1630]: New session 11 of user core.
Jan 17 12:16:57.387375 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 12:16:57.508429 sshd[5643]: pam_unix(sshd:session): session closed for user core
Jan 17 12:16:57.511179 systemd[1]: sshd@8-139.178.70.104:22-147.75.109.163:35254.service: Deactivated successfully.
Jan 17 12:16:57.512547 systemd[1]: session-11.scope: Deactivated successfully.
Jan 17 12:16:57.512690 systemd-logind[1630]: Session 11 logged out. Waiting for processes to exit.
Jan 17 12:16:57.513650 systemd-logind[1630]: Removed session 11.
Jan 17 12:17:02.521012 systemd[1]: Started sshd@9-139.178.70.104:22-147.75.109.163:35266.service - OpenSSH per-connection server daemon (147.75.109.163:35266).
Jan 17 12:17:02.842816 sshd[5663]: Accepted publickey for core from 147.75.109.163 port 35266 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc
Jan 17 12:17:02.857896 sshd[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:17:02.861231 systemd-logind[1630]: New session 12 of user core.
Jan 17 12:17:02.864967 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 17 12:17:02.960905 sshd[5663]: pam_unix(sshd:session): session closed for user core
Jan 17 12:17:02.962435 systemd-logind[1630]: Session 12 logged out. Waiting for processes to exit.
Jan 17 12:17:02.962591 systemd[1]: sshd@9-139.178.70.104:22-147.75.109.163:35266.service: Deactivated successfully.
Jan 17 12:17:02.965289 systemd[1]: session-12.scope: Deactivated successfully.
Jan 17 12:17:02.966004 systemd-logind[1630]: Removed session 12.
Jan 17 12:17:07.970007 systemd[1]: Started sshd@10-139.178.70.104:22-147.75.109.163:33974.service - OpenSSH per-connection server daemon (147.75.109.163:33974).
Jan 17 12:17:08.138985 sshd[5680]: Accepted publickey for core from 147.75.109.163 port 33974 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc
Jan 17 12:17:08.142526 sshd[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:17:08.149818 systemd-logind[1630]: New session 13 of user core.
Jan 17 12:17:08.151933 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 17 12:17:08.219850 systemd-journald[1202]: Under memory pressure, flushing caches.
Jan 17 12:17:08.215074 systemd-resolved[1545]: Under memory pressure, flushing caches.
Jan 17 12:17:08.215102 systemd-resolved[1545]: Flushed all caches.
Jan 17 12:17:08.334930 sshd[5680]: pam_unix(sshd:session): session closed for user core
Jan 17 12:17:08.344992 systemd[1]: Started sshd@11-139.178.70.104:22-147.75.109.163:33976.service - OpenSSH per-connection server daemon (147.75.109.163:33976).
Jan 17 12:17:08.345522 systemd[1]: sshd@10-139.178.70.104:22-147.75.109.163:33974.service: Deactivated successfully.
Jan 17 12:17:08.346851 systemd[1]: session-13.scope: Deactivated successfully.
Jan 17 12:17:08.348141 systemd-logind[1630]: Session 13 logged out. Waiting for processes to exit.
Jan 17 12:17:08.350862 systemd-logind[1630]: Removed session 13.
Jan 17 12:17:08.375812 sshd[5693]: Accepted publickey for core from 147.75.109.163 port 33976 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc
Jan 17 12:17:08.376816 sshd[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:17:08.379544 systemd-logind[1630]: New session 14 of user core.
Jan 17 12:17:08.382937 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 12:17:08.564146 sshd[5693]: pam_unix(sshd:session): session closed for user core
Jan 17 12:17:08.568154 systemd[1]: sshd@11-139.178.70.104:22-147.75.109.163:33976.service: Deactivated successfully.
Jan 17 12:17:08.575079 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 12:17:08.577306 systemd-logind[1630]: Session 14 logged out. Waiting for processes to exit.
Jan 17 12:17:08.585810 systemd[1]: Started sshd@12-139.178.70.104:22-147.75.109.163:33978.service - OpenSSH per-connection server daemon (147.75.109.163:33978).
Jan 17 12:17:08.588273 systemd-logind[1630]: Removed session 14.
Jan 17 12:17:08.623440 sshd[5710]: Accepted publickey for core from 147.75.109.163 port 33978 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc
Jan 17 12:17:08.624280 sshd[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:17:08.626794 systemd-logind[1630]: New session 15 of user core.
Jan 17 12:17:08.630001 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 12:17:08.739496 sshd[5710]: pam_unix(sshd:session): session closed for user core
Jan 17 12:17:08.741462 systemd[1]: sshd@12-139.178.70.104:22-147.75.109.163:33978.service: Deactivated successfully.
Jan 17 12:17:08.743285 systemd-logind[1630]: Session 15 logged out. Waiting for processes to exit.
Jan 17 12:17:08.743558 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 12:17:08.744501 systemd-logind[1630]: Removed session 15.
Jan 17 12:17:10.262239 systemd-resolved[1545]: Under memory pressure, flushing caches.
Jan 17 12:17:10.262991 systemd-journald[1202]: Under memory pressure, flushing caches.
Jan 17 12:17:10.262245 systemd-resolved[1545]: Flushed all caches.
Jan 17 12:17:13.745922 systemd[1]: Started sshd@13-139.178.70.104:22-147.75.109.163:33994.service - OpenSSH per-connection server daemon (147.75.109.163:33994).
Jan 17 12:17:13.821202 sshd[5766]: Accepted publickey for core from 147.75.109.163 port 33994 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc
Jan 17 12:17:13.822172 sshd[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:17:13.824899 systemd-logind[1630]: New session 16 of user core.
Jan 17 12:17:13.830940 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 12:17:13.949495 sshd[5766]: pam_unix(sshd:session): session closed for user core
Jan 17 12:17:13.955945 systemd[1]: Started sshd@14-139.178.70.104:22-147.75.109.163:34008.service - OpenSSH per-connection server daemon (147.75.109.163:34008).
Jan 17 12:17:13.956237 systemd[1]: sshd@13-139.178.70.104:22-147.75.109.163:33994.service: Deactivated successfully.
Jan 17 12:17:13.958191 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 12:17:13.958946 systemd-logind[1630]: Session 16 logged out. Waiting for processes to exit.
Jan 17 12:17:13.960179 systemd-logind[1630]: Removed session 16.
Jan 17 12:17:13.981499 sshd[5778]: Accepted publickey for core from 147.75.109.163 port 34008 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc
Jan 17 12:17:13.982316 sshd[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:17:13.985477 systemd-logind[1630]: New session 17 of user core.
Jan 17 12:17:13.992931 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 12:17:14.390353 sshd[5778]: pam_unix(sshd:session): session closed for user core
Jan 17 12:17:14.398966 systemd[1]: Started sshd@15-139.178.70.104:22-147.75.109.163:34012.service - OpenSSH per-connection server daemon (147.75.109.163:34012).
Jan 17 12:17:14.399199 systemd[1]: sshd@14-139.178.70.104:22-147.75.109.163:34008.service: Deactivated successfully.
Jan 17 12:17:14.403415 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 12:17:14.405515 systemd-logind[1630]: Session 17 logged out. Waiting for processes to exit.
Jan 17 12:17:14.407515 systemd-logind[1630]: Removed session 17.
Jan 17 12:17:14.434446 sshd[5789]: Accepted publickey for core from 147.75.109.163 port 34012 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc
Jan 17 12:17:14.438614 sshd[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:17:14.445951 systemd-logind[1630]: New session 18 of user core.
Jan 17 12:17:14.452304 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 12:17:15.965019 sshd[5789]: pam_unix(sshd:session): session closed for user core
Jan 17 12:17:15.966931 systemd[1]: Started sshd@16-139.178.70.104:22-147.75.109.163:34014.service - OpenSSH per-connection server daemon (147.75.109.163:34014).
Jan 17 12:17:15.982354 systemd[1]: sshd@15-139.178.70.104:22-147.75.109.163:34012.service: Deactivated successfully.
Jan 17 12:17:15.984319 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 12:17:15.985085 systemd-logind[1630]: Session 18 logged out. Waiting for processes to exit.
Jan 17 12:17:15.986081 systemd-logind[1630]: Removed session 18.
Jan 17 12:17:16.061802 sshd[5805]: Accepted publickey for core from 147.75.109.163 port 34014 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc
Jan 17 12:17:16.062104 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:17:16.067910 systemd-logind[1630]: New session 19 of user core.
Jan 17 12:17:16.070965 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 12:17:16.279582 systemd-resolved[1545]: Under memory pressure, flushing caches.
Jan 17 12:17:16.279921 systemd-journald[1202]: Under memory pressure, flushing caches.
Jan 17 12:17:16.279589 systemd-resolved[1545]: Flushed all caches.
Jan 17 12:17:16.398183 sshd[5805]: pam_unix(sshd:session): session closed for user core
Jan 17 12:17:16.404981 systemd[1]: Started sshd@17-139.178.70.104:22-147.75.109.163:34026.service - OpenSSH per-connection server daemon (147.75.109.163:34026).
Jan 17 12:17:16.408437 systemd[1]: sshd@16-139.178.70.104:22-147.75.109.163:34014.service: Deactivated successfully.
Jan 17 12:17:16.414897 systemd-logind[1630]: Session 19 logged out. Waiting for processes to exit.
Jan 17 12:17:16.415613 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 12:17:16.418447 systemd-logind[1630]: Removed session 19.
Jan 17 12:17:16.456000 sshd[5821]: Accepted publickey for core from 147.75.109.163 port 34026 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc
Jan 17 12:17:16.457904 sshd[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:17:16.462840 systemd-logind[1630]: New session 20 of user core.
Jan 17 12:17:16.469221 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 12:17:16.630958 sshd[5821]: pam_unix(sshd:session): session closed for user core
Jan 17 12:17:16.633178 systemd[1]: sshd@17-139.178.70.104:22-147.75.109.163:34026.service: Deactivated successfully.
Jan 17 12:17:16.635711 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 12:17:16.636817 systemd-logind[1630]: Session 20 logged out. Waiting for processes to exit.
Jan 17 12:17:16.638710 systemd-logind[1630]: Removed session 20.
Jan 17 12:17:18.325982 systemd-resolved[1545]: Under memory pressure, flushing caches.
Jan 17 12:17:18.326882 systemd-journald[1202]: Under memory pressure, flushing caches.
Jan 17 12:17:18.325987 systemd-resolved[1545]: Flushed all caches.
Jan 17 12:17:21.637973 systemd[1]: Started sshd@18-139.178.70.104:22-147.75.109.163:37514.service - OpenSSH per-connection server daemon (147.75.109.163:37514).
Jan 17 12:17:21.677249 sshd[5841]: Accepted publickey for core from 147.75.109.163 port 37514 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc
Jan 17 12:17:21.677955 sshd[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:17:21.680843 systemd-logind[1630]: New session 21 of user core.
Jan 17 12:17:21.686021 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 12:17:21.815703 sshd[5841]: pam_unix(sshd:session): session closed for user core
Jan 17 12:17:21.818753 systemd-logind[1630]: Session 21 logged out. Waiting for processes to exit.
Jan 17 12:17:21.819505 systemd[1]: sshd@18-139.178.70.104:22-147.75.109.163:37514.service: Deactivated successfully.
Jan 17 12:17:21.820690 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 12:17:21.821041 systemd-logind[1630]: Removed session 21.
Jan 17 12:17:26.822011 systemd[1]: Started sshd@19-139.178.70.104:22-147.75.109.163:37526.service - OpenSSH per-connection server daemon (147.75.109.163:37526).
Jan 17 12:17:26.847302 sshd[5859]: Accepted publickey for core from 147.75.109.163 port 37526 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc
Jan 17 12:17:26.848355 sshd[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:17:26.851764 systemd-logind[1630]: New session 22 of user core.
Jan 17 12:17:26.854029 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 12:17:27.032657 sshd[5859]: pam_unix(sshd:session): session closed for user core
Jan 17 12:17:27.034680 systemd[1]: sshd@19-139.178.70.104:22-147.75.109.163:37526.service: Deactivated successfully.
Jan 17 12:17:27.036494 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 12:17:27.036910 systemd-logind[1630]: Session 22 logged out. Waiting for processes to exit.
Jan 17 12:17:27.037445 systemd-logind[1630]: Removed session 22.
Jan 17 12:17:32.041014 systemd[1]: Started sshd@20-139.178.70.104:22-147.75.109.163:36742.service - OpenSSH per-connection server daemon (147.75.109.163:36742).
Jan 17 12:17:32.079656 sshd[5873]: Accepted publickey for core from 147.75.109.163 port 36742 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc
Jan 17 12:17:32.080824 sshd[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:17:32.083626 systemd-logind[1630]: New session 23 of user core.
Jan 17 12:17:32.087923 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 12:17:32.237895 sshd[5873]: pam_unix(sshd:session): session closed for user core
Jan 17 12:17:32.239447 systemd-logind[1630]: Session 23 logged out. Waiting for processes to exit.
Jan 17 12:17:32.240370 systemd[1]: sshd@20-139.178.70.104:22-147.75.109.163:36742.service: Deactivated successfully.
Jan 17 12:17:32.242429 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 12:17:32.243548 systemd-logind[1630]: Removed session 23.