Jan 13 20:32:11.728053 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 20:32:11.728069 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:32:11.728075 kernel: Disabled fast string operations
Jan 13 20:32:11.728079 kernel: BIOS-provided physical RAM map:
Jan 13 20:32:11.728083 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jan 13 20:32:11.728087 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jan 13 20:32:11.728093 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jan 13 20:32:11.728097 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jan 13 20:32:11.728101 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jan 13 20:32:11.728105 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jan 13 20:32:11.728109 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jan 13 20:32:11.728113 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jan 13 20:32:11.728117 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jan 13 20:32:11.728121 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 13 20:32:11.728127 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jan 13 20:32:11.728132 kernel: NX (Execute Disable) protection: active
Jan 13 20:32:11.728136 kernel: APIC: Static calls initialized
Jan 13 20:32:11.728141 kernel: SMBIOS 2.7 present.
Jan 13 20:32:11.728146 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Jan 13 20:32:11.728150 kernel: vmware: hypercall mode: 0x00
Jan 13 20:32:11.728155 kernel: Hypervisor detected: VMware
Jan 13 20:32:11.728159 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Jan 13 20:32:11.728165 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Jan 13 20:32:11.728170 kernel: vmware: using clock offset of 2552865280 ns
Jan 13 20:32:11.728174 kernel: tsc: Detected 3408.000 MHz processor
Jan 13 20:32:11.728179 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:32:11.728184 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:32:11.728189 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Jan 13 20:32:11.728194 kernel: total RAM covered: 3072M
Jan 13 20:32:11.728213 kernel: Found optimal setting for mtrr clean up
Jan 13 20:32:11.728219 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Jan 13 20:32:11.728224 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Jan 13 20:32:11.728231 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:32:11.728236 kernel: Using GB pages for direct mapping
Jan 13 20:32:11.728240 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:32:11.728245 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Jan 13 20:32:11.728250 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL  440BX    06040000 VMW  01324272)
Jan 13 20:32:11.728254 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL  440BX    06040000 PTL  000F4240)
Jan 13 20:32:11.728259 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD  Custom   06040000 MSFT 03000001)
Jan 13 20:32:11.728264 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jan 13 20:32:11.728271 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jan 13 20:32:11.728276 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD  $SBFTBL$ 06040000  LTP 00000001)
Jan 13 20:32:11.728281 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD  ? APIC   06040000  LTP 00000000)
Jan 13 20:32:11.728286 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD  $PCITBL$ 06040000  LTP 00000001)
Jan 13 20:32:11.728291 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG  06040000 VMW  00000001)
Jan 13 20:32:11.728296 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW  00000001)
Jan 13 20:32:11.728302 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW  00000001)
Jan 13 20:32:11.728307 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Jan 13 20:32:11.728312 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Jan 13 20:32:11.728317 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jan 13 20:32:11.728322 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jan 13 20:32:11.728326 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Jan 13 20:32:11.728331 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Jan 13 20:32:11.728336 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Jan 13 20:32:11.728341 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Jan 13 20:32:11.728347 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Jan 13 20:32:11.728352 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Jan 13 20:32:11.728357 kernel: system APIC only can use physical flat
Jan 13 20:32:11.728362 kernel: APIC: Switched APIC routing to: physical flat
Jan 13 20:32:11.728367 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 20:32:11.728372 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 13 20:32:11.728377 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 13 20:32:11.728381 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 13 20:32:11.728386 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 13 20:32:11.728391 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 13 20:32:11.728397 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 13 20:32:11.728402 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 13 20:32:11.728406 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Jan 13 20:32:11.728411 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Jan 13 20:32:11.728416 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Jan 13 20:32:11.728421 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Jan 13 20:32:11.728425 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Jan 13 20:32:11.728430 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Jan 13 20:32:11.728435 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Jan 13 20:32:11.728440 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Jan 13 20:32:11.728445 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Jan 13 20:32:11.728450 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Jan 13 20:32:11.728455 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Jan 13 20:32:11.728460 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Jan 13 20:32:11.728465 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Jan 13 20:32:11.728469 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Jan 13 20:32:11.728474 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Jan 13 20:32:11.728479 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Jan 13 20:32:11.728484 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Jan 13 20:32:11.728488 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Jan 13 20:32:11.728494 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Jan 13 20:32:11.728499 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Jan 13 20:32:11.728504 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Jan 13 20:32:11.728508 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Jan 13 20:32:11.728513 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Jan 13 20:32:11.728518 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Jan 13 20:32:11.728523 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Jan 13 20:32:11.728528 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Jan 13 20:32:11.728532 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Jan 13 20:32:11.728537 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Jan 13 20:32:11.728543 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Jan 13 20:32:11.728548 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Jan 13 20:32:11.728552 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Jan 13 20:32:11.728557 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Jan 13 20:32:11.728562 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Jan 13 20:32:11.728567 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Jan 13 20:32:11.728572 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Jan 13 20:32:11.728576 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Jan 13 20:32:11.728581 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Jan 13 20:32:11.728586 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Jan 13 20:32:11.728592 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Jan 13 20:32:11.728596 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Jan 13 20:32:11.728601 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Jan 13 20:32:11.728606 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Jan 13 20:32:11.728611 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Jan 13 20:32:11.728616 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Jan 13 20:32:11.728620 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Jan 13 20:32:11.728625 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Jan 13 20:32:11.728630 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Jan 13 20:32:11.728635 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Jan 13 20:32:11.728639 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Jan 13 20:32:11.728645 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Jan 13 20:32:11.728650 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Jan 13 20:32:11.728658 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Jan 13 20:32:11.728664 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Jan 13 20:32:11.728669 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Jan 13 20:32:11.728674 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Jan 13 20:32:11.728679 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Jan 13 20:32:11.728685 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Jan 13 20:32:11.728691 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Jan 13 20:32:11.728696 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Jan 13 20:32:11.728701 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Jan 13 20:32:11.728706 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Jan 13 20:32:11.728711 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Jan 13 20:32:11.728716 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Jan 13 20:32:11.728721 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Jan 13 20:32:11.728726 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Jan 13 20:32:11.728731 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Jan 13 20:32:11.728737 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Jan 13 20:32:11.728743 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Jan 13 20:32:11.728748 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Jan 13 20:32:11.728753 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Jan 13 20:32:11.728758 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Jan 13 20:32:11.728763 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Jan 13 20:32:11.728768 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Jan 13 20:32:11.728773 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Jan 13 20:32:11.728779 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Jan 13 20:32:11.728784 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Jan 13 20:32:11.728789 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Jan 13 20:32:11.728795 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Jan 13 20:32:11.728800 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Jan 13 20:32:11.728805 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Jan 13 20:32:11.728810 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Jan 13 20:32:11.728815 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Jan 13 20:32:11.728820 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Jan 13 20:32:11.728826 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Jan 13 20:32:11.728831 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Jan 13 20:32:11.728836 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Jan 13 20:32:11.728841 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Jan 13 20:32:11.728846 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Jan 13 20:32:11.728852 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Jan 13 20:32:11.728857 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Jan 13 20:32:11.728862 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Jan 13 20:32:11.728867 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Jan 13 20:32:11.728872 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Jan 13 20:32:11.728877 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Jan 13 20:32:11.728882 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Jan 13 20:32:11.728887 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Jan 13 20:32:11.728893 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Jan 13 20:32:11.728898 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Jan 13 20:32:11.728904 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Jan 13 20:32:11.728909 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Jan 13 20:32:11.728914 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Jan 13 20:32:11.728919 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Jan 13 20:32:11.728924 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Jan 13 20:32:11.728929 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Jan 13 20:32:11.728934 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Jan 13 20:32:11.728939 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Jan 13 20:32:11.728944 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Jan 13 20:32:11.728950 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Jan 13 20:32:11.728956 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Jan 13 20:32:11.728961 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Jan 13 20:32:11.728966 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Jan 13 20:32:11.728971 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Jan 13 20:32:11.728976 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Jan 13 20:32:11.728981 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Jan 13 20:32:11.728987 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Jan 13 20:32:11.728992 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Jan 13 20:32:11.728997 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Jan 13 20:32:11.729002 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Jan 13 20:32:11.729008 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Jan 13 20:32:11.729013 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Jan 13 20:32:11.729018 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 13 20:32:11.729024 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 13 20:32:11.729029 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jan 13 20:32:11.729034 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Jan 13 20:32:11.729040 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Jan 13 20:32:11.729045 kernel: Zone ranges:
Jan 13 20:32:11.729050 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:32:11.729055 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Jan 13 20:32:11.729062 kernel: Normal empty
Jan 13 20:32:11.729067 kernel: Movable zone start for each node
Jan 13 20:32:11.729072 kernel: Early memory node ranges
Jan 13 20:32:11.729077 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Jan 13 20:32:11.729082 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Jan 13 20:32:11.729088 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Jan 13 20:32:11.729093 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jan 13 20:32:11.729098 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:32:11.729104 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jan 13 20:32:11.729110 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jan 13 20:32:11.729115 kernel: ACPI: PM-Timer IO Port: 0x1008
Jan 13 20:32:11.729121 kernel: system APIC only can use physical flat
Jan 13 20:32:11.729126 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jan 13 20:32:11.729131 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 13 20:32:11.729136 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 13 20:32:11.729141 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 13 20:32:11.729146 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 13 20:32:11.729152 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 13 20:32:11.729157 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 13 20:32:11.729163 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 13 20:32:11.729168 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 13 20:32:11.729173 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 13 20:32:11.729178 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 13 20:32:11.729183 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 13 20:32:11.729189 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 13 20:32:11.729194 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 13 20:32:11.729204 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 13 20:32:11.729210 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 13 20:32:11.729216 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 13 20:32:11.729221 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Jan 13 20:32:11.729226 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Jan 13 20:32:11.729231 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Jan 13 20:32:11.729236 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Jan 13 20:32:11.729241 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Jan 13 20:32:11.729247 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Jan 13 20:32:11.729252 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Jan 13 20:32:11.729257 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Jan 13 20:32:11.729262 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Jan 13 20:32:11.729268 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Jan 13 20:32:11.729273 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Jan 13 20:32:11.729279 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Jan 13 20:32:11.729284 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Jan 13 20:32:11.729289 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Jan 13 20:32:11.729294 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Jan 13 20:32:11.729299 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Jan 13 20:32:11.729304 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Jan 13 20:32:11.729309 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Jan 13 20:32:11.729314 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Jan 13 20:32:11.729320 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Jan 13 20:32:11.729326 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Jan 13 20:32:11.729331 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Jan 13 20:32:11.729336 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Jan 13 20:32:11.729341 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Jan 13 20:32:11.729346 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Jan 13 20:32:11.729351 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Jan 13 20:32:11.729357 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Jan 13 20:32:11.729362 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Jan 13 20:32:11.729367 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Jan 13 20:32:11.729373 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Jan 13 20:32:11.729378 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Jan 13 20:32:11.729383 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Jan 13 20:32:11.729388 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Jan 13 20:32:11.729393 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Jan 13 20:32:11.729398 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Jan 13 20:32:11.729404 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Jan 13 20:32:11.729409 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Jan 13 20:32:11.729414 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Jan 13 20:32:11.729420 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Jan 13 20:32:11.729425 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Jan 13 20:32:11.729430 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Jan 13 20:32:11.729435 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Jan 13 20:32:11.729440 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Jan 13 20:32:11.729446 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Jan 13 20:32:11.729451 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Jan 13 20:32:11.729456 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Jan 13 20:32:11.729461 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Jan 13 20:32:11.729466 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Jan 13 20:32:11.729472 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Jan 13 20:32:11.729478 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Jan 13 20:32:11.729483 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Jan 13 20:32:11.729488 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Jan 13 20:32:11.729493 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Jan 13 20:32:11.729499 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Jan 13 20:32:11.729504 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Jan 13 20:32:11.729509 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Jan 13 20:32:11.729514 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Jan 13 20:32:11.729519 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Jan 13 20:32:11.729525 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Jan 13 20:32:11.729531 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Jan 13 20:32:11.729536 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Jan 13 20:32:11.729541 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Jan 13 20:32:11.729546 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Jan 13 20:32:11.729551 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Jan 13 20:32:11.729556 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Jan 13 20:32:11.729561 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Jan 13 20:32:11.729566 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Jan 13 20:32:11.729573 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Jan 13 20:32:11.729587 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Jan 13 20:32:11.729594 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Jan 13 20:32:11.729599 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Jan 13 20:32:11.729604 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Jan 13 20:32:11.729609 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Jan 13 20:32:11.729614 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Jan 13 20:32:11.729619 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Jan 13 20:32:11.729624 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Jan 13 20:32:11.729629 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Jan 13 20:32:11.729636 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Jan 13 20:32:11.729642 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Jan 13 20:32:11.729647 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Jan 13 20:32:11.729652 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Jan 13 20:32:11.729657 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Jan 13 20:32:11.729662 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Jan 13 20:32:11.729667 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Jan 13 20:32:11.729673 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Jan 13 20:32:11.729678 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Jan 13 20:32:11.729683 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Jan 13 20:32:11.729689 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Jan 13 20:32:11.729694 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Jan 13 20:32:11.729699 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Jan 13 20:32:11.729704 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Jan 13 20:32:11.729710 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Jan 13 20:32:11.729715 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Jan 13 20:32:11.729720 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Jan 13 20:32:11.729725 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Jan 13 20:32:11.729730 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Jan 13 20:32:11.729735 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Jan 13 20:32:11.729741 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Jan 13 20:32:11.729746 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Jan 13 20:32:11.729751 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Jan 13 20:32:11.729756 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Jan 13 20:32:11.729762 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Jan 13 20:32:11.729767 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Jan 13 20:32:11.729772 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Jan 13 20:32:11.729777 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Jan 13 20:32:11.729782 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Jan 13 20:32:11.729788 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Jan 13 20:32:11.729794 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Jan 13 20:32:11.729799 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Jan 13 20:32:11.729804 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Jan 13 20:32:11.729809 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Jan 13 20:32:11.729814 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:32:11.729819 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jan 13 20:32:11.729825 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:32:11.729830 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Jan 13 20:32:11.729835 kernel: TSC deadline timer available
Jan 13 20:32:11.729842 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Jan 13 20:32:11.729847 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Jan 13 20:32:11.729852 kernel: Booting paravirtualized kernel on VMware hypervisor
Jan 13 20:32:11.729858 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:32:11.729863 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Jan 13 20:32:11.729868 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 13 20:32:11.729873 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 13 20:32:11.729879 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Jan 13 20:32:11.729888 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Jan 13 20:32:11.729896 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Jan 13 20:32:11.729901 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Jan 13 20:32:11.729906 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Jan 13 20:32:11.729918 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Jan 13 20:32:11.729925 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Jan 13 20:32:11.729930 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Jan 13 20:32:11.729935 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Jan 13 20:32:11.729941 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Jan 13 20:32:11.729947 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Jan 13 20:32:11.729953 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Jan 13 20:32:11.729958 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Jan 13 20:32:11.729964 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Jan 13 20:32:11.729969 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Jan 13 20:32:11.729975 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Jan 13 20:32:11.729981 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:32:11.729987 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
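[Editor's note, not part of the log] The three "usable" ranges in the BIOS-e820 map near the top of the log can be summed as a sanity check against the memory accounting the kernel prints later. A minimal sketch, using the range values copied from the log (e820 end addresses are inclusive); the result is a few pages above the 2096628K total reported in the later "Memory:" line, since the kernel reserves the first page and trims a little more before that count:

```python
# Usable ranges from the BIOS-e820 map in the log above (inclusive end addresses).
usable = [
    (0x0000000000000000, 0x000000000009ebff),
    (0x0000000000100000, 0x000000007fedffff),
    (0x000000007ff00000, 0x000000007fffffff),
]

total_bytes = sum(end - start + 1 for start, end in usable)
print(total_bytes // 1024, "KiB usable per e820")  # within a few pages of 2096628K
```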
Jan 13 20:32:11.729993 kernel: random: crng init done
Jan 13 20:32:11.729998 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jan 13 20:32:11.730004 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Jan 13 20:32:11.730010 kernel: printk: log_buf_len min size: 262144 bytes
Jan 13 20:32:11.730015 kernel: printk: log_buf_len: 1048576 bytes
Jan 13 20:32:11.730021 kernel: printk: early log buf free: 239648(91%)
Jan 13 20:32:11.730026 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:32:11.730032 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 20:32:11.730037 kernel: Fallback order for Node 0: 0
Jan 13 20:32:11.730045 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 515808
Jan 13 20:32:11.730050 kernel: Policy zone: DMA32
Jan 13 20:32:11.730056 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:32:11.730062 kernel: Memory: 1934328K/2096628K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 162040K reserved, 0K cma-reserved)
Jan 13 20:32:11.730068 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Jan 13 20:32:11.730074 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 20:32:11.730081 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:32:11.730086 kernel: Dynamic Preempt: voluntary
Jan 13 20:32:11.730092 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:32:11.730098 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:32:11.730103 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Jan 13 20:32:11.730109 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:32:11.730115 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:32:11.730120 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:32:11.730126 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
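[Editor's note, not part of the log] The printk sizing figures in the log above are internally consistent: each possible non-boot CPU contributes 4096 bytes (127 × 4096 = 520192), that is added to the 262144-byte minimum, and the result is rounded up to a power of two, giving the reported 1048576-byte buffer. Treating the last step as a round-up-to-power-of-two is an assumption inferred from the printed numbers (it matches how the kernel's per-CPU log_buf sizing behaves), sketched here:

```python
# Reproduce the printk buffer sizing shown in the log above.
nr_cpu_ids = 128
per_cpu_contrib = 4096   # "individual max cpu contribution"
min_size = 262144        # "log_buf_len min size"

cpu_extra = (nr_cpu_ids - 1) * per_cpu_contrib       # boot CPU uses the static buffer
total = 1 << (min_size + cpu_extra - 1).bit_length()  # assumed round-up to power of two

print(cpu_extra)  # 520192, matching "total cpu_extra contributions"
print(total)      # 1048576, matching "log_buf_len"
```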
Jan 13 20:32:11.730132 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jan 13 20:32:11.730138 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jan 13 20:32:11.730144 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Jan 13 20:32:11.730149 kernel: Console: colour VGA+ 80x25
Jan 13 20:32:11.730155 kernel: printk: console [tty0] enabled
Jan 13 20:32:11.730160 kernel: printk: console [ttyS0] enabled
Jan 13 20:32:11.730166 kernel: ACPI: Core revision 20230628
Jan 13 20:32:11.730171 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jan 13 20:32:11.730177 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:32:11.730182 kernel: x2apic enabled
Jan 13 20:32:11.730189 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:32:11.730231 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 20:32:11.730240 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jan 13 20:32:11.730245 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jan 13 20:32:11.730251 kernel: Disabled fast string operations
Jan 13 20:32:11.730256 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 20:32:11.730262 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 20:32:11.730268 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:32:11.730273 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 13 20:32:11.730281 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 13 20:32:11.730287 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 13 20:32:11.730292 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:32:11.730298 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 13 20:32:11.730303 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 13 20:32:11.730309 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 20:32:11.730315 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 20:32:11.730320 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:32:11.730327 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 13 20:32:11.730332 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 13 20:32:11.730338 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:32:11.730344 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:32:11.730349 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:32:11.730355 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 13 20:32:11.730360 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 20:32:11.730366 kernel: Freeing SMP alternatives memory: 32K Jan 13 20:32:11.730372 kernel: pid_max: default: 131072 minimum: 1024 Jan 13 20:32:11.730378 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:32:11.730384 kernel: landlock: Up and running. Jan 13 20:32:11.730390 kernel: SELinux: Initializing. Jan 13 20:32:11.730395 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 13 20:32:11.730401 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 13 20:32:11.730407 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 13 20:32:11.730412 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 13 20:32:11.730418 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 13 20:32:11.730424 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 13 20:32:11.730430 kernel: Performance Events: Skylake events, core PMU driver. Jan 13 20:32:11.730436 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jan 13 20:32:11.730442 kernel: core: CPUID marked event: 'instructions' unavailable Jan 13 20:32:11.730447 kernel: core: CPUID marked event: 'bus cycles' unavailable Jan 13 20:32:11.730452 kernel: core: CPUID marked event: 'cache references' unavailable Jan 13 20:32:11.730458 kernel: core: CPUID marked event: 'cache misses' unavailable Jan 13 20:32:11.730463 kernel: core: CPUID marked event: 'branch instructions' unavailable Jan 13 20:32:11.730470 kernel: core: CPUID marked event: 'branch misses' unavailable Jan 13 20:32:11.730476 kernel: ... version: 1 Jan 13 20:32:11.730482 kernel: ... bit width: 48 Jan 13 20:32:11.730487 kernel: ... generic registers: 4 Jan 13 20:32:11.730493 kernel: ... value mask: 0000ffffffffffff Jan 13 20:32:11.730498 kernel: ... 
max period: 000000007fffffff Jan 13 20:32:11.730504 kernel: ... fixed-purpose events: 0 Jan 13 20:32:11.730509 kernel: ... event mask: 000000000000000f Jan 13 20:32:11.730515 kernel: signal: max sigframe size: 1776 Jan 13 20:32:11.730521 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:32:11.730526 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:32:11.730533 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 13 20:32:11.730538 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:32:11.730544 kernel: smpboot: x86: Booting SMP configuration: Jan 13 20:32:11.730550 kernel: .... node #0, CPUs: #1 Jan 13 20:32:11.730555 kernel: Disabled fast string operations Jan 13 20:32:11.730561 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jan 13 20:32:11.730566 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 13 20:32:11.730572 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 20:32:11.730577 kernel: smpboot: Max logical packages: 128 Jan 13 20:32:11.730583 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jan 13 20:32:11.730589 kernel: devtmpfs: initialized Jan 13 20:32:11.730595 kernel: x86/mm: Memory block size: 128MB Jan 13 20:32:11.730600 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jan 13 20:32:11.730606 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:32:11.730612 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jan 13 20:32:11.730617 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:32:11.730623 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:32:11.730628 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:32:11.730634 kernel: audit: type=2000 audit(1736800330.066:1): state=initialized audit_enabled=0 res=1 Jan 13 20:32:11.730640 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:32:11.730646 
kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 20:32:11.730651 kernel: cpuidle: using governor menu Jan 13 20:32:11.730657 kernel: Simple Boot Flag at 0x36 set to 0x80 Jan 13 20:32:11.730663 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:32:11.730668 kernel: dca service started, version 1.12.1 Jan 13 20:32:11.730674 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jan 13 20:32:11.730679 kernel: PCI: Using configuration type 1 for base access Jan 13 20:32:11.730686 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 13 20:32:11.730692 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:32:11.730698 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:32:11.730703 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:32:11.730709 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:32:11.730715 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:32:11.730720 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:32:11.730726 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:32:11.730731 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:32:11.730738 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 20:32:11.730743 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 13 20:32:11.730749 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 20:32:11.730755 kernel: ACPI: Interpreter enabled Jan 13 20:32:11.730760 kernel: ACPI: PM: (supports S0 S1 S5) Jan 13 20:32:11.730766 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 20:32:11.730771 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 20:32:11.730777 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 20:32:11.730782 kernel: ACPI: Enabled 4 
GPEs in block 00 to 0F Jan 13 20:32:11.730789 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jan 13 20:32:11.730864 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:32:11.730958 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jan 13 20:32:11.731009 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 13 20:32:11.731017 kernel: PCI host bridge to bus 0000:00 Jan 13 20:32:11.731064 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 20:32:11.731107 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jan 13 20:32:11.731151 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 13 20:32:11.731193 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 20:32:11.731243 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jan 13 20:32:11.731285 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jan 13 20:32:11.731341 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jan 13 20:32:11.731395 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jan 13 20:32:11.731452 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jan 13 20:32:11.731505 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jan 13 20:32:11.731555 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jan 13 20:32:11.731603 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 13 20:32:11.731650 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 13 20:32:11.731699 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 13 20:32:11.731746 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 13 20:32:11.731802 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jan 13 20:32:11.731850 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed 
by PIIX4 ACPI Jan 13 20:32:11.731897 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jan 13 20:32:11.731948 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jan 13 20:32:11.731996 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jan 13 20:32:11.732044 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jan 13 20:32:11.732097 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jan 13 20:32:11.732145 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jan 13 20:32:11.732193 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jan 13 20:32:11.732546 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jan 13 20:32:11.732597 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jan 13 20:32:11.732646 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 20:32:11.732699 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jan 13 20:32:11.732755 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.732804 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.732856 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.732908 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.733018 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.733067 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.733124 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.733172 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.733274 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.733340 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.734291 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.734346 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot 
D3cold Jan 13 20:32:11.734403 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.734453 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.734506 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.734555 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.734607 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.734655 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.734709 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.734757 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.734809 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.734857 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.734910 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.734958 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.735012 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.735060 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.735111 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.735158 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737255 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.737316 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737377 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.737428 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737482 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.737531 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737585 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.737634 kernel: pci 0000:00:17.1: 
PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737689 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.737738 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737793 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.737842 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737893 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.737941 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737993 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738044 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738095 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738143 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738202 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738255 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738308 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738360 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738411 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738459 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738511 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738559 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738613 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738669 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738724 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738772 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738824 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jan 13 
20:32:11.738873 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738924 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738975 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.739027 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.739075 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.739124 kernel: pci_bus 0000:01: extended config space not accessible Jan 13 20:32:11.739174 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 20:32:11.739967 kernel: pci_bus 0000:02: extended config space not accessible Jan 13 20:32:11.739979 kernel: acpiphp: Slot [32] registered Jan 13 20:32:11.739987 kernel: acpiphp: Slot [33] registered Jan 13 20:32:11.739993 kernel: acpiphp: Slot [34] registered Jan 13 20:32:11.739999 kernel: acpiphp: Slot [35] registered Jan 13 20:32:11.740004 kernel: acpiphp: Slot [36] registered Jan 13 20:32:11.740010 kernel: acpiphp: Slot [37] registered Jan 13 20:32:11.740015 kernel: acpiphp: Slot [38] registered Jan 13 20:32:11.740021 kernel: acpiphp: Slot [39] registered Jan 13 20:32:11.740026 kernel: acpiphp: Slot [40] registered Jan 13 20:32:11.740032 kernel: acpiphp: Slot [41] registered Jan 13 20:32:11.740039 kernel: acpiphp: Slot [42] registered Jan 13 20:32:11.740044 kernel: acpiphp: Slot [43] registered Jan 13 20:32:11.740050 kernel: acpiphp: Slot [44] registered Jan 13 20:32:11.740055 kernel: acpiphp: Slot [45] registered Jan 13 20:32:11.740061 kernel: acpiphp: Slot [46] registered Jan 13 20:32:11.740066 kernel: acpiphp: Slot [47] registered Jan 13 20:32:11.740076 kernel: acpiphp: Slot [48] registered Jan 13 20:32:11.740082 kernel: acpiphp: Slot [49] registered Jan 13 20:32:11.740088 kernel: acpiphp: Slot [50] registered Jan 13 20:32:11.740095 kernel: acpiphp: Slot [51] registered Jan 13 20:32:11.740100 kernel: acpiphp: Slot [52] registered Jan 13 20:32:11.740106 kernel: acpiphp: Slot [53] registered 
Jan 13 20:32:11.740111 kernel: acpiphp: Slot [54] registered Jan 13 20:32:11.740117 kernel: acpiphp: Slot [55] registered Jan 13 20:32:11.740122 kernel: acpiphp: Slot [56] registered Jan 13 20:32:11.740128 kernel: acpiphp: Slot [57] registered Jan 13 20:32:11.740133 kernel: acpiphp: Slot [58] registered Jan 13 20:32:11.740139 kernel: acpiphp: Slot [59] registered Jan 13 20:32:11.740144 kernel: acpiphp: Slot [60] registered Jan 13 20:32:11.740151 kernel: acpiphp: Slot [61] registered Jan 13 20:32:11.740156 kernel: acpiphp: Slot [62] registered Jan 13 20:32:11.740162 kernel: acpiphp: Slot [63] registered Jan 13 20:32:11.740255 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jan 13 20:32:11.740309 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 13 20:32:11.740358 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 13 20:32:11.740406 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 13 20:32:11.740454 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jan 13 20:32:11.740505 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jan 13 20:32:11.740552 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jan 13 20:32:11.740600 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jan 13 20:32:11.740647 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jan 13 20:32:11.740701 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jan 13 20:32:11.740751 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jan 13 20:32:11.740800 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jan 13 20:32:11.740853 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 13 20:32:11.740926 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 13 
20:32:11.741018 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Jan 13 20:32:11.741082 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 13 20:32:11.741146 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 13 20:32:11.741194 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 13 20:32:11.741250 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 13 20:32:11.741299 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 13 20:32:11.741350 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 13 20:32:11.741397 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 13 20:32:11.741445 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 13 20:32:11.741493 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 13 20:32:11.741540 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 13 20:32:11.741588 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 13 20:32:11.741636 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 13 20:32:11.741687 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 13 20:32:11.741794 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 13 20:32:11.741911 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 13 20:32:11.741964 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 13 20:32:11.742016 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 13 20:32:11.742071 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 13 20:32:11.742123 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 13 20:32:11.742173 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 13 20:32:11.742276 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 13 20:32:11.742327 kernel: pci 0000:00:15.6: bridge window [mem 
0xfbd00000-0xfbdfffff] Jan 13 20:32:11.742377 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 13 20:32:11.742426 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 13 20:32:11.742476 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 13 20:32:11.742528 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 13 20:32:11.742584 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jan 13 20:32:11.742636 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jan 13 20:32:11.742688 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jan 13 20:32:11.742739 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jan 13 20:32:11.742790 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jan 13 20:32:11.742841 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 13 20:32:11.742913 kernel: pci 0000:0b:00.0: supports D1 D2 Jan 13 20:32:11.742981 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 20:32:11.743032 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 13 20:32:11.743082 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 13 20:32:11.743132 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 13 20:32:11.743181 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 13 20:32:11.743245 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 13 20:32:11.743296 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 13 20:32:11.743350 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 13 20:32:11.743400 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 13 20:32:11.743450 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 13 20:32:11.743500 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 13 20:32:11.743549 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 13 20:32:11.743599 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 13 20:32:11.743649 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 13 20:32:11.743702 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 13 20:32:11.743752 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 13 20:32:11.743802 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 13 20:32:11.743852 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 13 20:32:11.743905 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 13 20:32:11.743956 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 13 20:32:11.744006 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 13 20:32:11.744056 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 13 20:32:11.744109 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 13 20:32:11.744159 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 13 20:32:11.744487 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 13 20:32:11.744545 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 13 20:32:11.744596 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 13 20:32:11.744646 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 13 20:32:11.744695 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 13 20:32:11.744744 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 13 20:32:11.744795 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 13 20:32:11.744844 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 13 20:32:11.744893 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 13 20:32:11.744957 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 13 20:32:11.745042 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 13 20:32:11.745094 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 13 20:32:11.745145 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 13 20:32:11.745228 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 13 20:32:11.745285 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 13 20:32:11.745336 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 13 20:32:11.745386 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 13 20:32:11.745436 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 13 20:32:11.745485 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 13 20:32:11.745536 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 13 20:32:11.745585 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 13 20:32:11.745635 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 13 20:32:11.745688 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 13 20:32:11.745738 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 13 20:32:11.745787 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 13 20:32:11.745837 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 13 20:32:11.745886 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 13 20:32:11.745935 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 13 20:32:11.745985 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 13 20:32:11.746034 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 13 20:32:11.746086 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 13 20:32:11.746136 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 13 20:32:11.746186 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 13 20:32:11.746254 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 13 20:32:11.746306 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 13 20:32:11.746358 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 13 20:32:11.746407 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 13 20:32:11.746461 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 13 20:32:11.746510 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 13 20:32:11.746560 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 13 20:32:11.746610 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 13 20:32:11.746660 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 13 20:32:11.746710 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 13 20:32:11.746760 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 13 20:32:11.746809 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 13 20:32:11.746861 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 13 
20:32:11.746920 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 13 20:32:11.746971 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 13 20:32:11.747021 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 13 20:32:11.747071 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 13 20:32:11.747120 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 13 20:32:11.747170 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 13 20:32:11.747289 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 13 20:32:11.747342 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 13 20:32:11.747391 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 13 20:32:11.747438 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 13 20:32:11.747485 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 13 20:32:11.747493 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jan 13 20:32:11.747500 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jan 13 20:32:11.747506 kernel: ACPI: PCI: Interrupt link LNKB disabled Jan 13 20:32:11.747511 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 20:32:11.747517 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jan 13 20:32:11.747525 kernel: iommu: Default domain type: Translated Jan 13 20:32:11.747531 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 20:32:11.747536 kernel: PCI: Using ACPI for IRQ routing Jan 13 20:32:11.747542 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 20:32:11.747548 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jan 13 20:32:11.747554 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jan 13 20:32:11.747599 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jan 13 20:32:11.747647 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Jan 13 20:32:11.747694 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 20:32:11.747704 kernel: vgaarb: loaded Jan 13 20:32:11.747710 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jan 13 20:32:11.747716 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jan 13 20:32:11.747721 kernel: clocksource: Switched to clocksource tsc-early Jan 13 20:32:11.747727 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:32:11.747733 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:32:11.747742 kernel: pnp: PnP ACPI init Jan 13 20:32:11.747830 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jan 13 20:32:11.747881 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jan 13 20:32:11.747926 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jan 13 20:32:11.747974 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jan 13 20:32:11.748020 kernel: pnp 00:06: [dma 2] Jan 13 20:32:11.748068 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jan 13 20:32:11.748112 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jan 13 20:32:11.748159 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jan 13 20:32:11.748167 kernel: pnp: PnP ACPI: found 8 devices Jan 13 20:32:11.748173 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 20:32:11.748179 kernel: NET: Registered PF_INET protocol family Jan 13 20:32:11.748185 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 20:32:11.748190 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 13 20:32:11.748236 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:32:11.748243 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 13 20:32:11.748249 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 20:32:11.748257 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 13 20:32:11.748279 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 13 20:32:11.748285 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 13 20:32:11.748310 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:32:11.748316 kernel: NET: Registered PF_XDP protocol family Jan 13 20:32:11.748397 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jan 13 20:32:11.748447 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 13 20:32:11.748496 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 13 20:32:11.748548 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 13 20:32:11.748597 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 13 20:32:11.748646 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 13 20:32:11.748695 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jan 13 20:32:11.748744 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 13 20:32:11.748796 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 13 20:32:11.748844 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 13 20:32:11.748893 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 13 20:32:11.748941 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 13 20:32:11.748989 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 13 
20:32:11.749038 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 13 20:32:11.749089 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 13 20:32:11.749138 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 13 20:32:11.749186 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 13 20:32:11.749280 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 13 20:32:11.749330 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 13 20:32:11.749379 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 13 20:32:11.749430 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 13 20:32:11.749478 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 13 20:32:11.749526 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jan 13 20:32:11.749574 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jan 13 20:32:11.749621 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jan 13 20:32:11.749669 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.749719 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.749769 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.749817 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.749865 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.749916 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.749964 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750011 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 
13 20:32:11.750059 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750107 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750157 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750222 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750273 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750331 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750380 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750429 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750476 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750525 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750576 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750623 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750671 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750719 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750766 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750815 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750863 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750948 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750999 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.751047 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.751095 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.751142 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.751189 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.752270 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.752336 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.752388 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.752441 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.752490 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.752539 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.752588 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.752635 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.752683 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.752775 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.752841 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.752893 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.752976 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.753026 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.753075 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.753124 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.753173 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754185 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.754257 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754310 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.754363 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754412 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Jan 13 20:32:11.754461 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754509 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.754558 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754606 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.754653 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754701 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.754749 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754797 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.754847 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754895 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.754943 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755006 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755054 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755102 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755151 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755211 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755261 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755312 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755359 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755407 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755454 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755501 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755549 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755597 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755645 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755693 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755741 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755791 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755839 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755887 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755935 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755986 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 20:32:11.756034 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jan 13 20:32:11.756083 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 13 20:32:11.756147 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 13 20:32:11.756663 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 13 20:32:11.756733 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jan 13 20:32:11.756788 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 13 20:32:11.756839 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 13 20:32:11.756896 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 13 20:32:11.756969 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jan 13 20:32:11.757019 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 13 20:32:11.757066 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 13 20:32:11.757114 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 13 20:32:11.757164 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 13 
20:32:11.757225 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 13 20:32:11.757276 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 13 20:32:11.757324 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 13 20:32:11.757371 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 13 20:32:11.757419 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 13 20:32:11.757467 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 13 20:32:11.757515 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 13 20:32:11.757562 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 13 20:32:11.757610 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 13 20:32:11.757661 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 13 20:32:11.757712 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 13 20:32:11.757761 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 13 20:32:11.757809 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 13 20:32:11.757856 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 13 20:32:11.757913 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 13 20:32:11.757969 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 13 20:32:11.758018 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 13 20:32:11.758066 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 13 20:32:11.758114 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 13 20:32:11.758178 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jan 13 20:32:11.758280 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 13 20:32:11.758329 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 13 20:32:11.758379 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Jan 13 20:32:11.758427 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jan 13 20:32:11.758478 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 13 20:32:11.758526 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 13 20:32:11.758574 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 13 20:32:11.758622 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 13 20:32:11.758671 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 13 20:32:11.758719 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 13 20:32:11.758766 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 13 20:32:11.758814 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 13 20:32:11.758862 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 13 20:32:11.758961 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 13 20:32:11.759010 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 13 20:32:11.759058 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 13 20:32:11.759106 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 13 20:32:11.759154 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 13 20:32:11.759231 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 13 20:32:11.759281 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 13 20:32:11.759330 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 13 20:32:11.759378 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 13 20:32:11.759426 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 13 20:32:11.759477 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 13 20:32:11.759525 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 13 20:32:11.759573 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Jan 13 20:32:11.759621 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 13 20:32:11.759669 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 13 20:32:11.759717 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 13 20:32:11.759765 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 13 20:32:11.759813 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 13 20:32:11.759860 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 13 20:32:11.759949 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 13 20:32:11.759998 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 13 20:32:11.760046 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 13 20:32:11.760094 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 13 20:32:11.760142 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 13 20:32:11.760190 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 13 20:32:11.760257 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 13 20:32:11.760306 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 13 20:32:11.760354 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 13 20:32:11.760402 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 13 20:32:11.760453 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 13 20:32:11.760501 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 13 20:32:11.760549 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 13 20:32:11.760597 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 13 20:32:11.760645 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 13 20:32:11.760693 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 13 
20:32:11.760741 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 13 20:32:11.760789 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 13 20:32:11.760837 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 13 20:32:11.760911 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 13 20:32:11.760974 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 13 20:32:11.761022 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 13 20:32:11.761079 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 13 20:32:11.761128 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 13 20:32:11.761176 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 13 20:32:11.761243 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 13 20:32:11.761294 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 13 20:32:11.761343 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 13 20:32:11.761393 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 13 20:32:11.761450 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 13 20:32:11.761502 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 13 20:32:11.761551 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 13 20:32:11.761599 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 13 20:32:11.761648 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 13 20:32:11.761696 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 13 20:32:11.761744 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 13 20:32:11.761793 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 13 20:32:11.761841 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 13 20:32:11.761893 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Jan 13 20:32:11.761973 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 13 20:32:11.762021 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 13 20:32:11.762069 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 13 20:32:11.762117 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 13 20:32:11.762166 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 13 20:32:11.762251 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 13 20:32:11.762300 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 13 20:32:11.762349 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 13 20:32:11.762397 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 13 20:32:11.762448 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jan 13 20:32:11.762492 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 13 20:32:11.762534 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 13 20:32:11.762577 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jan 13 20:32:11.762630 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jan 13 20:32:11.762680 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jan 13 20:32:11.762724 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jan 13 20:32:11.762804 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 13 20:32:11.762851 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jan 13 20:32:11.762916 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 13 20:32:11.762990 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 13 20:32:11.763034 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jan 13 20:32:11.763079 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jan 13 20:32:11.763127 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jan 13 20:32:11.763175 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jan 13 20:32:11.763237 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jan 13 20:32:11.763287 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jan 13 20:32:11.763332 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jan 13 20:32:11.763377 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jan 13 20:32:11.763427 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jan 13 20:32:11.763472 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jan 13 20:32:11.763525 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jan 13 20:32:11.763580 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jan 13 20:32:11.763625 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jan 13 20:32:11.763672 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jan 13 20:32:11.763717 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 13 20:32:11.763764 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jan 13 20:32:11.763809 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jan 13 20:32:11.763859 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jan 13 20:32:11.763924 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jan 13 20:32:11.763975 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jan 13 20:32:11.764029 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jan 13 20:32:11.764080 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jan 13 20:32:11.764133 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jan 13 20:32:11.764179 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jan 13 20:32:11.764256 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Jan 13 20:32:11.764303 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jan 13 20:32:11.764350 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jan 13 20:32:11.764402 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jan 13 20:32:11.764449 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jan 13 20:32:11.764500 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jan 13 20:32:11.764550 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jan 13 20:32:11.764595 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 13 20:32:11.764645 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jan 13 20:32:11.764691 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 13 20:32:11.764739 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jan 13 20:32:11.764788 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jan 13 20:32:11.764837 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jan 13 20:32:11.764883 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jan 13 20:32:11.764932 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jan 13 20:32:11.764979 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 13 20:32:11.765028 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jan 13 20:32:11.765076 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jan 13 20:32:11.765122 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 13 20:32:11.765171 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jan 13 20:32:11.765229 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jan 13 20:32:11.765276 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jan 13 20:32:11.765326 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Jan 13 20:32:11.765372 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jan 13 20:32:11.765525 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jan 13 20:32:11.765753 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jan 13 20:32:11.765803 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 13 20:32:11.765881 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jan 13 20:32:11.765930 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 13 20:32:11.765980 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jan 13 20:32:11.766026 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jan 13 20:32:11.766079 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jan 13 20:32:11.766125 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jan 13 20:32:11.766174 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jan 13 20:32:11.766231 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 13 20:32:11.766285 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 13 20:32:11.766333 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jan 13 20:32:11.766379 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jan 13 20:32:11.766427 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jan 13 20:32:11.766473 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jan 13 20:32:11.766518 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jan 13 20:32:11.766567 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jan 13 20:32:11.766612 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jan 13 20:32:11.766667 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jan 13 20:32:11.766714 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Jan 13 20:32:11.766763 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jan 13 20:32:11.766809 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jan 13 20:32:11.766858 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jan 13 20:32:11.766922 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jan 13 20:32:11.766974 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jan 13 20:32:11.767036 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jan 13 20:32:11.767100 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jan 13 20:32:11.767146 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 13 20:32:11.767204 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 13 20:32:11.767215 kernel: PCI: CLS 32 bytes, default 64 Jan 13 20:32:11.767221 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 13 20:32:11.767229 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 13 20:32:11.767236 kernel: clocksource: Switched to clocksource tsc Jan 13 20:32:11.767242 kernel: Initialise system trusted keyrings Jan 13 20:32:11.767249 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 13 20:32:11.767256 kernel: Key type asymmetric registered Jan 13 20:32:11.767261 kernel: Asymmetric key parser 'x509' registered Jan 13 20:32:11.767267 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 20:32:11.767273 kernel: io scheduler mq-deadline registered Jan 13 20:32:11.767279 kernel: io scheduler kyber registered Jan 13 20:32:11.767286 kernel: io scheduler bfq registered Jan 13 20:32:11.767336 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jan 13 20:32:11.767387 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.767453 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Jan 13 20:32:11.767507 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.767557 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Jan 13 20:32:11.767607 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.767657 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Jan 13 20:32:11.767711 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.767767 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Jan 13 20:32:11.767817 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.767866 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Jan 13 20:32:11.767915 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.767967 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Jan 13 20:32:11.768022 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.768077 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Jan 13 20:32:11.768127 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.768176 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Jan 13 20:32:11.768242 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.768292 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Jan 13 20:32:11.768342 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.768391 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Jan 13 20:32:11.768440 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.768495 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Jan 13 20:32:11.768564 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.768619 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Jan 13 20:32:11.768668 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.768717 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Jan 13 20:32:11.768766 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.768815 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Jan 13 20:32:11.768866 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.768916 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Jan 13 20:32:11.768965 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.769015 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Jan 13 20:32:11.769069 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.769144 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Jan 13 20:32:11.769229 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.769286 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Jan 13 20:32:11.769335 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.769383 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Jan 13 20:32:11.769432 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.769482 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Jan 13 20:32:11.769531 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.769582 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Jan 13 20:32:11.769632 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.769681 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Jan 13 20:32:11.769730 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.769780 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Jan 13 20:32:11.769832 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.769880 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Jan 13 20:32:11.769933 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.769981 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Jan 13 20:32:11.770029 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.770078 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Jan 13 20:32:11.770144 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.770203 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Jan 13 20:32:11.770259 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.770309 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Jan 13 20:32:11.770358 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.770414 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Jan 13 20:32:11.770476 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.770526 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Jan 13 20:32:11.770575 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.770624 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Jan 13 20:32:11.770673 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 20:32:11.770684 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:32:11.770691 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:32:11.770697 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:32:11.770703 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Jan 13 20:32:11.770709 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:32:11.770715 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:32:11.770764 kernel: rtc_cmos 00:01: registered as rtc0
Jan 13 20:32:11.770810 kernel: rtc_cmos 00:01: setting system clock to 2025-01-13T20:32:11 UTC (1736800331)
Jan 13 20:32:11.770857 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Jan 13 20:32:11.770866 kernel: intel_pstate: CPU model not supported
Jan 13 20:32:11.770872 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:32:11.770879 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:32:11.770889 kernel: Segment Routing with IPv6
Jan 13 20:32:11.770913 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:32:11.770920 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:32:11.770926 kernel: Key type dns_resolver registered
Jan 13 20:32:11.770934 kernel: IPI shorthand broadcast: enabled
Jan 13 20:32:11.770940 kernel: sched_clock: Marking stable (861003560, 220489294)->(1134660177, -53167323)
Jan 13 20:32:11.770946 kernel: registered taskstats version 1
Jan 13 20:32:11.770952 kernel: Loading compiled-in X.509 certificates
Jan 13 20:32:11.770958 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 13 20:32:11.770964 kernel: Key type .fscrypt registered
Jan 13 20:32:11.770984 kernel: Key type fscrypt-provisioning registered
Jan 13 20:32:11.770990 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:32:11.770996 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:32:11.771003 kernel: ima: No architecture policies found
Jan 13 20:32:11.771009 kernel: clk: Disabling unused clocks
Jan 13 20:32:11.771015 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 13 20:32:11.771021 kernel: Write protecting the kernel read-only data: 38912k
Jan 13 20:32:11.771027 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 13 20:32:11.771033 kernel: Run /init as init process
Jan 13 20:32:11.771038 kernel: with arguments:
Jan 13 20:32:11.771045 kernel: /init
Jan 13 20:32:11.771050 kernel: with environment:
Jan 13 20:32:11.771057 kernel: HOME=/
Jan 13 20:32:11.771063 kernel: TERM=linux
Jan 13 20:32:11.771069 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:32:11.771076 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:32:11.771084 systemd[1]: Detected virtualization vmware.
Jan 13 20:32:11.771090 systemd[1]: Detected architecture x86-64.
Jan 13 20:32:11.771096 systemd[1]: Running in initrd.
Jan 13 20:32:11.771102 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:32:11.771110 systemd[1]: Hostname set to .
Jan 13 20:32:11.771116 systemd[1]: Initializing machine ID from random generator.
Jan 13 20:32:11.771123 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:32:11.771129 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:32:11.771135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:32:11.771142 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:32:11.771148 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:32:11.771154 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:32:11.771161 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:32:11.771168 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:32:11.771175 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:32:11.771181 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:32:11.771187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:32:11.771193 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:32:11.771284 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:32:11.771293 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:32:11.771299 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:32:11.771305 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:32:11.771312 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:32:11.771318 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:32:11.771324 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:32:11.771330 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:32:11.771336 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:32:11.771342 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:32:11.771350 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:32:11.771356 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:32:11.771363 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:32:11.771369 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:32:11.771375 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:32:11.771381 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:32:11.771387 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:32:11.771393 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:32:11.771401 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:32:11.771419 systemd-journald[217]: Collecting audit messages is disabled.
Jan 13 20:32:11.771436 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:32:11.771442 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:32:11.771450 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:32:11.771457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:32:11.771463 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:32:11.771469 kernel: Bridge firewalling registered
Jan 13 20:32:11.771476 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:32:11.771483 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:32:11.771490 systemd-journald[217]: Journal started
Jan 13 20:32:11.771504 systemd-journald[217]: Runtime Journal (/run/log/journal/aff5646d89484528aec676137f0a08e0) is 4.8M, max 38.6M, 33.8M free.
Jan 13 20:32:11.744296 systemd-modules-load[218]: Inserted module 'overlay'
Jan 13 20:32:11.769084 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 13 20:32:11.773497 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:32:11.774210 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:32:11.779344 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:32:11.780181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:32:11.786266 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:32:11.786551 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:32:11.786795 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:32:11.789349 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:32:11.797355 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:32:11.800289 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:32:11.800568 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:32:11.804982 dracut-cmdline[249]: dracut-dracut-053
Jan 13 20:32:11.809060 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:32:11.820136 systemd-resolved[250]: Positive Trust Anchors:
Jan 13 20:32:11.820143 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:32:11.820165 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:32:11.821993 systemd-resolved[250]: Defaulting to hostname 'linux'.
Jan 13 20:32:11.822582 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:32:11.822709 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:32:11.850207 kernel: SCSI subsystem initialized
Jan 13 20:32:11.855206 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:32:11.862209 kernel: iscsi: registered transport (tcp)
Jan 13 20:32:11.874207 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:32:11.874222 kernel: QLogic iSCSI HBA Driver
Jan 13 20:32:11.892738 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:32:11.896314 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:32:11.910225 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:32:11.910245 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:32:11.910254 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:32:11.942242 kernel: raid6: avx2x4 gen() 49340 MB/s
Jan 13 20:32:11.958240 kernel: raid6: avx2x2 gen() 55653 MB/s
Jan 13 20:32:11.975395 kernel: raid6: avx2x1 gen() 46705 MB/s
Jan 13 20:32:11.975412 kernel: raid6: using algorithm avx2x2 gen() 55653 MB/s
Jan 13 20:32:11.993395 kernel: raid6: .... xor() 33429 MB/s, rmw enabled
Jan 13 20:32:11.993417 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 20:32:12.006204 kernel: xor: automatically using best checksumming function avx
Jan 13 20:32:12.091211 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:32:12.096376 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:32:12.101289 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:32:12.108309 systemd-udevd[434]: Using default interface naming scheme 'v255'.
Jan 13 20:32:12.110617 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:32:12.121321 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:32:12.127925 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation
Jan 13 20:32:12.142840 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:32:12.150413 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:32:12.217349 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:32:12.222284 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:32:12.229421 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:32:12.230082 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:32:12.230840 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:32:12.231240 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:32:12.236304 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:32:12.243278 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:32:12.281206 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Jan 13 20:32:12.283205 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Jan 13 20:32:12.287327 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Jan 13 20:32:12.291231 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Jan 13 20:32:12.295865 kernel: vmw_pvscsi: using 64bit dma
Jan 13 20:32:12.295883 kernel: vmw_pvscsi: max_id: 16
Jan 13 20:32:12.295892 kernel: vmw_pvscsi: setting ring_pages to 8
Jan 13 20:32:12.298215 kernel: vmw_pvscsi: enabling reqCallThreshold
Jan 13 20:32:12.298232 kernel: vmw_pvscsi: driver-based request coalescing enabled
Jan 13 20:32:12.298244 kernel: vmw_pvscsi: using MSI-X
Jan 13 20:32:12.301240 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Jan 13 20:32:12.304293 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Jan 13 20:32:12.309370 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:32:12.309380 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Jan 13 20:32:12.309458 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Jan 13 20:32:12.315516 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:32:12.315736 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:32:12.316074 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:32:12.316173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:32:12.316258 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:32:12.316370 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:32:12.322531 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 20:32:12.322550 kernel: AES CTR mode by8 optimization enabled
Jan 13 20:32:12.322667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:32:12.323205 kernel: libata version 3.00 loaded.
Jan 13 20:32:12.325204 kernel: ata_piix 0000:00:07.1: version 2.13
Jan 13 20:32:12.333087 kernel: scsi host1: ata_piix
Jan 13 20:32:12.333158 kernel: scsi host2: ata_piix
Jan 13 20:32:12.333267 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Jan 13 20:32:12.333276 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Jan 13 20:32:12.341782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:32:12.345292 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:32:12.357415 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:32:12.499217 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Jan 13 20:32:12.506214 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Jan 13 20:32:12.515628 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Jan 13 20:32:12.523099 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 13 20:32:12.523375 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Jan 13 20:32:12.523463 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Jan 13 20:32:12.523541 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Jan 13 20:32:12.523615 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:32:12.523626 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 13 20:32:12.540233 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Jan 13 20:32:12.551408 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 20:32:12.551422 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (495)
Jan 13 20:32:12.551430 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 13 20:32:12.555294 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (490)
Jan 13 20:32:12.554039 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Jan 13 20:32:12.557839 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Jan 13 20:32:12.560470 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Jan 13 20:32:12.562594 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Jan 13 20:32:12.562713 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Jan 13 20:32:12.570289 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:32:12.596478 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:32:13.618725 disk-uuid[593]: The operation has completed successfully.
Jan 13 20:32:13.621208 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:32:13.655767 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:32:13.656037 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:32:13.659306 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:32:13.660994 sh[611]: Success
Jan 13 20:32:13.669209 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 20:32:13.714000 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:32:13.726097 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:32:13.726466 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:32:13.741762 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 13 20:32:13.741785 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:32:13.741794 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:32:13.742921 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:32:13.744384 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:32:13.751210 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:32:13.753346 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:32:13.763337 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Jan 13 20:32:13.764849 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:32:13.786475 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:32:13.786499 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:32:13.786509 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:32:13.834223 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:32:13.840577 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:32:13.843218 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:32:13.847445 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:32:13.856716 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:32:13.863370 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jan 13 20:32:13.868323 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:32:13.920624 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:32:13.931345 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:32:13.943353 systemd-networkd[799]: lo: Link UP
Jan 13 20:32:13.943361 systemd-networkd[799]: lo: Gained carrier
Jan 13 20:32:13.944183 systemd-networkd[799]: Enumeration completed
Jan 13 20:32:13.944504 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:32:13.944550 systemd-networkd[799]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Jan 13 20:32:13.948628 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Jan 13 20:32:13.948747 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Jan 13 20:32:13.947925 systemd-networkd[799]: ens192: Link UP
Jan 13 20:32:13.947927 systemd-networkd[799]: ens192: Gained carrier
Jan 13 20:32:13.948378 systemd[1]: Reached target network.target - Network.
Jan 13 20:32:13.960942 ignition[671]: Ignition 2.20.0
Jan 13 20:32:13.961187 ignition[671]: Stage: fetch-offline
Jan 13 20:32:13.961339 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:32:13.961464 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 13 20:32:13.961652 ignition[671]: parsed url from cmdline: ""
Jan 13 20:32:13.961682 ignition[671]: no config URL provided
Jan 13 20:32:13.961791 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:32:13.961924 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:32:13.962406 ignition[671]: config successfully fetched
Jan 13 20:32:13.962423 ignition[671]: parsing config with SHA512: 18820ce99c1def43d174d421c514abab9449b228f2fa0af38cfea748a5ce5e813085bce6db485e78de311edad2bcf5c2814e9a9fddddc48054d7de85048e3fd7
Jan 13 20:32:13.964919 unknown[671]: fetched base config from "system"
Jan 13 20:32:13.964930 unknown[671]: fetched user config from "vmware"
Jan 13 20:32:13.965265 ignition[671]: fetch-offline: fetch-offline passed
Jan 13 20:32:13.965310 ignition[671]: Ignition finished successfully
Jan 13 20:32:13.966026 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:32:13.966378 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 20:32:13.970291 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:32:13.977034 ignition[808]: Ignition 2.20.0
Jan 13 20:32:13.977040 ignition[808]: Stage: kargs
Jan 13 20:32:13.977160 ignition[808]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:32:13.977166 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 13 20:32:13.977808 ignition[808]: kargs: kargs passed
Jan 13 20:32:13.977851 ignition[808]: Ignition finished successfully
Jan 13 20:32:13.978788 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:32:13.986431 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:32:13.992529 ignition[814]: Ignition 2.20.0
Jan 13 20:32:13.992538 ignition[814]: Stage: disks
Jan 13 20:32:13.992627 ignition[814]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:32:13.992633 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 13 20:32:13.993165 ignition[814]: disks: disks passed
Jan 13 20:32:13.993192 ignition[814]: Ignition finished successfully
Jan 13 20:32:13.993765 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:32:13.994178 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:32:13.994431 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:32:13.994656 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:32:13.994850 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:32:13.995045 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:32:14.003367 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:32:14.015337 systemd-fsck[823]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 20:32:14.016337 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:32:14.020286 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:32:14.069209 kernel: EXT4-fs (sda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 13 20:32:14.069651 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:32:14.069960 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:32:14.073240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:32:14.074263 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:32:14.074611 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:32:14.074642 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:32:14.074663 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:32:14.078107 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:32:14.078812 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:32:14.083287 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (831)
Jan 13 20:32:14.085468 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:32:14.085485 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:32:14.085493 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:32:14.089214 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:32:14.090040 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:32:14.107028 initrd-setup-root[855]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:32:14.109115 initrd-setup-root[862]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:32:14.111437 initrd-setup-root[869]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:32:14.113907 initrd-setup-root[876]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:32:14.163104 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:32:14.167266 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:32:14.168716 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:32:14.173257 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:32:14.184237 ignition[943]: INFO : Ignition 2.20.0
Jan 13 20:32:14.184237 ignition[943]: INFO : Stage: mount
Jan 13 20:32:14.184237 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:32:14.185896 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 13 20:32:14.185896 ignition[943]: INFO : mount: mount passed
Jan 13 20:32:14.185896 ignition[943]: INFO : Ignition finished successfully
Jan 13 20:32:14.185328 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:32:14.189291 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:32:14.190373 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:32:14.740319 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:32:14.746367 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:32:14.760419 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (955) Jan 13 20:32:14.760448 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:32:14.760465 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:32:14.762571 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:32:14.766215 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:32:14.767143 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:32:14.784033 ignition[972]: INFO : Ignition 2.20.0 Jan 13 20:32:14.784033 ignition[972]: INFO : Stage: files Jan 13 20:32:14.784495 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:32:14.784495 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 13 20:32:14.784843 ignition[972]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:32:14.785360 ignition[972]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:32:14.785360 ignition[972]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:32:14.787426 ignition[972]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:32:14.787555 ignition[972]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:32:14.787676 unknown[972]: wrote ssh authorized keys file for user: core Jan 13 20:32:14.787839 ignition[972]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:32:14.789722 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:32:14.789881 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 20:32:14.829494 
ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 20:32:14.905493 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:32:14.905493 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:32:14.907108 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:32:14.907108 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:32:14.907108 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:32:14.907108 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:32:14.907108 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 13 20:32:15.380569 systemd-networkd[799]: ens192: Gained IPv6LL Jan 13 20:32:15.405228 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 20:32:15.804191 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:32:15.804191 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 13 20:32:15.804764 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(e): [started] 
processing unit "coreos-metadata.service" Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 20:32:15.845366 ignition[972]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:32:15.847712 ignition[972]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:32:15.847712 ignition[972]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 20:32:15.847712 ignition[972]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:32:15.847712 ignition[972]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:32:15.848737 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:32:15.848737 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:32:15.848737 ignition[972]: INFO : files: files passed Jan 13 20:32:15.848737 ignition[972]: INFO : Ignition finished successfully Jan 13 20:32:15.848589 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:32:15.853329 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 13 20:32:15.854786 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:32:15.855158 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:32:15.855247 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:32:15.861084 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:32:15.861084 initrd-setup-root-after-ignition[1003]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:32:15.861967 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:32:15.863005 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:32:15.863445 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:32:15.866274 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:32:15.877375 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:32:15.877435 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:32:15.877682 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:32:15.877789 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:32:15.877972 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:32:15.878405 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:32:15.886940 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:32:15.890388 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:32:15.895530 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jan 13 20:32:15.895781 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:32:15.896086 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:32:15.896312 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:32:15.896375 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:32:15.896582 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:32:15.896713 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:32:15.896834 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:32:15.896977 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:32:15.897163 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:32:15.897376 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:32:15.897557 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:32:15.897754 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:32:15.898087 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:32:15.898277 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:32:15.898481 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:32:15.898561 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:32:15.898863 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:32:15.898994 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:32:15.899163 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:32:15.899212 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:32:15.899351 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jan 13 20:32:15.899407 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:32:15.899635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:32:15.899694 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:32:15.899932 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:32:15.900074 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:32:15.902334 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:32:15.902517 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:32:15.902709 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:32:15.902883 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:32:15.902977 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:32:15.903179 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:32:15.903235 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:32:15.903495 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:32:15.903574 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:32:15.903796 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:32:15.903870 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:32:15.913339 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:32:15.916334 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:32:15.916468 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:32:15.916576 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:32:15.916788 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 13 20:32:15.916882 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:32:15.921065 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:32:15.921287 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:32:15.923575 ignition[1027]: INFO : Ignition 2.20.0 Jan 13 20:32:15.923575 ignition[1027]: INFO : Stage: umount Jan 13 20:32:15.923853 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:32:15.923853 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 13 20:32:15.924997 ignition[1027]: INFO : umount: umount passed Jan 13 20:32:15.924997 ignition[1027]: INFO : Ignition finished successfully Jan 13 20:32:15.925314 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:32:15.925362 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:32:15.926437 systemd[1]: Stopped target network.target - Network. Jan 13 20:32:15.926530 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:32:15.926562 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:32:15.926679 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:32:15.926702 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:32:15.926811 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:32:15.926832 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:32:15.926926 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:32:15.926948 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:32:15.927112 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:32:15.927268 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:32:15.929608 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 13 20:32:15.931307 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:32:15.931359 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:32:15.931551 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:32:15.931572 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:32:15.937431 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:32:15.937528 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:32:15.937555 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:32:15.937678 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jan 13 20:32:15.937699 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 13 20:32:15.937851 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:32:15.938064 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:32:15.939148 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:32:15.941291 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:32:15.941466 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:32:15.941723 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:32:15.941877 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:32:15.942129 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:32:15.942316 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:32:15.947597 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:32:15.947829 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 13 20:32:15.948239 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:32:15.948452 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:32:15.948806 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:32:15.948833 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:32:15.948952 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:32:15.948970 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:32:15.949073 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:32:15.949096 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:32:15.950094 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:32:15.950119 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:32:15.950317 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:32:15.950340 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:32:15.960406 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:32:15.960546 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:32:15.960582 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:32:15.960748 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 20:32:15.960778 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:32:15.960945 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:32:15.960972 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:32:15.961129 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 13 20:32:15.961156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:32:15.964214 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:32:15.964314 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:32:16.009771 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:32:16.009845 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:32:16.010346 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:32:16.010519 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:32:16.010558 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:32:16.015392 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:32:16.026147 systemd[1]: Switching root. Jan 13 20:32:16.049925 systemd-journald[217]: Journal stopped Jan 13 20:32:11.728053 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025 Jan 13 20:32:11.728069 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 13 20:32:11.728075 kernel: Disabled fast string operations Jan 13 20:32:11.728079 kernel: BIOS-provided physical RAM map: Jan 13 20:32:11.728083 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jan 13 20:32:11.728087 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jan 13 20:32:11.728093 kernel: BIOS-e820: [mem 
0x00000000000dc000-0x00000000000fffff] reserved Jan 13 20:32:11.728097 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jan 13 20:32:11.728101 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jan 13 20:32:11.728105 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jan 13 20:32:11.728109 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jan 13 20:32:11.728113 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jan 13 20:32:11.728117 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jan 13 20:32:11.728121 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jan 13 20:32:11.728127 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jan 13 20:32:11.728132 kernel: NX (Execute Disable) protection: active Jan 13 20:32:11.728136 kernel: APIC: Static calls initialized Jan 13 20:32:11.728141 kernel: SMBIOS 2.7 present. Jan 13 20:32:11.728146 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jan 13 20:32:11.728150 kernel: vmware: hypercall mode: 0x00 Jan 13 20:32:11.728155 kernel: Hypervisor detected: VMware Jan 13 20:32:11.728159 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jan 13 20:32:11.728165 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jan 13 20:32:11.728170 kernel: vmware: using clock offset of 2552865280 ns Jan 13 20:32:11.728174 kernel: tsc: Detected 3408.000 MHz processor Jan 13 20:32:11.728179 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 20:32:11.728184 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 20:32:11.728189 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jan 13 20:32:11.728194 kernel: total RAM covered: 3072M Jan 13 20:32:11.728213 kernel: Found optimal setting for mtrr clean up Jan 13 20:32:11.728219 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jan 13 20:32:11.728224 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jan 13 20:32:11.728231 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 20:32:11.728236 kernel: Using GB pages for direct mapping Jan 13 20:32:11.728240 kernel: ACPI: Early table checksum verification disabled Jan 13 20:32:11.728245 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jan 13 20:32:11.728250 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jan 13 20:32:11.728254 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jan 13 20:32:11.728259 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jan 13 20:32:11.728264 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jan 13 20:32:11.728271 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jan 13 20:32:11.728276 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jan 13 
20:32:11.728281 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Jan 13 20:32:11.728286 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jan 13 20:32:11.728291 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jan 13 20:32:11.728296 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jan 13 20:32:11.728302 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jan 13 20:32:11.728307 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jan 13 20:32:11.728312 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jan 13 20:32:11.728317 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jan 13 20:32:11.728322 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jan 13 20:32:11.728326 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jan 13 20:32:11.728331 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jan 13 20:32:11.728336 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jan 13 20:32:11.728341 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jan 13 20:32:11.728347 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jan 13 20:32:11.728352 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jan 13 20:32:11.728357 kernel: system APIC only can use physical flat Jan 13 20:32:11.728362 kernel: APIC: Switched APIC routing to: physical flat Jan 13 20:32:11.728367 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 13 20:32:11.728372 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jan 13 20:32:11.728377 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jan 13 20:32:11.728381 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jan 13 20:32:11.728386 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jan 13 
20:32:11.728391 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jan 13 20:32:11.728397 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jan 13 20:32:11.728402 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jan 13 20:32:11.728406 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jan 13 20:32:11.728411 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jan 13 20:32:11.728416 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jan 13 20:32:11.728421 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jan 13 20:32:11.728425 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jan 13 20:32:11.728430 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jan 13 20:32:11.728435 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jan 13 20:32:11.728440 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jan 13 20:32:11.728445 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jan 13 20:32:11.728450 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jan 13 20:32:11.728455 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jan 13 20:32:11.728460 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jan 13 20:32:11.728465 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jan 13 20:32:11.728469 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jan 13 20:32:11.728474 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jan 13 20:32:11.728479 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jan 13 20:32:11.728484 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jan 13 20:32:11.728488 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jan 13 20:32:11.728494 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jan 13 20:32:11.728499 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jan 13 20:32:11.728504 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jan 13 20:32:11.728508 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Jan 13 20:32:11.728513 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jan 13 20:32:11.728518 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jan 13 20:32:11.728523 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jan 13 20:32:11.728528 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jan 13 20:32:11.728532 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jan 13 20:32:11.728537 
kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jan 13 20:32:11.728543 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jan 13 20:32:11.728548 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jan 13 20:32:11.728552 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jan 13 20:32:11.728557 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jan 13 20:32:11.728562 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jan 13 20:32:11.728567 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jan 13 20:32:11.728572 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jan 13 20:32:11.728576 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jan 13 20:32:11.728581 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jan 13 20:32:11.728586 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jan 13 20:32:11.728592 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jan 13 20:32:11.728596 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jan 13 20:32:11.728601 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jan 13 20:32:11.728606 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jan 13 20:32:11.728611 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jan 13 20:32:11.728616 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jan 13 20:32:11.728620 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jan 13 20:32:11.728625 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Jan 13 20:32:11.728630 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jan 13 20:32:11.728635 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jan 13 20:32:11.728639 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jan 13 20:32:11.728645 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jan 13 20:32:11.728650 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jan 13 20:32:11.728658 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jan 13 20:32:11.728664 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jan 13 20:32:11.728669 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jan 13 20:32:11.728674 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jan 13 20:32:11.728679 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jan 13 20:32:11.728685 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jan 13 20:32:11.728691 kernel: SRAT: PXM 0 
-> APIC 0x82 -> Node 0 Jan 13 20:32:11.728696 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jan 13 20:32:11.728701 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jan 13 20:32:11.728706 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jan 13 20:32:11.728711 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jan 13 20:32:11.728716 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jan 13 20:32:11.728721 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jan 13 20:32:11.728726 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jan 13 20:32:11.728731 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jan 13 20:32:11.728737 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jan 13 20:32:11.728743 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jan 13 20:32:11.728748 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jan 13 20:32:11.728753 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jan 13 20:32:11.728758 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jan 13 20:32:11.728763 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jan 13 20:32:11.728768 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jan 13 20:32:11.728773 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jan 13 20:32:11.728779 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jan 13 20:32:11.728784 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Jan 13 20:32:11.728789 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jan 13 20:32:11.728795 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jan 13 20:32:11.728800 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jan 13 20:32:11.728805 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jan 13 20:32:11.728810 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jan 13 20:32:11.728815 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jan 13 20:32:11.728820 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jan 13 20:32:11.728826 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jan 13 20:32:11.728831 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jan 13 20:32:11.728836 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jan 13 20:32:11.728841 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jan 13 20:32:11.728846 kernel: SRAT: PXM 0 -> APIC 0xbe -> 
Node 0 Jan 13 20:32:11.728852 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jan 13 20:32:11.728857 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jan 13 20:32:11.728862 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jan 13 20:32:11.728867 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jan 13 20:32:11.728872 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jan 13 20:32:11.728877 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jan 13 20:32:11.728882 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jan 13 20:32:11.728887 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jan 13 20:32:11.728893 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jan 13 20:32:11.728898 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jan 13 20:32:11.728904 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jan 13 20:32:11.728909 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Jan 13 20:32:11.728914 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jan 13 20:32:11.728919 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jan 13 20:32:11.728924 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jan 13 20:32:11.728929 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jan 13 20:32:11.728934 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jan 13 20:32:11.728939 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jan 13 20:32:11.728944 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jan 13 20:32:11.728950 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jan 13 20:32:11.728956 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jan 13 20:32:11.728961 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jan 13 20:32:11.728966 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jan 13 20:32:11.728971 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jan 13 20:32:11.728976 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jan 13 20:32:11.728981 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jan 13 20:32:11.728987 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jan 13 20:32:11.728992 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jan 13 20:32:11.728997 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jan 13 20:32:11.729002 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jan 13 
20:32:11.729008 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jan 13 20:32:11.729013 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jan 13 20:32:11.729018 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 13 20:32:11.729024 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 13 20:32:11.729029 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jan 13 20:32:11.729034 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jan 13 20:32:11.729040 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jan 13 20:32:11.729045 kernel: Zone ranges: Jan 13 20:32:11.729050 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 20:32:11.729055 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jan 13 20:32:11.729062 kernel: Normal empty Jan 13 20:32:11.729067 kernel: Movable zone start for each node Jan 13 20:32:11.729072 kernel: Early memory node ranges Jan 13 20:32:11.729077 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jan 13 20:32:11.729082 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jan 13 20:32:11.729088 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jan 13 20:32:11.729093 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jan 13 20:32:11.729098 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 20:32:11.729104 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jan 13 20:32:11.729110 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jan 13 20:32:11.729115 kernel: ACPI: PM-Timer IO Port: 0x1008 Jan 13 20:32:11.729121 kernel: system APIC only can use physical flat Jan 13 20:32:11.729126 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jan 13 20:32:11.729131 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 13 20:32:11.729136 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 13 20:32:11.729141 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x03] high edge lint[0x1]) Jan 13 20:32:11.729146 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 13 20:32:11.729152 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 13 20:32:11.729157 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 13 20:32:11.729163 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 13 20:32:11.729168 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 13 20:32:11.729173 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 13 20:32:11.729178 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 13 20:32:11.729183 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 13 20:32:11.729189 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 13 20:32:11.729194 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 13 20:32:11.729204 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 13 20:32:11.729210 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 13 20:32:11.729216 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 13 20:32:11.729221 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jan 13 20:32:11.729226 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jan 13 20:32:11.729231 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jan 13 20:32:11.729236 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jan 13 20:32:11.729241 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jan 13 20:32:11.729247 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jan 13 20:32:11.729252 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jan 13 20:32:11.729257 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jan 13 20:32:11.729262 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jan 13 20:32:11.729268 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jan 13 20:32:11.729273 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x1b] high edge lint[0x1]) Jan 13 20:32:11.729279 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jan 13 20:32:11.729284 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jan 13 20:32:11.729289 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jan 13 20:32:11.729294 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jan 13 20:32:11.729299 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jan 13 20:32:11.729304 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jan 13 20:32:11.729309 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jan 13 20:32:11.729314 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jan 13 20:32:11.729320 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jan 13 20:32:11.729326 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jan 13 20:32:11.729331 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jan 13 20:32:11.729336 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jan 13 20:32:11.729341 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jan 13 20:32:11.729346 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jan 13 20:32:11.729351 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jan 13 20:32:11.729357 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jan 13 20:32:11.729362 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jan 13 20:32:11.729367 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jan 13 20:32:11.729373 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jan 13 20:32:11.729378 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jan 13 20:32:11.729383 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jan 13 20:32:11.729388 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jan 13 20:32:11.729393 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jan 13 20:32:11.729398 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x33] high edge lint[0x1]) Jan 13 20:32:11.729404 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jan 13 20:32:11.729409 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jan 13 20:32:11.729414 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jan 13 20:32:11.729420 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jan 13 20:32:11.729425 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jan 13 20:32:11.729430 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jan 13 20:32:11.729435 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jan 13 20:32:11.729440 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jan 13 20:32:11.729446 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jan 13 20:32:11.729451 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jan 13 20:32:11.729456 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jan 13 20:32:11.729461 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jan 13 20:32:11.729466 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jan 13 20:32:11.729472 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jan 13 20:32:11.729478 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jan 13 20:32:11.729483 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jan 13 20:32:11.729488 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jan 13 20:32:11.729493 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jan 13 20:32:11.729499 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jan 13 20:32:11.729504 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jan 13 20:32:11.729509 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jan 13 20:32:11.729514 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jan 13 20:32:11.729519 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jan 13 20:32:11.729525 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x4b] high edge lint[0x1]) Jan 13 20:32:11.729531 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jan 13 20:32:11.729536 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jan 13 20:32:11.729541 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jan 13 20:32:11.729546 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jan 13 20:32:11.729551 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jan 13 20:32:11.729556 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jan 13 20:32:11.729561 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jan 13 20:32:11.729566 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jan 13 20:32:11.729573 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jan 13 20:32:11.729587 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jan 13 20:32:11.729594 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jan 13 20:32:11.729599 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jan 13 20:32:11.729604 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jan 13 20:32:11.729609 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jan 13 20:32:11.729614 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jan 13 20:32:11.729619 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jan 13 20:32:11.729624 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jan 13 20:32:11.729629 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jan 13 20:32:11.729636 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jan 13 20:32:11.729642 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jan 13 20:32:11.729647 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jan 13 20:32:11.729652 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jan 13 20:32:11.729657 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jan 13 20:32:11.729662 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x63] high edge lint[0x1]) Jan 13 20:32:11.729667 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jan 13 20:32:11.729673 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jan 13 20:32:11.729678 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jan 13 20:32:11.729683 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jan 13 20:32:11.729689 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jan 13 20:32:11.729694 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jan 13 20:32:11.729699 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jan 13 20:32:11.729704 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jan 13 20:32:11.729710 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jan 13 20:32:11.729715 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jan 13 20:32:11.729720 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jan 13 20:32:11.729725 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jan 13 20:32:11.729730 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jan 13 20:32:11.729735 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jan 13 20:32:11.729741 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jan 13 20:32:11.729746 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jan 13 20:32:11.729751 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jan 13 20:32:11.729756 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jan 13 20:32:11.729762 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jan 13 20:32:11.729767 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jan 13 20:32:11.729772 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jan 13 20:32:11.729777 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jan 13 20:32:11.729782 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jan 13 20:32:11.729788 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x7b] high edge lint[0x1]) Jan 13 20:32:11.729794 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jan 13 20:32:11.729799 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jan 13 20:32:11.729804 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jan 13 20:32:11.729809 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jan 13 20:32:11.729814 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jan 13 20:32:11.729819 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jan 13 20:32:11.729825 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 20:32:11.729830 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jan 13 20:32:11.729835 kernel: TSC deadline timer available Jan 13 20:32:11.729842 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jan 13 20:32:11.729847 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jan 13 20:32:11.729852 kernel: Booting paravirtualized kernel on VMware hypervisor Jan 13 20:32:11.729858 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 20:32:11.729863 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jan 13 20:32:11.729868 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 13 20:32:11.729873 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 13 20:32:11.729879 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jan 13 20:32:11.729888 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jan 13 20:32:11.729896 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jan 13 20:32:11.729901 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jan 13 20:32:11.729906 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jan 13 20:32:11.729918 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jan 13 20:32:11.729925 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jan 13 
20:32:11.729930 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jan 13 20:32:11.729935 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jan 13 20:32:11.729941 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jan 13 20:32:11.729947 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jan 13 20:32:11.729953 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jan 13 20:32:11.729958 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jan 13 20:32:11.729964 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jan 13 20:32:11.729969 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jan 13 20:32:11.729975 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jan 13 20:32:11.729981 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 13 20:32:11.729987 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jan 13 20:32:11.729993 kernel: random: crng init done Jan 13 20:32:11.729998 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jan 13 20:32:11.730004 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jan 13 20:32:11.730010 kernel: printk: log_buf_len min size: 262144 bytes Jan 13 20:32:11.730015 kernel: printk: log_buf_len: 1048576 bytes Jan 13 20:32:11.730021 kernel: printk: early log buf free: 239648(91%) Jan 13 20:32:11.730026 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:32:11.730032 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 13 20:32:11.730037 kernel: Fallback order for Node 0: 0 Jan 13 20:32:11.730045 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jan 13 20:32:11.730050 kernel: Policy zone: DMA32 Jan 13 20:32:11.730056 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 20:32:11.730062 kernel: Memory: 1934328K/2096628K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 162040K reserved, 0K cma-reserved) Jan 13 20:32:11.730068 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jan 13 20:32:11.730074 kernel: ftrace: allocating 37890 entries in 149 pages Jan 13 20:32:11.730081 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 20:32:11.730086 kernel: Dynamic Preempt: voluntary Jan 13 20:32:11.730092 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 20:32:11.730098 kernel: rcu: RCU event tracing is enabled. Jan 13 20:32:11.730103 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jan 13 20:32:11.730109 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 20:32:11.730115 kernel: Rude variant of Tasks RCU enabled. Jan 13 20:32:11.730120 kernel: Tracing variant of Tasks RCU enabled. Jan 13 20:32:11.730126 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 13 20:32:11.730132 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jan 13 20:32:11.730138 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jan 13 20:32:11.730144 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jan 13 20:32:11.730149 kernel: Console: colour VGA+ 80x25 Jan 13 20:32:11.730155 kernel: printk: console [tty0] enabled Jan 13 20:32:11.730160 kernel: printk: console [ttyS0] enabled Jan 13 20:32:11.730166 kernel: ACPI: Core revision 20230628 Jan 13 20:32:11.730171 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jan 13 20:32:11.730177 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 20:32:11.730182 kernel: x2apic enabled Jan 13 20:32:11.730189 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 20:32:11.730231 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 13 20:32:11.730240 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 13 20:32:11.730245 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jan 13 20:32:11.730251 kernel: Disabled fast string operations Jan 13 20:32:11.730256 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 13 20:32:11.730262 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 13 20:32:11.730268 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 20:32:11.730273 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 13 20:32:11.730281 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 13 20:32:11.730287 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 13 20:32:11.730292 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 20:32:11.730298 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 13 20:32:11.730303 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 13 20:32:11.730309 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 20:32:11.730315 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 20:32:11.730320 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 13 20:32:11.730327 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 13 20:32:11.730332 kernel: GDS: Unknown: Dependent on hypervisor status Jan 13 20:32:11.730338 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 20:32:11.730344 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 20:32:11.730349 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 20:32:11.730355 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 20:32:11.730360 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jan 13 20:32:11.730366 kernel: Freeing SMP alternatives memory: 32K Jan 13 20:32:11.730372 kernel: pid_max: default: 131072 minimum: 1024 Jan 13 20:32:11.730378 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:32:11.730384 kernel: landlock: Up and running. Jan 13 20:32:11.730390 kernel: SELinux: Initializing. Jan 13 20:32:11.730395 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 13 20:32:11.730401 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 13 20:32:11.730407 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 13 20:32:11.730412 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 13 20:32:11.730418 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 13 20:32:11.730424 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 13 20:32:11.730430 kernel: Performance Events: Skylake events, core PMU driver. Jan 13 20:32:11.730436 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jan 13 20:32:11.730442 kernel: core: CPUID marked event: 'instructions' unavailable Jan 13 20:32:11.730447 kernel: core: CPUID marked event: 'bus cycles' unavailable Jan 13 20:32:11.730452 kernel: core: CPUID marked event: 'cache references' unavailable Jan 13 20:32:11.730458 kernel: core: CPUID marked event: 'cache misses' unavailable Jan 13 20:32:11.730463 kernel: core: CPUID marked event: 'branch instructions' unavailable Jan 13 20:32:11.730470 kernel: core: CPUID marked event: 'branch misses' unavailable Jan 13 20:32:11.730476 kernel: ... version: 1 Jan 13 20:32:11.730482 kernel: ... bit width: 48 Jan 13 20:32:11.730487 kernel: ... generic registers: 4 Jan 13 20:32:11.730493 kernel: ... value mask: 0000ffffffffffff Jan 13 20:32:11.730498 kernel: ... 
max period: 000000007fffffff Jan 13 20:32:11.730504 kernel: ... fixed-purpose events: 0 Jan 13 20:32:11.730509 kernel: ... event mask: 000000000000000f Jan 13 20:32:11.730515 kernel: signal: max sigframe size: 1776 Jan 13 20:32:11.730521 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:32:11.730526 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:32:11.730533 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 13 20:32:11.730538 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:32:11.730544 kernel: smpboot: x86: Booting SMP configuration: Jan 13 20:32:11.730550 kernel: .... node #0, CPUs: #1 Jan 13 20:32:11.730555 kernel: Disabled fast string operations Jan 13 20:32:11.730561 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jan 13 20:32:11.730566 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 13 20:32:11.730572 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 20:32:11.730577 kernel: smpboot: Max logical packages: 128 Jan 13 20:32:11.730583 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jan 13 20:32:11.730589 kernel: devtmpfs: initialized Jan 13 20:32:11.730595 kernel: x86/mm: Memory block size: 128MB Jan 13 20:32:11.730600 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jan 13 20:32:11.730606 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:32:11.730612 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jan 13 20:32:11.730617 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:32:11.730623 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:32:11.730628 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:32:11.730634 kernel: audit: type=2000 audit(1736800330.066:1): state=initialized audit_enabled=0 res=1 Jan 13 20:32:11.730640 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:32:11.730646 
kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 20:32:11.730651 kernel: cpuidle: using governor menu Jan 13 20:32:11.730657 kernel: Simple Boot Flag at 0x36 set to 0x80 Jan 13 20:32:11.730663 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:32:11.730668 kernel: dca service started, version 1.12.1 Jan 13 20:32:11.730674 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jan 13 20:32:11.730679 kernel: PCI: Using configuration type 1 for base access Jan 13 20:32:11.730686 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 13 20:32:11.730692 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:32:11.730698 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:32:11.730703 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:32:11.730709 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:32:11.730715 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:32:11.730720 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:32:11.730726 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:32:11.730731 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:32:11.730738 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 20:32:11.730743 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 13 20:32:11.730749 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 20:32:11.730755 kernel: ACPI: Interpreter enabled Jan 13 20:32:11.730760 kernel: ACPI: PM: (supports S0 S1 S5) Jan 13 20:32:11.730766 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 20:32:11.730771 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 20:32:11.730777 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 20:32:11.730782 kernel: ACPI: Enabled 4 
GPEs in block 00 to 0F Jan 13 20:32:11.730789 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jan 13 20:32:11.730864 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:32:11.730958 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jan 13 20:32:11.731009 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 13 20:32:11.731017 kernel: PCI host bridge to bus 0000:00 Jan 13 20:32:11.731064 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 20:32:11.731107 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jan 13 20:32:11.731151 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 13 20:32:11.731193 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 20:32:11.731243 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jan 13 20:32:11.731285 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jan 13 20:32:11.731341 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jan 13 20:32:11.731395 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jan 13 20:32:11.731452 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jan 13 20:32:11.731505 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jan 13 20:32:11.731555 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jan 13 20:32:11.731603 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 13 20:32:11.731650 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 13 20:32:11.731699 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 13 20:32:11.731746 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 13 20:32:11.731802 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jan 13 20:32:11.731850 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed 
by PIIX4 ACPI Jan 13 20:32:11.731897 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jan 13 20:32:11.731948 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jan 13 20:32:11.731996 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jan 13 20:32:11.732044 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jan 13 20:32:11.732097 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jan 13 20:32:11.732145 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jan 13 20:32:11.732193 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jan 13 20:32:11.732546 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jan 13 20:32:11.732597 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jan 13 20:32:11.732646 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 20:32:11.732699 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jan 13 20:32:11.732755 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.732804 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.732856 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.732908 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.733018 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.733067 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.733124 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.733172 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.733274 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.733340 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.734291 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.734346 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot 
D3cold Jan 13 20:32:11.734403 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.734453 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.734506 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.734555 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.734607 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.734655 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.734709 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.734757 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.734809 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.734857 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.734910 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.734958 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.735012 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.735060 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.735111 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.735158 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737255 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.737316 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737377 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.737428 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737482 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.737531 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737585 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.737634 kernel: pci 0000:00:17.1: 
PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737689 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.737738 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737793 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.737842 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737893 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.737941 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.737993 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738044 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738095 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738143 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738202 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738255 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738308 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738360 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738411 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738459 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738511 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738559 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738613 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738669 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738724 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738772 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738824 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jan 13 
20:32:11.738873 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.738924 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.738975 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.739027 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jan 13 20:32:11.739075 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jan 13 20:32:11.739124 kernel: pci_bus 0000:01: extended config space not accessible Jan 13 20:32:11.739174 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 20:32:11.739967 kernel: pci_bus 0000:02: extended config space not accessible Jan 13 20:32:11.739979 kernel: acpiphp: Slot [32] registered Jan 13 20:32:11.739987 kernel: acpiphp: Slot [33] registered Jan 13 20:32:11.739993 kernel: acpiphp: Slot [34] registered Jan 13 20:32:11.739999 kernel: acpiphp: Slot [35] registered Jan 13 20:32:11.740004 kernel: acpiphp: Slot [36] registered Jan 13 20:32:11.740010 kernel: acpiphp: Slot [37] registered Jan 13 20:32:11.740015 kernel: acpiphp: Slot [38] registered Jan 13 20:32:11.740021 kernel: acpiphp: Slot [39] registered Jan 13 20:32:11.740026 kernel: acpiphp: Slot [40] registered Jan 13 20:32:11.740032 kernel: acpiphp: Slot [41] registered Jan 13 20:32:11.740039 kernel: acpiphp: Slot [42] registered Jan 13 20:32:11.740044 kernel: acpiphp: Slot [43] registered Jan 13 20:32:11.740050 kernel: acpiphp: Slot [44] registered Jan 13 20:32:11.740055 kernel: acpiphp: Slot [45] registered Jan 13 20:32:11.740061 kernel: acpiphp: Slot [46] registered Jan 13 20:32:11.740066 kernel: acpiphp: Slot [47] registered Jan 13 20:32:11.740076 kernel: acpiphp: Slot [48] registered Jan 13 20:32:11.740082 kernel: acpiphp: Slot [49] registered Jan 13 20:32:11.740088 kernel: acpiphp: Slot [50] registered Jan 13 20:32:11.740095 kernel: acpiphp: Slot [51] registered Jan 13 20:32:11.740100 kernel: acpiphp: Slot [52] registered Jan 13 20:32:11.740106 kernel: acpiphp: Slot [53] registered 
Jan 13 20:32:11.740111 kernel: acpiphp: Slot [54] registered Jan 13 20:32:11.740117 kernel: acpiphp: Slot [55] registered Jan 13 20:32:11.740122 kernel: acpiphp: Slot [56] registered Jan 13 20:32:11.740128 kernel: acpiphp: Slot [57] registered Jan 13 20:32:11.740133 kernel: acpiphp: Slot [58] registered Jan 13 20:32:11.740139 kernel: acpiphp: Slot [59] registered Jan 13 20:32:11.740144 kernel: acpiphp: Slot [60] registered Jan 13 20:32:11.740151 kernel: acpiphp: Slot [61] registered Jan 13 20:32:11.740156 kernel: acpiphp: Slot [62] registered Jan 13 20:32:11.740162 kernel: acpiphp: Slot [63] registered Jan 13 20:32:11.740255 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jan 13 20:32:11.740309 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 13 20:32:11.740358 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 13 20:32:11.740406 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 13 20:32:11.740454 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jan 13 20:32:11.740505 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jan 13 20:32:11.740552 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jan 13 20:32:11.740600 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jan 13 20:32:11.740647 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jan 13 20:32:11.740701 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jan 13 20:32:11.740751 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jan 13 20:32:11.740800 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jan 13 20:32:11.740853 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 13 20:32:11.740926 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 13 
20:32:11.741018 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Jan 13 20:32:11.741082 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 13 20:32:11.741146 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 13 20:32:11.741194 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 13 20:32:11.741250 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 13 20:32:11.741299 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 13 20:32:11.741350 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 13 20:32:11.741397 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 13 20:32:11.741445 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 13 20:32:11.741493 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 13 20:32:11.741540 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 13 20:32:11.741588 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 13 20:32:11.741636 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 13 20:32:11.741687 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 13 20:32:11.741794 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 13 20:32:11.741911 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 13 20:32:11.741964 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 13 20:32:11.742016 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 13 20:32:11.742071 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 13 20:32:11.742123 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 13 20:32:11.742173 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 13 20:32:11.742276 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 13 20:32:11.742327 kernel: pci 0000:00:15.6: bridge window [mem 
0xfbd00000-0xfbdfffff] Jan 13 20:32:11.742377 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 13 20:32:11.742426 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 13 20:32:11.742476 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 13 20:32:11.742528 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 13 20:32:11.742584 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jan 13 20:32:11.742636 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jan 13 20:32:11.742688 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jan 13 20:32:11.742739 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jan 13 20:32:11.742790 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jan 13 20:32:11.742841 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 13 20:32:11.742913 kernel: pci 0000:0b:00.0: supports D1 D2 Jan 13 20:32:11.742981 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 20:32:11.743032 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 13 20:32:11.743082 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 13 20:32:11.743132 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 13 20:32:11.743181 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 13 20:32:11.743245 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 13 20:32:11.743296 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 13 20:32:11.743350 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 13 20:32:11.743400 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 13 20:32:11.743450 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 13 20:32:11.743500 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 13 20:32:11.743549 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 13 20:32:11.743599 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 13 20:32:11.743649 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 13 20:32:11.743702 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 13 20:32:11.743752 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 13 20:32:11.743802 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 13 20:32:11.743852 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 13 20:32:11.743905 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 13 20:32:11.743956 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 13 20:32:11.744006 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 13 20:32:11.744056 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 13 20:32:11.744109 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 13 20:32:11.744159 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 13 20:32:11.744487 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 13 20:32:11.744545 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 13 20:32:11.744596 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 13 20:32:11.744646 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 13 20:32:11.744695 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 13 20:32:11.744744 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 13 20:32:11.744795 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 13 20:32:11.744844 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 13 20:32:11.744893 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 13 20:32:11.744957 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 13 20:32:11.745042 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 13 20:32:11.745094 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 13 20:32:11.745145 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 13 20:32:11.745228 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 13 20:32:11.745285 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 13 20:32:11.745336 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 13 20:32:11.745386 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 13 20:32:11.745436 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 13 20:32:11.745485 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 13 20:32:11.745536 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 13 20:32:11.745585 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 13 20:32:11.745635 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 13 20:32:11.745688 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 13 20:32:11.745738 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 13 20:32:11.745787 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 13 20:32:11.745837 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 13 20:32:11.745886 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 13 20:32:11.745935 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 13 20:32:11.745985 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 13 20:32:11.746034 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 13 20:32:11.746086 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 13 20:32:11.746136 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 13 20:32:11.746186 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 13 20:32:11.746254 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 13 20:32:11.746306 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 13 20:32:11.746358 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 13 20:32:11.746407 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 13 20:32:11.746461 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 13 20:32:11.746510 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 13 20:32:11.746560 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 13 20:32:11.746610 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 13 20:32:11.746660 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 13 20:32:11.746710 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 13 20:32:11.746760 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 13 20:32:11.746809 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 13 20:32:11.746861 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 13 
20:32:11.746920 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 13 20:32:11.746971 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 13 20:32:11.747021 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 13 20:32:11.747071 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 13 20:32:11.747120 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 13 20:32:11.747170 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 13 20:32:11.747289 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 13 20:32:11.747342 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 13 20:32:11.747391 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 13 20:32:11.747438 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 13 20:32:11.747485 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 13 20:32:11.747493 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jan 13 20:32:11.747500 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jan 13 20:32:11.747506 kernel: ACPI: PCI: Interrupt link LNKB disabled Jan 13 20:32:11.747511 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 20:32:11.747517 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jan 13 20:32:11.747525 kernel: iommu: Default domain type: Translated Jan 13 20:32:11.747531 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 20:32:11.747536 kernel: PCI: Using ACPI for IRQ routing Jan 13 20:32:11.747542 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 20:32:11.747548 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jan 13 20:32:11.747554 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jan 13 20:32:11.747599 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jan 13 20:32:11.747647 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Jan 13 20:32:11.747694 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 20:32:11.747704 kernel: vgaarb: loaded Jan 13 20:32:11.747710 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jan 13 20:32:11.747716 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jan 13 20:32:11.747721 kernel: clocksource: Switched to clocksource tsc-early Jan 13 20:32:11.747727 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:32:11.747733 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:32:11.747742 kernel: pnp: PnP ACPI init Jan 13 20:32:11.747830 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jan 13 20:32:11.747881 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jan 13 20:32:11.747926 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jan 13 20:32:11.747974 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jan 13 20:32:11.748020 kernel: pnp 00:06: [dma 2] Jan 13 20:32:11.748068 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jan 13 20:32:11.748112 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jan 13 20:32:11.748159 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jan 13 20:32:11.748167 kernel: pnp: PnP ACPI: found 8 devices Jan 13 20:32:11.748173 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 20:32:11.748179 kernel: NET: Registered PF_INET protocol family Jan 13 20:32:11.748185 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 20:32:11.748190 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 13 20:32:11.748236 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:32:11.748243 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 13 20:32:11.748249 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 20:32:11.748257 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 13 20:32:11.748279 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 13 20:32:11.748285 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 13 20:32:11.748310 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:32:11.748316 kernel: NET: Registered PF_XDP protocol family Jan 13 20:32:11.748397 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jan 13 20:32:11.748447 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 13 20:32:11.748496 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 13 20:32:11.748548 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 13 20:32:11.748597 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 13 20:32:11.748646 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 13 20:32:11.748695 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jan 13 20:32:11.748744 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 13 20:32:11.748796 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 13 20:32:11.748844 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 13 20:32:11.748893 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 13 20:32:11.748941 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 13 20:32:11.748989 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 13 
20:32:11.749038 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 13 20:32:11.749089 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 13 20:32:11.749138 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 13 20:32:11.749186 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 13 20:32:11.749280 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 13 20:32:11.749330 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 13 20:32:11.749379 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 13 20:32:11.749430 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 13 20:32:11.749478 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 13 20:32:11.749526 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jan 13 20:32:11.749574 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jan 13 20:32:11.749621 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jan 13 20:32:11.749669 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.749719 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.749769 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.749817 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.749865 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.749916 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.749964 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750011 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 
13 20:32:11.750059 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750107 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750157 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750222 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750273 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750331 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750380 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750429 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750476 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750525 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750576 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750623 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750671 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750719 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750766 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750815 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750863 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.750948 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.750999 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.751047 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.751095 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.751142 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.751189 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.752270 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.752336 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.752388 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.752441 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.752490 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.752539 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.752588 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.752635 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.752683 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.752775 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.752841 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.752893 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.752976 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.753026 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.753075 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.753124 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.753173 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754185 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.754257 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754310 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.754363 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754412 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Jan 13 20:32:11.754461 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754509 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.754558 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754606 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.754653 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754701 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.754749 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754797 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.754847 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.754895 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.754943 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755006 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755054 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755102 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755151 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755211 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755261 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755312 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755359 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755407 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755454 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755501 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755549 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755597 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755645 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755693 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755741 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755791 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755839 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755887 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 13 20:32:11.755935 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 13 20:32:11.755986 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 20:32:11.756034 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jan 13 20:32:11.756083 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 13 20:32:11.756147 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 13 20:32:11.756663 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 13 20:32:11.756733 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jan 13 20:32:11.756788 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 13 20:32:11.756839 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 13 20:32:11.756896 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 13 20:32:11.756969 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jan 13 20:32:11.757019 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 13 20:32:11.757066 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 13 20:32:11.757114 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 13 20:32:11.757164 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 13 
20:32:11.757225 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 13 20:32:11.757276 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 13 20:32:11.757324 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 13 20:32:11.757371 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 13 20:32:11.757419 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 13 20:32:11.757467 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 13 20:32:11.757515 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 13 20:32:11.757562 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 13 20:32:11.757610 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 13 20:32:11.757661 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 13 20:32:11.757712 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 13 20:32:11.757761 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 13 20:32:11.757809 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 13 20:32:11.757856 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 13 20:32:11.757913 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 13 20:32:11.757969 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 13 20:32:11.758018 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 13 20:32:11.758066 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 13 20:32:11.758114 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 13 20:32:11.758178 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jan 13 20:32:11.758280 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 13 20:32:11.758329 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 13 20:32:11.758379 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Jan 13 20:32:11.758427 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jan 13 20:32:11.758478 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 13 20:32:11.758526 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 13 20:32:11.758574 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 13 20:32:11.758622 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 13 20:32:11.758671 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 13 20:32:11.758719 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 13 20:32:11.758766 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 13 20:32:11.758814 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 13 20:32:11.758862 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 13 20:32:11.758961 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 13 20:32:11.759010 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 13 20:32:11.759058 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 13 20:32:11.759106 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 13 20:32:11.759154 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 13 20:32:11.759231 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 13 20:32:11.759281 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 13 20:32:11.759330 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 13 20:32:11.759378 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 13 20:32:11.759426 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 13 20:32:11.759477 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 13 20:32:11.759525 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 13 20:32:11.759573 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Jan 13 20:32:11.759621 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 13 20:32:11.759669 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 13 20:32:11.759717 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 13 20:32:11.759765 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 13 20:32:11.759813 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 13 20:32:11.759860 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 13 20:32:11.759949 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 13 20:32:11.759998 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 13 20:32:11.760046 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 13 20:32:11.760094 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 13 20:32:11.760142 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 13 20:32:11.760190 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 13 20:32:11.760257 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 13 20:32:11.760306 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 13 20:32:11.760354 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 13 20:32:11.760402 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 13 20:32:11.760453 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 13 20:32:11.760501 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 13 20:32:11.760549 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 13 20:32:11.760597 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 13 20:32:11.760645 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 13 20:32:11.760693 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 13 
20:32:11.760741 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 13 20:32:11.760789 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 13 20:32:11.760837 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 13 20:32:11.760911 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 13 20:32:11.760974 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 13 20:32:11.761022 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 13 20:32:11.761079 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 13 20:32:11.761128 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 13 20:32:11.761176 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 13 20:32:11.761243 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 13 20:32:11.761294 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 13 20:32:11.761343 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 13 20:32:11.761393 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 13 20:32:11.761450 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 13 20:32:11.761502 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 13 20:32:11.761551 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 13 20:32:11.761599 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 13 20:32:11.761648 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 13 20:32:11.761696 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 13 20:32:11.761744 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 13 20:32:11.761793 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 13 20:32:11.761841 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 13 20:32:11.761893 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Jan 13 20:32:11.761973 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 13 20:32:11.762021 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 13 20:32:11.762069 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 13 20:32:11.762117 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 13 20:32:11.762166 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 13 20:32:11.762251 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 13 20:32:11.762300 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 13 20:32:11.762349 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 13 20:32:11.762397 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 13 20:32:11.762448 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jan 13 20:32:11.762492 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 13 20:32:11.762534 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 13 20:32:11.762577 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jan 13 20:32:11.762630 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jan 13 20:32:11.762680 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jan 13 20:32:11.762724 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jan 13 20:32:11.762804 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 13 20:32:11.762851 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jan 13 20:32:11.762916 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 13 20:32:11.762990 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 13 20:32:11.763034 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jan 13 20:32:11.763079 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jan 13 20:32:11.763127 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jan 13 20:32:11.763175 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jan 13 20:32:11.763237 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jan 13 20:32:11.763287 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jan 13 20:32:11.763332 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jan 13 20:32:11.763377 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jan 13 20:32:11.763427 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jan 13 20:32:11.763472 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jan 13 20:32:11.763525 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jan 13 20:32:11.763580 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jan 13 20:32:11.763625 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jan 13 20:32:11.763672 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jan 13 20:32:11.763717 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 13 20:32:11.763764 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jan 13 20:32:11.763809 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jan 13 20:32:11.763859 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jan 13 20:32:11.763924 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jan 13 20:32:11.763975 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jan 13 20:32:11.764029 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jan 13 20:32:11.764080 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jan 13 20:32:11.764133 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jan 13 20:32:11.764179 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jan 13 20:32:11.764256 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Jan 13 20:32:11.764303 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jan 13 20:32:11.764350 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jan 13 20:32:11.764402 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jan 13 20:32:11.764449 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jan 13 20:32:11.764500 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jan 13 20:32:11.764550 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jan 13 20:32:11.764595 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 13 20:32:11.764645 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jan 13 20:32:11.764691 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 13 20:32:11.764739 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jan 13 20:32:11.764788 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jan 13 20:32:11.764837 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jan 13 20:32:11.764883 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jan 13 20:32:11.764932 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jan 13 20:32:11.764979 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 13 20:32:11.765028 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jan 13 20:32:11.765076 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jan 13 20:32:11.765122 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 13 20:32:11.765171 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jan 13 20:32:11.765229 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jan 13 20:32:11.765276 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jan 13 20:32:11.765326 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Jan 13 20:32:11.765372 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jan 13 20:32:11.765525 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jan 13 20:32:11.765753 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jan 13 20:32:11.765803 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 13 20:32:11.765881 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jan 13 20:32:11.765930 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 13 20:32:11.765980 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jan 13 20:32:11.766026 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jan 13 20:32:11.766079 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jan 13 20:32:11.766125 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jan 13 20:32:11.766174 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jan 13 20:32:11.766231 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 13 20:32:11.766285 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 13 20:32:11.766333 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jan 13 20:32:11.766379 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jan 13 20:32:11.766427 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jan 13 20:32:11.766473 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jan 13 20:32:11.766518 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jan 13 20:32:11.766567 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jan 13 20:32:11.766612 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jan 13 20:32:11.766667 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jan 13 20:32:11.766714 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Jan 13 20:32:11.766763 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jan 13 20:32:11.766809 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jan 13 20:32:11.766858 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jan 13 20:32:11.766922 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jan 13 20:32:11.766974 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jan 13 20:32:11.767036 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jan 13 20:32:11.767100 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jan 13 20:32:11.767146 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 13 20:32:11.767204 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 13 20:32:11.767215 kernel: PCI: CLS 32 bytes, default 64 Jan 13 20:32:11.767221 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 13 20:32:11.767229 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 13 20:32:11.767236 kernel: clocksource: Switched to clocksource tsc Jan 13 20:32:11.767242 kernel: Initialise system trusted keyrings Jan 13 20:32:11.767249 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 13 20:32:11.767256 kernel: Key type asymmetric registered Jan 13 20:32:11.767261 kernel: Asymmetric key parser 'x509' registered Jan 13 20:32:11.767267 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 20:32:11.767273 kernel: io scheduler mq-deadline registered Jan 13 20:32:11.767279 kernel: io scheduler kyber registered Jan 13 20:32:11.767286 kernel: io scheduler bfq registered Jan 13 20:32:11.767336 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jan 13 20:32:11.767387 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.767453 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jan 13 20:32:11.767507 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.767557 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jan 13 20:32:11.767607 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.767657 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jan 13 20:32:11.767711 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.767767 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jan 13 20:32:11.767817 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.767866 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jan 13 20:32:11.767915 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.767967 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jan 13 20:32:11.768022 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.768077 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jan 13 20:32:11.768127 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.768176 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jan 13 20:32:11.768242 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.768292 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jan 13 20:32:11.768342 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.768391 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jan 13 20:32:11.768440 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.768495 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jan 13 20:32:11.768564 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.768619 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jan 13 20:32:11.768668 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.768717 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jan 13 20:32:11.768766 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.768815 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jan 13 20:32:11.768866 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.768916 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jan 13 20:32:11.768965 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.769015 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jan 13 20:32:11.769069 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.769144 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jan 13 20:32:11.769229 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.769286 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jan 13 20:32:11.769335 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.769383 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jan 13 20:32:11.769432 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.769482 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jan 13 20:32:11.769531 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.769582 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jan 13 20:32:11.769632 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.769681 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jan 13 20:32:11.769730 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.769780 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jan 13 20:32:11.769832 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.769880 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jan 13 20:32:11.769933 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.769981 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jan 13 20:32:11.770029 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.770078 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jan 13 20:32:11.770144 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.770203 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jan 13 20:32:11.770259 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.770309 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jan 13 20:32:11.770358 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.770414 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jan 13 20:32:11.770476 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.770526 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jan 13 20:32:11.770575 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.770624 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jan 13 20:32:11.770673 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 20:32:11.770684 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Jan 13 20:32:11.770691 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:32:11.770697 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 20:32:11.770703 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jan 13 20:32:11.770709 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 20:32:11.770715 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 20:32:11.770764 kernel: rtc_cmos 00:01: registered as rtc0 Jan 13 20:32:11.770810 kernel: rtc_cmos 00:01: setting system clock to 2025-01-13T20:32:11 UTC (1736800331) Jan 13 20:32:11.770857 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jan 13 20:32:11.770866 kernel: intel_pstate: CPU model not supported Jan 13 20:32:11.770872 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 20:32:11.770879 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:32:11.770889 kernel: Segment Routing with IPv6 Jan 13 20:32:11.770913 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:32:11.770920 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:32:11.770926 kernel: Key type dns_resolver registered Jan 13 20:32:11.770934 kernel: IPI shorthand broadcast: enabled Jan 13 20:32:11.770940 kernel: sched_clock: Marking stable (861003560, 220489294)->(1134660177, -53167323) Jan 13 20:32:11.770946 kernel: registered taskstats version 1 Jan 13 20:32:11.770952 kernel: Loading compiled-in X.509 certificates Jan 13 20:32:11.770958 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e' Jan 13 20:32:11.770964 kernel: Key type .fscrypt registered Jan 13 20:32:11.770984 kernel: Key type fscrypt-provisioning registered Jan 13 20:32:11.770990 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 20:32:11.770996 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:32:11.771003 kernel: ima: No architecture policies found Jan 13 20:32:11.771009 kernel: clk: Disabling unused clocks Jan 13 20:32:11.771015 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 13 20:32:11.771021 kernel: Write protecting the kernel read-only data: 38912k Jan 13 20:32:11.771027 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 13 20:32:11.771033 kernel: Run /init as init process Jan 13 20:32:11.771038 kernel: with arguments: Jan 13 20:32:11.771045 kernel: /init Jan 13 20:32:11.771050 kernel: with environment: Jan 13 20:32:11.771057 kernel: HOME=/ Jan 13 20:32:11.771063 kernel: TERM=linux Jan 13 20:32:11.771069 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:32:11.771076 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:32:11.771084 systemd[1]: Detected virtualization vmware. Jan 13 20:32:11.771090 systemd[1]: Detected architecture x86-64. Jan 13 20:32:11.771096 systemd[1]: Running in initrd. Jan 13 20:32:11.771102 systemd[1]: No hostname configured, using default hostname. Jan 13 20:32:11.771110 systemd[1]: Hostname set to . Jan 13 20:32:11.771116 systemd[1]: Initializing machine ID from random generator. Jan 13 20:32:11.771123 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:32:11.771129 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:32:11.771135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 13 20:32:11.771142 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:32:11.771148 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:32:11.771154 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:32:11.771161 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:32:11.771168 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:32:11.771175 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:32:11.771181 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:32:11.771187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:32:11.771193 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:32:11.771284 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:32:11.771293 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:32:11.771299 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:32:11.771305 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:32:11.771312 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:32:11.771318 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:32:11.771324 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:32:11.771330 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:32:11.771336 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:32:11.771342 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 13 20:32:11.771350 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:32:11.771356 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:32:11.771363 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:32:11.771369 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:32:11.771375 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:32:11.771381 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:32:11.771387 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:32:11.771393 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:32:11.771401 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:32:11.771419 systemd-journald[217]: Collecting audit messages is disabled. Jan 13 20:32:11.771436 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:32:11.771442 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:32:11.771450 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:32:11.771457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:32:11.771463 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:32:11.771469 kernel: Bridge firewalling registered Jan 13 20:32:11.771476 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:32:11.771483 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 13 20:32:11.771490 systemd-journald[217]: Journal started
Jan 13 20:32:11.771504 systemd-journald[217]: Runtime Journal (/run/log/journal/aff5646d89484528aec676137f0a08e0) is 4.8M, max 38.6M, 33.8M free.
Jan 13 20:32:11.744296 systemd-modules-load[218]: Inserted module 'overlay'
Jan 13 20:32:11.769084 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 13 20:32:11.773497 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:32:11.774210 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:32:11.779344 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:32:11.780181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:32:11.786266 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:32:11.786551 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:32:11.786795 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:32:11.789349 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:32:11.797355 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:32:11.800289 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:32:11.800568 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:32:11.804982 dracut-cmdline[249]: dracut-dracut-053
Jan 13 20:32:11.809060 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:32:11.820136 systemd-resolved[250]: Positive Trust Anchors:
Jan 13 20:32:11.820143 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:32:11.820165 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:32:11.821993 systemd-resolved[250]: Defaulting to hostname 'linux'.
Jan 13 20:32:11.822582 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:32:11.822709 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:32:11.850207 kernel: SCSI subsystem initialized
Jan 13 20:32:11.855206 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:32:11.862209 kernel: iscsi: registered transport (tcp)
Jan 13 20:32:11.874207 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:32:11.874222 kernel: QLogic iSCSI HBA Driver
Jan 13 20:32:11.892738 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:32:11.896314 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:32:11.910225 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:32:11.910245 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:32:11.910254 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:32:11.942242 kernel: raid6: avx2x4 gen() 49340 MB/s
Jan 13 20:32:11.958240 kernel: raid6: avx2x2 gen() 55653 MB/s
Jan 13 20:32:11.975395 kernel: raid6: avx2x1 gen() 46705 MB/s
Jan 13 20:32:11.975412 kernel: raid6: using algorithm avx2x2 gen() 55653 MB/s
Jan 13 20:32:11.993395 kernel: raid6: .... xor() 33429 MB/s, rmw enabled
Jan 13 20:32:11.993417 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 20:32:12.006204 kernel: xor: automatically using best checksumming function avx
Jan 13 20:32:12.091211 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:32:12.096376 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:32:12.101289 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:32:12.108309 systemd-udevd[434]: Using default interface naming scheme 'v255'.
Jan 13 20:32:12.110617 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:32:12.121321 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:32:12.127925 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation
Jan 13 20:32:12.142840 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:32:12.150413 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:32:12.217349 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:32:12.222284 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:32:12.229421 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:32:12.230082 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:32:12.230840 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:32:12.231240 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:32:12.236304 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:32:12.243278 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:32:12.281206 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Jan 13 20:32:12.283205 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Jan 13 20:32:12.287327 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Jan 13 20:32:12.291231 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Jan 13 20:32:12.295865 kernel: vmw_pvscsi: using 64bit dma
Jan 13 20:32:12.295883 kernel: vmw_pvscsi: max_id: 16
Jan 13 20:32:12.295892 kernel: vmw_pvscsi: setting ring_pages to 8
Jan 13 20:32:12.298215 kernel: vmw_pvscsi: enabling reqCallThreshold
Jan 13 20:32:12.298232 kernel: vmw_pvscsi: driver-based request coalescing enabled
Jan 13 20:32:12.298244 kernel: vmw_pvscsi: using MSI-X
Jan 13 20:32:12.301240 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Jan 13 20:32:12.304293 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Jan 13 20:32:12.309370 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:32:12.309380 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Jan 13 20:32:12.309458 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Jan 13 20:32:12.315516 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:32:12.315736 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:32:12.316074 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:32:12.316173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:32:12.316258 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:32:12.316370 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:32:12.322531 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 20:32:12.322550 kernel: AES CTR mode by8 optimization enabled
Jan 13 20:32:12.322667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:32:12.323205 kernel: libata version 3.00 loaded.
Jan 13 20:32:12.325204 kernel: ata_piix 0000:00:07.1: version 2.13
Jan 13 20:32:12.333087 kernel: scsi host1: ata_piix
Jan 13 20:32:12.333158 kernel: scsi host2: ata_piix
Jan 13 20:32:12.333267 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Jan 13 20:32:12.333276 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Jan 13 20:32:12.341782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:32:12.345292 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:32:12.357415 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:32:12.499217 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Jan 13 20:32:12.506214 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Jan 13 20:32:12.515628 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Jan 13 20:32:12.523099 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 13 20:32:12.523375 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Jan 13 20:32:12.523463 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Jan 13 20:32:12.523541 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Jan 13 20:32:12.523615 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:32:12.523626 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 13 20:32:12.540233 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Jan 13 20:32:12.551408 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 20:32:12.551422 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (495)
Jan 13 20:32:12.551430 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 13 20:32:12.555294 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (490)
Jan 13 20:32:12.554039 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Jan 13 20:32:12.557839 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Jan 13 20:32:12.560470 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Jan 13 20:32:12.562594 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Jan 13 20:32:12.562713 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Jan 13 20:32:12.570289 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:32:12.596478 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:32:13.618725 disk-uuid[593]: The operation has completed successfully.
Jan 13 20:32:13.621208 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:32:13.655767 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:32:13.656037 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:32:13.659306 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:32:13.660994 sh[611]: Success
Jan 13 20:32:13.669209 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 20:32:13.714000 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:32:13.726097 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:32:13.726466 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:32:13.741762 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 13 20:32:13.741785 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:32:13.741794 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:32:13.742921 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:32:13.744384 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:32:13.751210 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:32:13.753346 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:32:13.763337 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Jan 13 20:32:13.764849 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:32:13.786475 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:32:13.786499 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:32:13.786509 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:32:13.834223 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:32:13.840577 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:32:13.843218 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:32:13.847445 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:32:13.856716 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:32:13.863370 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jan 13 20:32:13.868323 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:32:13.920624 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:32:13.931345 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:32:13.943353 systemd-networkd[799]: lo: Link UP
Jan 13 20:32:13.943361 systemd-networkd[799]: lo: Gained carrier
Jan 13 20:32:13.944183 systemd-networkd[799]: Enumeration completed
Jan 13 20:32:13.944504 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:32:13.944550 systemd-networkd[799]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Jan 13 20:32:13.948628 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Jan 13 20:32:13.948747 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Jan 13 20:32:13.947925 systemd-networkd[799]: ens192: Link UP
Jan 13 20:32:13.947927 systemd-networkd[799]: ens192: Gained carrier
Jan 13 20:32:13.948378 systemd[1]: Reached target network.target - Network.
Jan 13 20:32:13.960942 ignition[671]: Ignition 2.20.0
Jan 13 20:32:13.961187 ignition[671]: Stage: fetch-offline
Jan 13 20:32:13.961339 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:32:13.961464 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 13 20:32:13.961652 ignition[671]: parsed url from cmdline: ""
Jan 13 20:32:13.961682 ignition[671]: no config URL provided
Jan 13 20:32:13.961791 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:32:13.961924 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:32:13.962406 ignition[671]: config successfully fetched
Jan 13 20:32:13.962423 ignition[671]: parsing config with SHA512: 18820ce99c1def43d174d421c514abab9449b228f2fa0af38cfea748a5ce5e813085bce6db485e78de311edad2bcf5c2814e9a9fddddc48054d7de85048e3fd7
Jan 13 20:32:13.964919 unknown[671]: fetched base config from "system"
Jan 13 20:32:13.964930 unknown[671]: fetched user config from "vmware"
Jan 13 20:32:13.965265 ignition[671]: fetch-offline: fetch-offline passed
Jan 13 20:32:13.965310 ignition[671]: Ignition finished successfully
Jan 13 20:32:13.966026 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:32:13.966378 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 20:32:13.970291 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:32:13.977034 ignition[808]: Ignition 2.20.0
Jan 13 20:32:13.977040 ignition[808]: Stage: kargs
Jan 13 20:32:13.977160 ignition[808]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:32:13.977166 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 13 20:32:13.977808 ignition[808]: kargs: kargs passed
Jan 13 20:32:13.977851 ignition[808]: Ignition finished successfully
Jan 13 20:32:13.978788 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:32:13.986431 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:32:13.992529 ignition[814]: Ignition 2.20.0
Jan 13 20:32:13.992538 ignition[814]: Stage: disks
Jan 13 20:32:13.992627 ignition[814]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:32:13.992633 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 13 20:32:13.993165 ignition[814]: disks: disks passed
Jan 13 20:32:13.993192 ignition[814]: Ignition finished successfully
Jan 13 20:32:13.993765 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:32:13.994178 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:32:13.994431 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:32:13.994656 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:32:13.994850 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:32:13.995045 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:32:14.003367 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:32:14.015337 systemd-fsck[823]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 20:32:14.016337 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:32:14.020286 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:32:14.069209 kernel: EXT4-fs (sda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 13 20:32:14.069651 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:32:14.069960 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:32:14.073240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:32:14.074263 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:32:14.074611 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:32:14.074642 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:32:14.074663 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:32:14.078107 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:32:14.078812 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:32:14.083287 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (831)
Jan 13 20:32:14.085468 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:32:14.085485 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:32:14.085493 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:32:14.089214 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:32:14.090040 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:32:14.107028 initrd-setup-root[855]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:32:14.109115 initrd-setup-root[862]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:32:14.111437 initrd-setup-root[869]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:32:14.113907 initrd-setup-root[876]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:32:14.163104 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:32:14.167266 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:32:14.168716 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:32:14.173257 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:32:14.184237 ignition[943]: INFO : Ignition 2.20.0
Jan 13 20:32:14.184237 ignition[943]: INFO : Stage: mount
Jan 13 20:32:14.184237 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:32:14.185896 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 13 20:32:14.185896 ignition[943]: INFO : mount: mount passed
Jan 13 20:32:14.185896 ignition[943]: INFO : Ignition finished successfully
Jan 13 20:32:14.185328 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:32:14.189291 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:32:14.190373 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:32:14.740319 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:32:14.746367 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:32:14.760419 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (955)
Jan 13 20:32:14.760448 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:32:14.760465 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:32:14.762571 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:32:14.766215 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:32:14.767143 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:32:14.784033 ignition[972]: INFO : Ignition 2.20.0
Jan 13 20:32:14.784033 ignition[972]: INFO : Stage: files
Jan 13 20:32:14.784495 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:32:14.784495 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 13 20:32:14.784843 ignition[972]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:32:14.785360 ignition[972]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:32:14.785360 ignition[972]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:32:14.787426 ignition[972]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:32:14.787555 ignition[972]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:32:14.787676 unknown[972]: wrote ssh authorized keys file for user: core
Jan 13 20:32:14.787839 ignition[972]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:32:14.789722 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:32:14.789881 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 20:32:14.829494 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:32:14.905493 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:32:14.905493 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:32:14.905900 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:32:14.907108 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:32:14.907108 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:32:14.907108 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:32:14.907108 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:32:14.907108 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 13 20:32:15.380569 systemd-networkd[799]: ens192: Gained IPv6LL
Jan 13 20:32:15.405228 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 20:32:15.804191 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:32:15.804191 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jan 13 20:32:15.804764 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 13 20:32:15.804764 ignition[972]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:32:15.845366 ignition[972]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:32:15.847712 ignition[972]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:32:15.847712 ignition[972]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:32:15.847712 ignition[972]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:32:15.847712 ignition[972]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:32:15.848737 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:32:15.848737 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:32:15.848737 ignition[972]: INFO : files: files passed
Jan 13 20:32:15.848737 ignition[972]: INFO : Ignition finished successfully
Jan 13 20:32:15.848589 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:32:15.853329 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:32:15.854786 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:32:15.855158 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:32:15.855247 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:32:15.861084 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:32:15.861084 initrd-setup-root-after-ignition[1003]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:32:15.861967 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:32:15.863005 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:32:15.863445 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:32:15.866274 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:32:15.877375 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:32:15.877435 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:32:15.877682 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:32:15.877789 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:32:15.877972 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:32:15.878405 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:32:15.886940 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:32:15.890388 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:32:15.895530 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:32:15.895781 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:32:15.896086 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:32:15.896312 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:32:15.896375 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:32:15.896582 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:32:15.896713 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:32:15.896834 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:32:15.896977 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:32:15.897163 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:32:15.897376 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:32:15.897557 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:32:15.897754 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:32:15.898087 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:32:15.898277 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:32:15.898481 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:32:15.898561 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:32:15.898863 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:32:15.898994 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:32:15.899163 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:32:15.899212 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:32:15.899351 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:32:15.899407 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:32:15.899635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:32:15.899694 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:32:15.899932 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:32:15.900074 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:32:15.902334 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:32:15.902517 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:32:15.902709 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:32:15.902883 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:32:15.902977 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:32:15.903179 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:32:15.903235 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:32:15.903495 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:32:15.903574 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:32:15.903796 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:32:15.903870 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:32:15.913339 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:32:15.916334 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:32:15.916468 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:32:15.916576 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:32:15.916788 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:32:15.916882 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:32:15.921065 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:32:15.921287 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:32:15.923575 ignition[1027]: INFO : Ignition 2.20.0
Jan 13 20:32:15.923575 ignition[1027]: INFO : Stage: umount
Jan 13 20:32:15.923853 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:32:15.923853 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 13 20:32:15.924997 ignition[1027]: INFO : umount: umount passed
Jan 13 20:32:15.924997 ignition[1027]: INFO : Ignition finished successfully
Jan 13 20:32:15.925314 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:32:15.925362 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:32:15.926437 systemd[1]: Stopped target network.target - Network.
Jan 13 20:32:15.926530 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:32:15.926562 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:32:15.926679 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:32:15.926702 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:32:15.926811 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:32:15.926832 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:32:15.926926 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:32:15.926948 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:32:15.927112 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:32:15.927268 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:32:15.929608 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:32:15.931307 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:32:15.931359 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:32:15.931551 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:32:15.931572 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:32:15.937431 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:32:15.937528 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:32:15.937555 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:32:15.937678 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jan 13 20:32:15.937699 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jan 13 20:32:15.937851 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:32:15.938064 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:32:15.939148 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:32:15.941291 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:32:15.941466 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:32:15.941723 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:32:15.941877 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:32:15.942129 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:32:15.942316 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:32:15.947597 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:32:15.947829 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:32:15.948239 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:32:15.948452 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:32:15.948806 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:32:15.948833 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:32:15.948952 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:32:15.948970 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:32:15.949073 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:32:15.949096 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:32:15.950094 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:32:15.950119 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:32:15.950317 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:32:15.950340 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:32:15.960406 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:32:15.960546 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:32:15.960582 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:32:15.960748 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 20:32:15.960778 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:32:15.960945 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:32:15.960972 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:32:15.961129 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:32:15.961156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:32:15.964214 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:32:15.964314 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:32:16.009771 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:32:16.009845 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:32:16.010346 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:32:16.010519 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:32:16.010558 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:32:16.015392 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:32:16.026147 systemd[1]: Switching root.
Jan 13 20:32:16.049925 systemd-journald[217]: Journal stopped
Jan 13 20:32:17.031834 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:32:17.031858 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:32:17.031866 kernel: SELinux: policy capability open_perms=1
Jan 13 20:32:17.031872 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:32:17.031877 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:32:17.031883 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:32:17.031890 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:32:17.031896 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:32:17.031901 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:32:17.031907 kernel: audit: type=1403 audit(1736800336.612:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:32:17.031914 systemd[1]: Successfully loaded SELinux policy in 30.841ms.
Jan 13 20:32:17.031921 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.655ms.
Jan 13 20:32:17.031928 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:32:17.031936 systemd[1]: Detected virtualization vmware.
Jan 13 20:32:17.031943 systemd[1]: Detected architecture x86-64.
Jan 13 20:32:17.031950 systemd[1]: Detected first boot.
Jan 13 20:32:17.031957 systemd[1]: Initializing machine ID from random generator.
Jan 13 20:32:17.031965 zram_generator::config[1070]: No configuration found.
Jan 13 20:32:17.031972 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:32:17.031979 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 13 20:32:17.031986 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Jan 13 20:32:17.031992 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:32:17.031998 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:32:17.032005 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:32:17.032011 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:32:17.032019 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:32:17.032026 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:32:17.032033 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:32:17.032039 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:32:17.032046 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:32:17.032053 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:32:17.032059 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:32:17.032067 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:32:17.032074 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:32:17.032081 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:32:17.032087 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:32:17.032095 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:32:17.032101 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:32:17.032108 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 20:32:17.032114 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:32:17.032123 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:32:17.032131 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:32:17.032138 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:32:17.032145 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:32:17.032152 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:32:17.032158 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:32:17.032165 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:32:17.032173 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:32:17.032180 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:32:17.032187 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:32:17.032194 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:32:17.032229 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:32:17.032244 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:32:17.032252 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:32:17.032258 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:32:17.032265 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:32:17.032272 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:32:17.032280 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:32:17.032287 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:32:17.032294 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:32:17.032302 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:32:17.032310 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:32:17.032317 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:32:17.032324 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:32:17.032331 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Jan 13 20:32:17.032338 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:32:17.032345 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:32:17.032352 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:32:17.032359 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:32:17.032367 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:32:17.032374 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:32:17.032382 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:32:17.032389 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:32:17.032396 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:32:17.032403 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:32:17.032410 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:32:17.032416 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:32:17.032425 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:32:17.032432 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:32:17.032439 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:32:17.032446 kernel: fuse: init (API version 7.39)
Jan 13 20:32:17.032453 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:32:17.032472 systemd-journald[1150]: Collecting audit messages is disabled.
Jan 13 20:32:17.032489 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:32:17.032497 systemd-journald[1150]: Journal started
Jan 13 20:32:17.032511 systemd-journald[1150]: Runtime Journal (/run/log/journal/e287f4c60b504381b481995ff000f4e0) is 4.8M, max 38.6M, 33.8M free.
Jan 13 20:32:16.911208 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:32:16.923747 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 13 20:32:16.923941 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:32:17.033035 jq[1137]: true
Jan 13 20:32:17.034234 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:32:17.034250 systemd[1]: Stopped verity-setup.service.
Jan 13 20:32:17.036210 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:32:17.041209 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:32:17.041561 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:32:17.041713 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:32:17.041860 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:32:17.042188 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:32:17.042554 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:32:17.042704 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:32:17.049404 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:32:17.049639 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:32:17.049712 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:32:17.049940 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:32:17.050008 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:32:17.050236 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:32:17.050311 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:32:17.050526 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:32:17.050596 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:32:17.050813 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:32:17.051067 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:32:17.051316 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:32:17.065993 jq[1158]: true
Jan 13 20:32:17.070856 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:32:17.075392 kernel: ACPI: bus type drm_connector registered
Jan 13 20:32:17.079211 kernel: loop: module loaded
Jan 13 20:32:17.081801 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:32:17.083217 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:32:17.084230 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:32:17.084252 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:32:17.086715 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:32:17.090320 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:32:17.093292 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:32:17.095746 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:32:17.111635 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:32:17.113848 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:32:17.113982 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:32:17.119267 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:32:17.133344 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:32:17.135363 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:32:17.141302 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:32:17.142506 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:32:17.142755 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:32:17.142837 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:32:17.143099 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:32:17.143172 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:32:17.143394 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:32:17.143540 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:32:17.143755 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:32:17.143984 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:32:17.147503 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:32:17.154375 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:32:17.154512 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:32:17.164266 systemd-journald[1150]: Time spent on flushing to /var/log/journal/e287f4c60b504381b481995ff000f4e0 is 19.152ms for 1837 entries.
Jan 13 20:32:17.164266 systemd-journald[1150]: System Journal (/var/log/journal/e287f4c60b504381b481995ff000f4e0) is 8.0M, max 584.8M, 576.8M free.
Jan 13 20:32:17.192235 systemd-journald[1150]: Received client request to flush runtime journal.
Jan 13 20:32:17.192259 kernel: loop0: detected capacity change from 0 to 2960
Jan 13 20:32:17.193110 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:32:17.194364 ignition[1178]: Ignition 2.20.0
Jan 13 20:32:17.194550 ignition[1178]: deleting config from guestinfo properties
Jan 13 20:32:17.229720 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:32:17.233966 ignition[1178]: Successfully deleted config
Jan 13 20:32:17.237357 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:32:17.237915 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:32:17.238404 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Jan 13 20:32:17.242751 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:32:17.246523 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:32:17.248280 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Jan 13 20:32:17.248291 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Jan 13 20:32:17.252293 udevadm[1224]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 20:32:17.255259 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:32:17.258359 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:32:17.262561 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:32:17.276209 kernel: loop1: detected capacity change from 0 to 141000
Jan 13 20:32:17.287728 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:32:17.294846 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:32:17.311785 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Jan 13 20:32:17.311796 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Jan 13 20:32:17.315157 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:32:17.322223 kernel: loop2: detected capacity change from 0 to 138184
Jan 13 20:32:17.394297 kernel: loop3: detected capacity change from 0 to 205544
Jan 13 20:32:17.445217 kernel: loop4: detected capacity change from 0 to 2960
Jan 13 20:32:17.467232 kernel: loop5: detected capacity change from 0 to 141000
Jan 13 20:32:17.492242 kernel: loop6: detected capacity change from 0 to 138184
Jan 13 20:32:17.522269 kernel: loop7: detected capacity change from 0 to 205544
Jan 13 20:32:17.543109 (sd-merge)[1243]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Jan 13 20:32:17.543405 (sd-merge)[1243]: Merged extensions into '/usr'.
Jan 13 20:32:17.548532 systemd[1]: Reloading requested from client PID 1206 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:32:17.548541 systemd[1]: Reloading...
Jan 13 20:32:17.614233 zram_generator::config[1269]: No configuration found.
Jan 13 20:32:17.626447 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:32:17.689981 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 13 20:32:17.706412 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:32:17.734953 systemd[1]: Reloading finished in 186 ms.
Jan 13 20:32:17.763257 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:32:17.763554 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:32:17.763808 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:32:17.769359 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:32:17.770401 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:32:17.773282 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:32:17.777797 systemd[1]: Reloading requested from client PID 1326 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:32:17.777805 systemd[1]: Reloading...
Jan 13 20:32:17.780971 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:32:17.781286 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:32:17.781768 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:32:17.781933 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Jan 13 20:32:17.781966 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Jan 13 20:32:17.784242 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:32:17.784292 systemd-tmpfiles[1327]: Skipping /boot
Jan 13 20:32:17.789445 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:32:17.789524 systemd-tmpfiles[1327]: Skipping /boot
Jan 13 20:32:17.803745 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
Jan 13 20:32:17.826242 zram_generator::config[1353]: No configuration found.
Jan 13 20:32:17.919839 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 13 20:32:17.938159 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:32:17.953228 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1402)
Jan 13 20:32:17.965207 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 20:32:17.972215 kernel: ACPI: button: Power Button [PWRF]
Jan 13 20:32:17.985088 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 20:32:17.985485 systemd[1]: Reloading finished in 207 ms.
Jan 13 20:32:17.994877 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:32:18.002354 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:32:18.015143 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Jan 13 20:32:18.020627 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:32:18.021759 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:32:18.026523 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:32:18.033866 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:32:18.035514 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:32:18.037300 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:32:18.039408 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:32:18.041683 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:32:18.042263 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Jan 13 20:32:18.045400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:32:18.046597 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:32:18.048295 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:32:18.051288 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:32:18.055280 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:32:18.060430 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Jan 13 20:32:18.060559 kernel: Guest personality initialized and is active
Jan 13 20:32:18.060573 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jan 13 20:32:18.060588 kernel: Initialized host personality
Jan 13 20:32:18.058473 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:32:18.062307 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:32:18.062434 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:32:18.062802 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:32:18.063230 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:32:18.063657 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:32:18.064237 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:32:18.065477 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:32:18.066107 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:32:18.067394 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:32:18.070393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:32:18.070484 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:32:18.070846 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:32:18.083283 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:32:18.094703 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:32:18.095354 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:32:18.095815 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:32:18.098220 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Jan 13 20:32:18.101371 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:32:18.127633 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:32:18.134740 (udev-worker)[1400]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Jan 13 20:32:18.136562 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:32:18.139015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:32:18.145234 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:32:18.145260 augenrules[1486]: No rules
Jan 13 20:32:18.148346 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:32:18.148650 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:32:18.155212 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:32:18.155954 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:32:18.160409 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:32:18.168408 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:32:18.179972 lvm[1502]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:32:18.207068 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:32:18.207282 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:32:18.214338 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:32:18.217950 lvm[1506]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:32:18.221110 systemd-networkd[1451]: lo: Link UP
Jan 13 20:32:18.221114 systemd-networkd[1451]: lo: Gained carrier
Jan 13 20:32:18.223577 systemd-networkd[1451]: Enumeration completed
Jan 13 20:32:18.223633 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:32:18.223784 systemd-networkd[1451]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Jan 13 20:32:18.226294 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Jan 13 20:32:18.226416 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Jan 13 20:32:18.226577 systemd-networkd[1451]: ens192: Link UP
Jan 13 20:32:18.226666 systemd-networkd[1451]: ens192: Gained carrier
Jan 13 20:32:18.230278 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:32:18.232887 systemd-resolved[1452]: Positive Trust Anchors:
Jan 13 20:32:18.233407 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:32:18.233461 systemd-resolved[1452]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:32:18.233662 systemd-resolved[1452]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:32:18.236476 systemd-resolved[1452]: Defaulting to hostname 'linux'.
Jan 13 20:32:18.242094 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:32:18.242462 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:32:18.242916 systemd[1]: Reached target network.target - Network.
Jan 13 20:32:18.243090 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:32:18.243253 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:32:18.243438 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:32:18.243597 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:32:18.243744 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:32:18.243879 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:32:18.243931 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:32:18.244051 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:32:18.244278 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:32:18.244466 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:32:18.244589 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:32:18.245419 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:32:18.246458 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:32:18.250615 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:32:18.251329 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:32:18.251606 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:32:18.252067 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:32:18.252218 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:32:18.252390 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:32:18.252422 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:32:18.253317 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:32:18.255299 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:32:18.256992 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:32:18.259040 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:32:18.259314 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:32:18.260297 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:32:18.265252 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:32:18.266155 jq[1517]: false
Jan 13 20:32:18.267360 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:32:18.268305 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:32:18.272443 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:32:18.273443 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:32:18.273837 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:32:18.274296 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:32:18.276904 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:32:18.277408 extend-filesystems[1518]: Found loop4
Jan 13 20:32:18.278291 extend-filesystems[1518]: Found loop5
Jan 13 20:32:18.278666 extend-filesystems[1518]: Found loop6
Jan 13 20:32:18.278666 extend-filesystems[1518]: Found loop7
Jan 13 20:32:18.278666 extend-filesystems[1518]: Found sda
Jan 13 20:32:18.278666 extend-filesystems[1518]: Found sda1
Jan 13 20:32:18.278666 extend-filesystems[1518]: Found sda2
Jan 13 20:32:18.278666 extend-filesystems[1518]: Found sda3
Jan 13 20:32:18.278666 extend-filesystems[1518]: Found usr
Jan 13 20:32:18.278666 extend-filesystems[1518]: Found sda4
Jan 13 20:32:18.278666 extend-filesystems[1518]: Found sda6
Jan 13 20:32:18.278666 extend-filesystems[1518]: Found sda7
Jan 13 20:32:18.278666 extend-filesystems[1518]: Found sda9
Jan 13 20:32:18.278666 extend-filesystems[1518]: Checking size of /dev/sda9
Jan 13 20:32:18.279600 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Jan 13 20:32:18.285455 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:32:18.285564 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:32:18.294435 extend-filesystems[1518]: Old size kept for /dev/sda9
Jan 13 20:32:18.294731 extend-filesystems[1518]: Found sr0
Jan 13 20:32:18.295185 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:32:18.295507 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:32:18.296403 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:32:18.296495 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:32:18.299019 jq[1526]: true
Jan 13 20:32:18.318291 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Jan 13 20:32:18.318561 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:32:18.319234 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:32:18.322265 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Jan 13 20:32:18.324208 tar[1532]: linux-amd64/helm
Jan 13 20:32:18.324193 dbus-daemon[1516]: [system] SELinux support is enabled
Jan 13 20:32:18.324303 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:32:18.325829 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:32:18.325844 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:32:18.325958 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:32:18.325969 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:33:31.068885 systemd-timesyncd[1453]: Contacted time server 152.70.159.102:123 (0.flatcar.pool.ntp.org).
Jan 13 20:33:31.068910 systemd-timesyncd[1453]: Initial clock synchronization to Mon 2025-01-13 20:33:31.068835 UTC.
Jan 13 20:33:31.068937 systemd-resolved[1452]: Clock change detected. Flushing caches.
Jan 13 20:33:31.069612 update_engine[1525]: I20250113 20:33:31.069568 1525 main.cc:92] Flatcar Update Engine starting
Jan 13 20:33:31.070743 (ntainerd)[1550]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:33:31.075368 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:33:31.075514 update_engine[1525]: I20250113 20:33:31.075263 1525 update_check_scheduler.cc:74] Next update check in 3m50s
Jan 13 20:33:31.076971 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:33:31.080655 jq[1549]: true
Jan 13 20:33:31.100499 unknown[1555]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Jan 13 20:33:31.102840 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Jan 13 20:33:31.105964 unknown[1555]: Core dump limit set to -1
Jan 13 20:33:31.112041 kernel: NET: Registered PF_VSOCK protocol family
Jan 13 20:33:31.148776 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1401)
Jan 13 20:33:31.151347 bash[1579]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:33:31.152088 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:33:31.152502 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 20:33:31.156228 systemd-logind[1524]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 20:33:31.156245 systemd-logind[1524]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 20:33:31.158274 systemd-logind[1524]: New seat seat0.
Jan 13 20:33:31.158915 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:33:31.275237 locksmithd[1559]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:33:31.359193 containerd[1550]: time="2025-01-13T20:33:31.359149601Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:33:31.402906 containerd[1550]: time="2025-01-13T20:33:31.402849712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:33:31.404402 containerd[1550]: time="2025-01-13T20:33:31.403851998Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:33:31.404402 containerd[1550]: time="2025-01-13T20:33:31.403870560Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:33:31.404402 containerd[1550]: time="2025-01-13T20:33:31.403879605Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:33:31.404402 containerd[1550]: time="2025-01-13T20:33:31.403962244Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:33:31.404402 containerd[1550]: time="2025-01-13T20:33:31.403971385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:33:31.404402 containerd[1550]: time="2025-01-13T20:33:31.404004732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:33:31.404402 containerd[1550]: time="2025-01-13T20:33:31.404011950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:33:31.404402 containerd[1550]: time="2025-01-13T20:33:31.404097417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:33:31.404402 containerd[1550]: time="2025-01-13T20:33:31.404105696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:33:31.404402 containerd[1550]: time="2025-01-13T20:33:31.404112592Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:33:31.404402 containerd[1550]: time="2025-01-13T20:33:31.404117455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:33:31.404556 containerd[1550]: time="2025-01-13T20:33:31.404157403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:33:31.404556 containerd[1550]: time="2025-01-13T20:33:31.404261683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:33:31.404556 containerd[1550]: time="2025-01-13T20:33:31.404312217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:33:31.404556 containerd[1550]: time="2025-01-13T20:33:31.404319661Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:33:31.404556 containerd[1550]: time="2025-01-13T20:33:31.404360689Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:33:31.404556 containerd[1550]: time="2025-01-13T20:33:31.404385737Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:33:31.405877 containerd[1550]: time="2025-01-13T20:33:31.405866842Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:33:31.405934 containerd[1550]: time="2025-01-13T20:33:31.405926507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:33:31.406008 containerd[1550]: time="2025-01-13T20:33:31.406000998Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:33:31.406043 containerd[1550]: time="2025-01-13T20:33:31.406037524Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:33:31.406103 containerd[1550]: time="2025-01-13T20:33:31.406095830Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:33:31.406214 containerd[1550]: time="2025-01-13T20:33:31.406206167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:33:31.407754 containerd[1550]: time="2025-01-13T20:33:31.407738887Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:33:31.407826 containerd[1550]: time="2025-01-13T20:33:31.407814718Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:33:31.407844 containerd[1550]: time="2025-01-13T20:33:31.407826445Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:33:31.407844 containerd[1550]: time="2025-01-13T20:33:31.407834619Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:33:31.407844 containerd[1550]: time="2025-01-13T20:33:31.407842921Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:33:31.407890 containerd[1550]: time="2025-01-13T20:33:31.407849890Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:33:31.407890 containerd[1550]: time="2025-01-13T20:33:31.407856833Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:33:31.407890 containerd[1550]: time="2025-01-13T20:33:31.407864197Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:33:31.407890 containerd[1550]: time="2025-01-13T20:33:31.407871219Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:33:31.407890 containerd[1550]: time="2025-01-13T20:33:31.407881116Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:33:31.407890 containerd[1550]: time="2025-01-13T20:33:31.407887943Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:33:31.407966 containerd[1550]: time="2025-01-13T20:33:31.407894018Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:33:31.407966 containerd[1550]: time="2025-01-13T20:33:31.407905354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.407966 containerd[1550]: time="2025-01-13T20:33:31.407912672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.407966 containerd[1550]: time="2025-01-13T20:33:31.407919242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.407966 containerd[1550]: time="2025-01-13T20:33:31.407925948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.407966 containerd[1550]: time="2025-01-13T20:33:31.407935135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.407966 containerd[1550]: time="2025-01-13T20:33:31.407942354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.407966 containerd[1550]: time="2025-01-13T20:33:31.407948521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.407966 containerd[1550]: time="2025-01-13T20:33:31.407955077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.407966 containerd[1550]: time="2025-01-13T20:33:31.407962008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.408091 containerd[1550]: time="2025-01-13T20:33:31.407969581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.408091 containerd[1550]: time="2025-01-13T20:33:31.407975611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.408091 containerd[1550]: time="2025-01-13T20:33:31.407981537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.408091 containerd[1550]: time="2025-01-13T20:33:31.407987861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.408091 containerd[1550]: time="2025-01-13T20:33:31.407994992Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:33:31.408091 containerd[1550]: time="2025-01-13T20:33:31.408008108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.408091 containerd[1550]: time="2025-01-13T20:33:31.408015507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.408091 containerd[1550]: time="2025-01-13T20:33:31.408021133Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:33:31.408354 containerd[1550]: time="2025-01-13T20:33:31.408346291Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:33:31.408373 containerd[1550]: time="2025-01-13T20:33:31.408358898Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:33:31.408373 containerd[1550]: time="2025-01-13T20:33:31.408365335Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:33:31.408399 containerd[1550]: time="2025-01-13T20:33:31.408371837Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:33:31.408399 containerd[1550]: time="2025-01-13T20:33:31.408376865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.408399 containerd[1550]: time="2025-01-13T20:33:31.408383612Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:33:31.408399 containerd[1550]: time="2025-01-13T20:33:31.408391305Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:33:31.408399 containerd[1550]: time="2025-01-13T20:33:31.408396682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:33:31.408611 containerd[1550]: time="2025-01-13T20:33:31.408564744Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:33:31.408685 containerd[1550]: time="2025-01-13T20:33:31.408610360Z" level=info msg="Connect containerd service"
Jan 13 20:33:31.408685 containerd[1550]: time="2025-01-13T20:33:31.408628041Z" level=info msg="using legacy CRI server"
Jan 13 20:33:31.408685 containerd[1550]: time="2025-01-13T20:33:31.408632156Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:33:31.408685 containerd[1550]: time="2025-01-13T20:33:31.408684211Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:33:31.408999 containerd[1550]: time="2025-01-13T20:33:31.408984909Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:33:31.409367 containerd[1550]: time="2025-01-13T20:33:31.409110779Z" level=info msg="Start subscribing containerd event"
Jan 13 20:33:31.409367 containerd[1550]: time="2025-01-13T20:33:31.409127626Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:33:31.409367 containerd[1550]: time="2025-01-13T20:33:31.409136246Z" level=info msg="Start recovering state"
Jan 13 20:33:31.409367 containerd[1550]: time="2025-01-13T20:33:31.409151922Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:33:31.409367 containerd[1550]: time="2025-01-13T20:33:31.409170514Z" level=info msg="Start event monitor"
Jan 13 20:33:31.409367 containerd[1550]: time="2025-01-13T20:33:31.409180979Z" level=info msg="Start snapshots syncer"
Jan 13 20:33:31.409367 containerd[1550]: time="2025-01-13T20:33:31.409186548Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:33:31.409367 containerd[1550]: time="2025-01-13T20:33:31.409191024Z" level=info msg="Start streaming server"
Jan 13 20:33:31.409266 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 20:33:31.410025 containerd[1550]: time="2025-01-13T20:33:31.409956211Z" level=info msg="containerd successfully booted in 0.055215s"
Jan 13 20:33:31.458366 sshd_keygen[1551]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:33:31.473723 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:33:31.482883 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:33:31.484814 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:33:31.484923 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:33:31.488803 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:33:31.494597 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:33:31.502062 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:33:31.505076 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 20:33:31.505251 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:33:31.542713 tar[1532]: linux-amd64/LICENSE
Jan 13 20:33:31.542793 tar[1532]: linux-amd64/README.md
Jan 13 20:33:31.550264 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 20:33:32.795085 systemd-networkd[1451]: ens192: Gained IPv6LL
Jan 13 20:33:32.796677 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:33:32.797573 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:33:32.803002 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
Jan 13 20:33:32.804600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:33:32.805917 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:33:32.823456 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:33:32.834691 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 20:33:32.834822 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Jan 13 20:33:32.835406 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:33:33.560240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:33:33.560723 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:33:33.562861 systemd[1]: Startup finished in 941ms (kernel) + 4.990s (initrd) + 4.237s (userspace) = 10.169s.
Jan 13 20:33:33.565751 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:33:33.571713 agetty[1659]: failed to open credentials directory
Jan 13 20:33:33.571858 agetty[1660]: failed to open credentials directory
Jan 13 20:33:33.591823 login[1660]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 20:33:33.591981 login[1659]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 20:33:33.598937 systemd-logind[1524]: New session 2 of user core.
Jan 13 20:33:33.599237 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:33:33.605884 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:33:33.608247 systemd-logind[1524]: New session 1 of user core.
Jan 13 20:33:33.614599 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:33:33.620903 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:33:33.622446 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:33:33.682295 systemd[1701]: Queued start job for default target default.target.
Jan 13 20:33:33.687883 systemd[1701]: Created slice app.slice - User Application Slice.
Jan 13 20:33:33.687900 systemd[1701]: Reached target paths.target - Paths.
Jan 13 20:33:33.687909 systemd[1701]: Reached target timers.target - Timers.
Jan 13 20:33:33.688584 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:33:33.695447 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:33:33.695624 systemd[1701]: Reached target sockets.target - Sockets.
Jan 13 20:33:33.695682 systemd[1701]: Reached target basic.target - Basic System.
Jan 13 20:33:33.695744 systemd[1701]: Reached target default.target - Main User Target.
Jan 13 20:33:33.695778 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:33:33.695854 systemd[1701]: Startup finished in 70ms.
Jan 13 20:33:33.700836 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:33:33.701383 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:33:34.028751 kubelet[1694]: E0113 20:33:34.028683 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:33:34.030147 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:33:34.030227 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:33:44.280593 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:33:44.288010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:33:44.342854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:33:44.345245 (kubelet)[1745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:33:44.399130 kubelet[1745]: E0113 20:33:44.399091 1745 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:33:44.401612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:33:44.401705 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:33:54.652036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 20:33:54.661893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:33:54.899574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:33:54.902085 (kubelet)[1760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:33:54.923196 kubelet[1760]: E0113 20:33:54.923124 1760 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:33:54.924238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:33:54.924319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:34:05.034163 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 13 20:34:05.043890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:34:05.165326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:34:05.168516 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:34:05.205063 kubelet[1775]: E0113 20:34:05.205029 1775 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:34:05.206078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:34:05.206156 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:34:11.217279 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:34:11.218290 systemd[1]: Started sshd@0-139.178.70.110:22-147.75.109.163:47034.service - OpenSSH per-connection server daemon (147.75.109.163:47034).
Jan 13 20:34:11.256169 sshd[1783]: Accepted publickey for core from 147.75.109.163 port 47034 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:34:11.256912 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:11.259354 systemd-logind[1524]: New session 3 of user core.
Jan 13 20:34:11.267858 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:34:11.321826 systemd[1]: Started sshd@1-139.178.70.110:22-147.75.109.163:47050.service - OpenSSH per-connection server daemon (147.75.109.163:47050).
Jan 13 20:34:11.354829 sshd[1788]: Accepted publickey for core from 147.75.109.163 port 47050 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:34:11.356180 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:11.359370 systemd-logind[1524]: New session 4 of user core.
Jan 13 20:34:11.364859 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:34:11.412808 sshd[1790]: Connection closed by 147.75.109.163 port 47050
Jan 13 20:34:11.413581 sshd-session[1788]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:11.417071 systemd[1]: sshd@1-139.178.70.110:22-147.75.109.163:47050.service: Deactivated successfully.
Jan 13 20:34:11.417754 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 20:34:11.418112 systemd-logind[1524]: Session 4 logged out. Waiting for processes to exit.
Jan 13 20:34:11.418969 systemd[1]: Started sshd@2-139.178.70.110:22-147.75.109.163:47060.service - OpenSSH per-connection server daemon (147.75.109.163:47060).
Jan 13 20:34:11.420893 systemd-logind[1524]: Removed session 4.
Jan 13 20:34:11.449304 sshd[1795]: Accepted publickey for core from 147.75.109.163 port 47060 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:34:11.449952 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:11.452163 systemd-logind[1524]: New session 5 of user core.
Jan 13 20:34:11.458832 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:34:11.504535 sshd[1797]: Connection closed by 147.75.109.163 port 47060
Jan 13 20:34:11.504846 sshd-session[1795]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:11.517359 systemd[1]: sshd@2-139.178.70.110:22-147.75.109.163:47060.service: Deactivated successfully.
Jan 13 20:34:11.518238 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:34:11.518643 systemd-logind[1524]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:34:11.520950 systemd[1]: Started sshd@3-139.178.70.110:22-147.75.109.163:47070.service - OpenSSH per-connection server daemon (147.75.109.163:47070).
Jan 13 20:34:11.521931 systemd-logind[1524]: Removed session 5.
Jan 13 20:34:11.551748 sshd[1802]: Accepted publickey for core from 147.75.109.163 port 47070 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:34:11.552400 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:11.555701 systemd-logind[1524]: New session 6 of user core.
Jan 13 20:34:11.557836 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:34:11.604490 sshd[1804]: Connection closed by 147.75.109.163 port 47070
Jan 13 20:34:11.604781 sshd-session[1802]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:11.613798 systemd[1]: sshd@3-139.178.70.110:22-147.75.109.163:47070.service: Deactivated successfully.
Jan 13 20:34:11.614637 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:34:11.615520 systemd-logind[1524]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:34:11.619990 systemd[1]: Started sshd@4-139.178.70.110:22-147.75.109.163:47080.service - OpenSSH per-connection server daemon (147.75.109.163:47080).
Jan 13 20:34:11.620735 systemd-logind[1524]: Removed session 6.
Jan 13 20:34:11.649079 sshd[1809]: Accepted publickey for core from 147.75.109.163 port 47080 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:34:11.649903 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:11.653366 systemd-logind[1524]: New session 7 of user core.
Jan 13 20:34:11.659841 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:34:11.720113 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:34:11.720273 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:34:11.729079 sudo[1812]: pam_unix(sudo:session): session closed for user root
Jan 13 20:34:11.730478 sshd[1811]: Connection closed by 147.75.109.163 port 47080
Jan 13 20:34:11.730066 sshd-session[1809]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:11.744548 systemd[1]: sshd@4-139.178.70.110:22-147.75.109.163:47080.service: Deactivated successfully.
Jan 13 20:34:11.745492 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:34:11.746346 systemd-logind[1524]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:34:11.750058 systemd[1]: Started sshd@5-139.178.70.110:22-147.75.109.163:47086.service - OpenSSH per-connection server daemon (147.75.109.163:47086).
Jan 13 20:34:11.752885 systemd-logind[1524]: Removed session 7.
Jan 13 20:34:11.778151 sshd[1817]: Accepted publickey for core from 147.75.109.163 port 47086 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:34:11.778883 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:11.781359 systemd-logind[1524]: New session 8 of user core.
Jan 13 20:34:11.788903 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 20:34:11.836205 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 20:34:11.836521 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:34:11.838470 sudo[1821]: pam_unix(sudo:session): session closed for user root
Jan 13 20:34:11.841159 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 20:34:11.841319 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:34:11.854064 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:34:11.867605 augenrules[1843]: No rules
Jan 13 20:34:11.868163 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:34:11.868366 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:34:11.869059 sudo[1820]: pam_unix(sudo:session): session closed for user root
Jan 13 20:34:11.869691 sshd[1819]: Connection closed by 147.75.109.163 port 47086
Jan 13 20:34:11.870444 sshd-session[1817]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:11.874087 systemd[1]: sshd@5-139.178.70.110:22-147.75.109.163:47086.service: Deactivated successfully.
Jan 13 20:34:11.874797 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 20:34:11.875561 systemd-logind[1524]: Session 8 logged out. Waiting for processes to exit.
Jan 13 20:34:11.876200 systemd[1]: Started sshd@6-139.178.70.110:22-147.75.109.163:47102.service - OpenSSH per-connection server daemon (147.75.109.163:47102).
Jan 13 20:34:11.878924 systemd-logind[1524]: Removed session 8.
Jan 13 20:34:11.906040 sshd[1851]: Accepted publickey for core from 147.75.109.163 port 47102 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:34:11.906844 sshd-session[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:11.909429 systemd-logind[1524]: New session 9 of user core.
Jan 13 20:34:11.920893 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 20:34:11.968246 sudo[1854]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:34:11.968400 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:34:12.363907 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 20:34:12.364011 (dockerd)[1871]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 20:34:12.731069 dockerd[1871]: time="2025-01-13T20:34:12.730580204Z" level=info msg="Starting up"
Jan 13 20:34:12.837857 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1019141039-merged.mount: Deactivated successfully.
Jan 13 20:34:12.858169 dockerd[1871]: time="2025-01-13T20:34:12.857991889Z" level=info msg="Loading containers: start."
Jan 13 20:34:13.060781 kernel: Initializing XFRM netlink socket
Jan 13 20:34:13.122435 systemd-networkd[1451]: docker0: Link UP
Jan 13 20:34:13.160639 dockerd[1871]: time="2025-01-13T20:34:13.160610661Z" level=info msg="Loading containers: done."
Jan 13 20:34:13.173940 dockerd[1871]: time="2025-01-13T20:34:13.173909578Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 20:34:13.174057 dockerd[1871]: time="2025-01-13T20:34:13.173979701Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 13 20:34:13.174057 dockerd[1871]: time="2025-01-13T20:34:13.174047883Z" level=info msg="Daemon has completed initialization"
Jan 13 20:34:13.190391 dockerd[1871]: time="2025-01-13T20:34:13.190352057Z" level=info msg="API listen on /run/docker.sock"
Jan 13 20:34:13.190669 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 13 20:34:14.152928 containerd[1550]: time="2025-01-13T20:34:14.152902123Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Jan 13 20:34:14.849868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569142638.mount: Deactivated successfully.
Jan 13 20:34:15.283995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 13 20:34:15.292960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:34:15.353811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:34:15.356222 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:34:15.440473 kubelet[2117]: E0113 20:34:15.440414 2117 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:34:15.441656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:34:15.441752 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:34:16.163719 containerd[1550]: time="2025-01-13T20:34:16.163677235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:16.164334 containerd[1550]: time="2025-01-13T20:34:16.164305086Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975483"
Jan 13 20:34:16.164585 containerd[1550]: time="2025-01-13T20:34:16.164567695Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:16.166521 containerd[1550]: time="2025-01-13T20:34:16.166502588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:16.167153 containerd[1550]: time="2025-01-13T20:34:16.167135298Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 2.014209577s"
Jan 13 20:34:16.167190 containerd[1550]: time="2025-01-13T20:34:16.167154683Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\""
Jan 13 20:34:16.168350 containerd[1550]: time="2025-01-13T20:34:16.168333859Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Jan 13 20:34:16.398727 update_engine[1525]: I20250113 20:34:16.398300 1525 update_attempter.cc:509] Updating boot flags...
Jan 13 20:34:16.428185 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2137)
Jan 13 20:34:16.461771 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2136)
Jan 13 20:34:16.523113 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2136)
Jan 13 20:34:17.627895 containerd[1550]: time="2025-01-13T20:34:17.627857079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:17.628986 containerd[1550]: time="2025-01-13T20:34:17.628938382Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702157"
Jan 13 20:34:17.629236 containerd[1550]: time="2025-01-13T20:34:17.629211857Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:17.632039 containerd[1550]: time="2025-01-13T20:34:17.632012599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:17.632837 containerd[1550]: time="2025-01-13T20:34:17.632514462Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.464161922s"
Jan 13 20:34:17.632837 containerd[1550]: time="2025-01-13T20:34:17.632535425Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\""
Jan 13 20:34:17.633140 containerd[1550]: time="2025-01-13T20:34:17.633124489Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Jan 13 20:34:18.881463 containerd[1550]: time="2025-01-13T20:34:18.881392817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:18.892827 containerd[1550]: time="2025-01-13T20:34:18.892797036Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652067"
Jan 13 20:34:18.895831 containerd[1550]: time="2025-01-13T20:34:18.895798111Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:18.903781 containerd[1550]: time="2025-01-13T20:34:18.903739152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:18.904522 containerd[1550]: time="2025-01-13T20:34:18.904233425Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.271090557s"
Jan 13 20:34:18.904522 containerd[1550]: time="2025-01-13T20:34:18.904255091Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\""
Jan 13 20:34:18.904969 containerd[1550]: time="2025-01-13T20:34:18.904918377Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Jan 13 20:34:19.961222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2394474027.mount: Deactivated successfully.
Jan 13 20:34:20.243825 containerd[1550]: time="2025-01-13T20:34:20.243682691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:20.244239 containerd[1550]: time="2025-01-13T20:34:20.244215113Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243"
Jan 13 20:34:20.244807 containerd[1550]: time="2025-01-13T20:34:20.244473193Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:20.245389 containerd[1550]: time="2025-01-13T20:34:20.245353654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:20.245858 containerd[1550]: time="2025-01-13T20:34:20.245749081Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 1.340727261s"
Jan 13 20:34:20.245858 containerd[1550]: time="2025-01-13T20:34:20.245793994Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Jan 13 20:34:20.246146 containerd[1550]: time="2025-01-13T20:34:20.246112432Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 20:34:20.766138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056914331.mount: Deactivated successfully.
Jan 13 20:34:21.681952 containerd[1550]: time="2025-01-13T20:34:21.681926526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:21.682561 containerd[1550]: time="2025-01-13T20:34:21.682339194Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 13 20:34:21.683134 containerd[1550]: time="2025-01-13T20:34:21.682911562Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:21.684636 containerd[1550]: time="2025-01-13T20:34:21.684613696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:21.685421 containerd[1550]: time="2025-01-13T20:34:21.685249582Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.439101224s"
Jan 13 20:34:21.685421 containerd[1550]: time="2025-01-13T20:34:21.685267906Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 13 20:34:21.685895 containerd[1550]: time="2025-01-13T20:34:21.685644339Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 13 20:34:22.120817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3959524402.mount: Deactivated successfully.
Jan 13 20:34:22.122266 containerd[1550]: time="2025-01-13T20:34:22.122244131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:22.122982 containerd[1550]: time="2025-01-13T20:34:22.122956458Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jan 13 20:34:22.123320 containerd[1550]: time="2025-01-13T20:34:22.123305613Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:22.124697 containerd[1550]: time="2025-01-13T20:34:22.124682533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:22.125316 containerd[1550]: time="2025-01-13T20:34:22.125053057Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 439.159424ms"
Jan 13 20:34:22.125316 containerd[1550]: time="2025-01-13T20:34:22.125068384Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 13 20:34:22.125598 containerd[1550]: time="2025-01-13T20:34:22.125584008Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 13 20:34:22.642284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount214979148.mount: Deactivated successfully.
Jan 13 20:34:24.025961 containerd[1550]: time="2025-01-13T20:34:24.025922801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:24.026867 containerd[1550]: time="2025-01-13T20:34:24.026845608Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973"
Jan 13 20:34:24.027338 containerd[1550]: time="2025-01-13T20:34:24.027321366Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:24.028858 containerd[1550]: time="2025-01-13T20:34:24.028830913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:24.029603 containerd[1550]: time="2025-01-13T20:34:24.029586075Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.903985756s"
Jan 13 20:34:24.029632 containerd[1550]: time="2025-01-13T20:34:24.029606121Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jan 13 20:34:25.535156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 13 20:34:25.548818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:34:25.814018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:34:25.814624 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:34:25.879772 kubelet[2296]: E0113 20:34:25.878937 2296 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:34:25.880377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:34:25.880461 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:34:26.081744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:34:26.091877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:34:26.110244 systemd[1]: Reloading requested from client PID 2310 ('systemctl') (unit session-9.scope)...
Jan 13 20:34:26.110325 systemd[1]: Reloading...
Jan 13 20:34:26.174779 zram_generator::config[2348]: No configuration found.
Jan 13 20:34:26.232133 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 13 20:34:26.247874 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:34:26.291511 systemd[1]: Reloading finished in 180 ms.
Jan 13 20:34:26.320591 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 20:34:26.320655 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 20:34:26.320852 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:34:26.324973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:34:26.614093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:34:26.616940 (kubelet)[2415]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:34:26.637716 kubelet[2415]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:34:26.637931 kubelet[2415]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:34:26.637959 kubelet[2415]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:34:26.643528 kubelet[2415]: I0113 20:34:26.643504 2415 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:34:26.965264 kubelet[2415]: I0113 20:34:26.965202 2415 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:34:26.965264 kubelet[2415]: I0113 20:34:26.965225 2415 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:34:26.965626 kubelet[2415]: I0113 20:34:26.965565 2415 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:34:26.986525 kubelet[2415]: I0113 20:34:26.986358 2415 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:34:26.988756 kubelet[2415]: E0113 20:34:26.988740 2415 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:34:26.994517 kubelet[2415]: E0113 20:34:26.994502 2415 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:34:26.994557 kubelet[2415]: I0113 20:34:26.994548 2415 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:34:26.997332 kubelet[2415]: I0113 20:34:26.997320 2415 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:34:26.997390 kubelet[2415]: I0113 20:34:26.997379 2415 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:34:26.997471 kubelet[2415]: I0113 20:34:26.997454 2415 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:34:26.997568 kubelet[2415]: I0113 20:34:26.997472 2415 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jan 13 20:34:26.997635 kubelet[2415]: I0113 20:34:26.997571 2415 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:34:26.997635 kubelet[2415]: I0113 20:34:26.997577 2415 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:34:26.997635 kubelet[2415]: I0113 20:34:26.997632 2415 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:34:26.999205 kubelet[2415]: I0113 20:34:26.999097 2415 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:34:26.999205 kubelet[2415]: I0113 20:34:26.999143 2415 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:34:26.999205 kubelet[2415]: I0113 20:34:26.999167 2415 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:34:26.999205 kubelet[2415]: I0113 20:34:26.999177 2415 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:34:27.007947 kubelet[2415]: W0113 20:34:27.007780 2415 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Jan 13 20:34:27.007947 kubelet[2415]: E0113 20:34:27.007809 2415 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:34:27.009378 kubelet[2415]: I0113 20:34:27.009274 2415 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:34:27.010256 kubelet[2415]: W0113 20:34:27.010205 2415 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://139.178.70.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Jan 13 20:34:27.010256 kubelet[2415]: E0113 20:34:27.010228 2415 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:34:27.011047 kubelet[2415]: I0113 20:34:27.010488 2415 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:34:27.011047 kubelet[2415]: W0113 20:34:27.010922 2415 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:34:27.011536 kubelet[2415]: I0113 20:34:27.011508 2415 server.go:1269] "Started kubelet" Jan 13 20:34:27.012354 kubelet[2415]: I0113 20:34:27.012074 2415 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:34:27.012683 kubelet[2415]: I0113 20:34:27.012648 2415 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:34:27.014004 kubelet[2415]: I0113 20:34:27.013928 2415 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:34:27.015082 kubelet[2415]: I0113 20:34:27.015025 2415 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:34:27.015203 kubelet[2415]: I0113 20:34:27.015132 2415 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:34:27.018007 kubelet[2415]: E0113 20:34:27.016114 2415 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.110:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.110:6443: 
connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5ad2a1f3ebb0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:34:27.01149688 +0000 UTC m=+0.392563944,LastTimestamp:2025-01-13 20:34:27.01149688 +0000 UTC m=+0.392563944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:34:27.018772 kubelet[2415]: I0113 20:34:27.018633 2415 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:34:27.018772 kubelet[2415]: E0113 20:34:27.018721 2415 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:27.018772 kubelet[2415]: I0113 20:34:27.018753 2415 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:34:27.018877 kubelet[2415]: I0113 20:34:27.018867 2415 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:34:27.019741 kubelet[2415]: I0113 20:34:27.019729 2415 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:34:27.020312 kubelet[2415]: W0113 20:34:27.019930 2415 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Jan 13 20:34:27.020312 kubelet[2415]: E0113 20:34:27.019952 2415 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:34:27.021846 kubelet[2415]: E0113 20:34:27.021823 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="200ms" Jan 13 20:34:27.022598 kubelet[2415]: E0113 20:34:27.022588 2415 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:34:27.023007 kubelet[2415]: I0113 20:34:27.022998 2415 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:34:27.023049 kubelet[2415]: I0113 20:34:27.023044 2415 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:34:27.023115 kubelet[2415]: I0113 20:34:27.023106 2415 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:34:27.028165 kubelet[2415]: I0113 20:34:27.028048 2415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:34:27.028642 kubelet[2415]: I0113 20:34:27.028629 2415 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:34:27.028676 kubelet[2415]: I0113 20:34:27.028643 2415 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:34:27.028676 kubelet[2415]: I0113 20:34:27.028653 2415 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:34:27.028709 kubelet[2415]: E0113 20:34:27.028671 2415 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:34:27.033860 kubelet[2415]: W0113 20:34:27.033834 2415 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Jan 13 20:34:27.033895 kubelet[2415]: E0113 20:34:27.033865 2415 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:34:27.047522 kubelet[2415]: I0113 20:34:27.047174 2415 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:34:27.047522 kubelet[2415]: I0113 20:34:27.047183 2415 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:34:27.047522 kubelet[2415]: I0113 20:34:27.047191 2415 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:34:27.049716 kubelet[2415]: I0113 20:34:27.049676 2415 policy_none.go:49] "None policy: Start" Jan 13 20:34:27.050250 kubelet[2415]: I0113 20:34:27.050033 2415 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:34:27.050250 kubelet[2415]: I0113 20:34:27.050047 2415 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:34:27.054290 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Jan 13 20:34:27.060375 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:34:27.062732 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:34:27.066293 kubelet[2415]: I0113 20:34:27.066177 2415 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:34:27.066293 kubelet[2415]: I0113 20:34:27.066262 2415 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:34:27.066293 kubelet[2415]: I0113 20:34:27.066268 2415 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:34:27.067158 kubelet[2415]: I0113 20:34:27.067106 2415 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:34:27.067697 kubelet[2415]: E0113 20:34:27.067669 2415 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:34:27.133631 systemd[1]: Created slice kubepods-burstable-pod621097dad26e8d622aa6fb5eebcd6bc2.slice - libcontainer container kubepods-burstable-pod621097dad26e8d622aa6fb5eebcd6bc2.slice. Jan 13 20:34:27.151618 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice. Jan 13 20:34:27.154580 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice. 
Jan 13 20:34:27.167182 kubelet[2415]: I0113 20:34:27.167130 2415 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:34:27.167352 kubelet[2415]: E0113 20:34:27.167309 2415 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Jan 13 20:34:27.220786 kubelet[2415]: I0113 20:34:27.220631 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:27.220786 kubelet[2415]: I0113 20:34:27.220645 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:27.220786 kubelet[2415]: I0113 20:34:27.220656 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/621097dad26e8d622aa6fb5eebcd6bc2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"621097dad26e8d622aa6fb5eebcd6bc2\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:34:27.220786 kubelet[2415]: I0113 20:34:27.220664 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:27.220786 kubelet[2415]: I0113 20:34:27.220672 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:27.220895 kubelet[2415]: I0113 20:34:27.220688 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:27.220895 kubelet[2415]: I0113 20:34:27.220696 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:34:27.220895 kubelet[2415]: I0113 20:34:27.220704 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/621097dad26e8d622aa6fb5eebcd6bc2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"621097dad26e8d622aa6fb5eebcd6bc2\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:34:27.220895 kubelet[2415]: I0113 20:34:27.220711 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/621097dad26e8d622aa6fb5eebcd6bc2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"621097dad26e8d622aa6fb5eebcd6bc2\") " 
pod="kube-system/kube-apiserver-localhost" Jan 13 20:34:27.222161 kubelet[2415]: E0113 20:34:27.222142 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="400ms" Jan 13 20:34:27.368033 kubelet[2415]: I0113 20:34:27.367990 2415 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:34:27.368158 kubelet[2415]: E0113 20:34:27.368143 2415 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Jan 13 20:34:27.451046 containerd[1550]: time="2025-01-13T20:34:27.451005356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:621097dad26e8d622aa6fb5eebcd6bc2,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:27.453395 containerd[1550]: time="2025-01-13T20:34:27.453286777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:27.455722 containerd[1550]: time="2025-01-13T20:34:27.455693837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:27.622897 kubelet[2415]: E0113 20:34:27.622861 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="800ms" Jan 13 20:34:27.768984 kubelet[2415]: I0113 20:34:27.768954 2415 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:34:27.769633 kubelet[2415]: 
E0113 20:34:27.769345 2415 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Jan 13 20:34:27.895372 kubelet[2415]: W0113 20:34:27.895273 2415 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Jan 13 20:34:27.895372 kubelet[2415]: E0113 20:34:27.895321 2415 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:34:27.897692 kubelet[2415]: W0113 20:34:27.897636 2415 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Jan 13 20:34:27.897692 kubelet[2415]: E0113 20:34:27.897669 2415 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:34:27.922113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3631169804.mount: Deactivated successfully. 
Jan 13 20:34:27.982798 kubelet[2415]: W0113 20:34:27.982708 2415 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Jan 13 20:34:27.982798 kubelet[2415]: E0113 20:34:27.982774 2415 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:34:28.104186 containerd[1550]: time="2025-01-13T20:34:28.103564545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:34:28.115213 containerd[1550]: time="2025-01-13T20:34:28.115177669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:34:28.145319 containerd[1550]: time="2025-01-13T20:34:28.145301402Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:34:28.150982 containerd[1550]: time="2025-01-13T20:34:28.150934536Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:34:28.155554 containerd[1550]: time="2025-01-13T20:34:28.155531745Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:34:28.169528 containerd[1550]: 
time="2025-01-13T20:34:28.169503394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:34:28.170214 containerd[1550]: time="2025-01-13T20:34:28.170191287Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 716.918915ms" Jan 13 20:34:28.175787 containerd[1550]: time="2025-01-13T20:34:28.175087271Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:34:28.185541 containerd[1550]: time="2025-01-13T20:34:28.185469508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:34:28.195014 containerd[1550]: time="2025-01-13T20:34:28.194799979Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 741.461208ms" Jan 13 20:34:28.195262 containerd[1550]: time="2025-01-13T20:34:28.195243020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 739.515899ms" Jan 13 20:34:28.424852 
kubelet[2415]: E0113 20:34:28.424783 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="1.6s" Jan 13 20:34:28.570659 kubelet[2415]: W0113 20:34:28.570596 2415 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Jan 13 20:34:28.570659 kubelet[2415]: E0113 20:34:28.570637 2415 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:34:28.571375 kubelet[2415]: I0113 20:34:28.571244 2415 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:34:28.571959 kubelet[2415]: E0113 20:34:28.571948 2415 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Jan 13 20:34:28.583053 containerd[1550]: time="2025-01-13T20:34:28.582925141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:28.583271 containerd[1550]: time="2025-01-13T20:34:28.583005528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:28.583271 containerd[1550]: time="2025-01-13T20:34:28.583024016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:28.584752 containerd[1550]: time="2025-01-13T20:34:28.583646983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:28.584958 containerd[1550]: time="2025-01-13T20:34:28.582832603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:28.584958 containerd[1550]: time="2025-01-13T20:34:28.584887394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:28.584958 containerd[1550]: time="2025-01-13T20:34:28.584895380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:28.587602 containerd[1550]: time="2025-01-13T20:34:28.585124589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:28.588250 containerd[1550]: time="2025-01-13T20:34:28.588128001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:28.588250 containerd[1550]: time="2025-01-13T20:34:28.588161281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:28.588250 containerd[1550]: time="2025-01-13T20:34:28.588179810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:28.588250 containerd[1550]: time="2025-01-13T20:34:28.588217989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:28.602869 systemd[1]: Started cri-containerd-6a3b075c0131338262c133786a0787f1d561c988ae974905c903fecea593a51e.scope - libcontainer container 6a3b075c0131338262c133786a0787f1d561c988ae974905c903fecea593a51e. Jan 13 20:34:28.603730 systemd[1]: Started cri-containerd-b8c36cd6323de464509d311cc96f1c5c5ddcfb106d85f9e34a27c072592890f3.scope - libcontainer container b8c36cd6323de464509d311cc96f1c5c5ddcfb106d85f9e34a27c072592890f3. Jan 13 20:34:28.606237 systemd[1]: Started cri-containerd-b8de200e81fccd64f9c861bca91b5aa91222885bcf7d4cf13e0a190ea58624f4.scope - libcontainer container b8de200e81fccd64f9c861bca91b5aa91222885bcf7d4cf13e0a190ea58624f4. Jan 13 20:34:28.645370 containerd[1550]: time="2025-01-13T20:34:28.645264298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:621097dad26e8d622aa6fb5eebcd6bc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8de200e81fccd64f9c861bca91b5aa91222885bcf7d4cf13e0a190ea58624f4\"" Jan 13 20:34:28.647074 containerd[1550]: time="2025-01-13T20:34:28.647049547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8c36cd6323de464509d311cc96f1c5c5ddcfb106d85f9e34a27c072592890f3\"" Jan 13 20:34:28.650593 containerd[1550]: time="2025-01-13T20:34:28.650576464Z" level=info msg="CreateContainer within sandbox \"b8de200e81fccd64f9c861bca91b5aa91222885bcf7d4cf13e0a190ea58624f4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:34:28.650746 containerd[1550]: time="2025-01-13T20:34:28.650731230Z" level=info msg="CreateContainer within sandbox \"b8c36cd6323de464509d311cc96f1c5c5ddcfb106d85f9e34a27c072592890f3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:34:28.656073 containerd[1550]: time="2025-01-13T20:34:28.656051665Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a3b075c0131338262c133786a0787f1d561c988ae974905c903fecea593a51e\"" Jan 13 20:34:28.657105 containerd[1550]: time="2025-01-13T20:34:28.657083063Z" level=info msg="CreateContainer within sandbox \"6a3b075c0131338262c133786a0787f1d561c988ae974905c903fecea593a51e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:34:28.663429 containerd[1550]: time="2025-01-13T20:34:28.663400097Z" level=info msg="CreateContainer within sandbox \"b8de200e81fccd64f9c861bca91b5aa91222885bcf7d4cf13e0a190ea58624f4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"faba8645802cab375ce23740a33cc9872a8b97067910610332de19a20e0f7301\"" Jan 13 20:34:28.664444 containerd[1550]: time="2025-01-13T20:34:28.663951014Z" level=info msg="StartContainer for \"faba8645802cab375ce23740a33cc9872a8b97067910610332de19a20e0f7301\"" Jan 13 20:34:28.665504 containerd[1550]: time="2025-01-13T20:34:28.665492308Z" level=info msg="CreateContainer within sandbox \"b8c36cd6323de464509d311cc96f1c5c5ddcfb106d85f9e34a27c072592890f3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"71a34ee191f5d1173ba2664b2eaefe822c1f5f386e8177f2c1890a001a096d40\"" Jan 13 20:34:28.665903 containerd[1550]: time="2025-01-13T20:34:28.665886742Z" level=info msg="StartContainer for \"71a34ee191f5d1173ba2664b2eaefe822c1f5f386e8177f2c1890a001a096d40\"" Jan 13 20:34:28.669538 containerd[1550]: time="2025-01-13T20:34:28.669521476Z" level=info msg="CreateContainer within sandbox \"6a3b075c0131338262c133786a0787f1d561c988ae974905c903fecea593a51e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e8011ad33a4724de7df236cbc07bf7830acc5ae93974a0e2440a4abe2c6cc291\"" Jan 13 20:34:28.669755 containerd[1550]: time="2025-01-13T20:34:28.669738712Z" level=info 
msg="StartContainer for \"e8011ad33a4724de7df236cbc07bf7830acc5ae93974a0e2440a4abe2c6cc291\"" Jan 13 20:34:28.686858 systemd[1]: Started cri-containerd-faba8645802cab375ce23740a33cc9872a8b97067910610332de19a20e0f7301.scope - libcontainer container faba8645802cab375ce23740a33cc9872a8b97067910610332de19a20e0f7301. Jan 13 20:34:28.689803 systemd[1]: Started cri-containerd-e8011ad33a4724de7df236cbc07bf7830acc5ae93974a0e2440a4abe2c6cc291.scope - libcontainer container e8011ad33a4724de7df236cbc07bf7830acc5ae93974a0e2440a4abe2c6cc291. Jan 13 20:34:28.693051 systemd[1]: Started cri-containerd-71a34ee191f5d1173ba2664b2eaefe822c1f5f386e8177f2c1890a001a096d40.scope - libcontainer container 71a34ee191f5d1173ba2664b2eaefe822c1f5f386e8177f2c1890a001a096d40. Jan 13 20:34:28.748562 containerd[1550]: time="2025-01-13T20:34:28.748539639Z" level=info msg="StartContainer for \"71a34ee191f5d1173ba2664b2eaefe822c1f5f386e8177f2c1890a001a096d40\" returns successfully" Jan 13 20:34:28.749036 containerd[1550]: time="2025-01-13T20:34:28.748553886Z" level=info msg="StartContainer for \"e8011ad33a4724de7df236cbc07bf7830acc5ae93974a0e2440a4abe2c6cc291\" returns successfully" Jan 13 20:34:28.749036 containerd[1550]: time="2025-01-13T20:34:28.748550063Z" level=info msg="StartContainer for \"faba8645802cab375ce23740a33cc9872a8b97067910610332de19a20e0f7301\" returns successfully" Jan 13 20:34:29.147693 kubelet[2415]: E0113 20:34:29.147669 2415 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:34:30.027479 kubelet[2415]: E0113 20:34:30.027455 2415 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not 
found" node="localhost" Jan 13 20:34:30.173162 kubelet[2415]: I0113 20:34:30.173142 2415 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:34:30.181976 kubelet[2415]: I0113 20:34:30.181885 2415 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 13 20:34:30.181976 kubelet[2415]: E0113 20:34:30.181908 2415 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 13 20:34:30.186428 kubelet[2415]: E0113 20:34:30.186393 2415 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:30.286886 kubelet[2415]: E0113 20:34:30.286867 2415 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:30.387904 kubelet[2415]: E0113 20:34:30.387875 2415 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:30.488540 kubelet[2415]: E0113 20:34:30.488512 2415 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:31.011098 kubelet[2415]: I0113 20:34:31.011076 2415 apiserver.go:52] "Watching apiserver" Jan 13 20:34:31.019125 kubelet[2415]: I0113 20:34:31.019099 2415 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:34:31.675591 systemd[1]: Reloading requested from client PID 2688 ('systemctl') (unit session-9.scope)... Jan 13 20:34:31.675606 systemd[1]: Reloading... Jan 13 20:34:31.724780 zram_generator::config[2728]: No configuration found. Jan 13 20:34:31.795120 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." 
| grep -Po "inet \K[\d.]+") Jan 13 20:34:31.811348 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:34:31.864739 systemd[1]: Reloading finished in 188 ms. Jan 13 20:34:31.888309 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:34:31.899423 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:34:31.899556 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:34:31.902992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:34:32.101714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:34:32.105077 (kubelet)[2793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:34:32.150060 kubelet[2793]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:34:32.150060 kubelet[2793]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:34:32.150060 kubelet[2793]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:34:32.150060 kubelet[2793]: I0113 20:34:32.149891 2793 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:34:32.164775 kubelet[2793]: I0113 20:34:32.164749 2793 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:34:32.164775 kubelet[2793]: I0113 20:34:32.164775 2793 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:34:32.164911 kubelet[2793]: I0113 20:34:32.164899 2793 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:34:32.165827 kubelet[2793]: I0113 20:34:32.165644 2793 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:34:32.166799 kubelet[2793]: I0113 20:34:32.166703 2793 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:34:32.181671 kubelet[2793]: E0113 20:34:32.181558 2793 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:34:32.181671 kubelet[2793]: I0113 20:34:32.181582 2793 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:34:32.183739 kubelet[2793]: I0113 20:34:32.183724 2793 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:34:32.183827 kubelet[2793]: I0113 20:34:32.183821 2793 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:34:32.183936 kubelet[2793]: I0113 20:34:32.183909 2793 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:34:32.184074 kubelet[2793]: I0113 20:34:32.183937 2793 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jan 13 20:34:32.184148 kubelet[2793]: I0113 20:34:32.184077 2793 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:34:32.184148 kubelet[2793]: I0113 20:34:32.184085 2793 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:34:32.184148 kubelet[2793]: I0113 20:34:32.184108 2793 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:34:32.184813 kubelet[2793]: I0113 20:34:32.184185 2793 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:34:32.184813 kubelet[2793]: I0113 20:34:32.184196 2793 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:34:32.184813 kubelet[2793]: I0113 20:34:32.184228 2793 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:34:32.184813 kubelet[2793]: I0113 20:34:32.184249 2793 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:34:32.185652 kubelet[2793]: I0113 20:34:32.185521 2793 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:34:32.187642 kubelet[2793]: I0113 20:34:32.187396 2793 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:34:32.201599 kubelet[2793]: I0113 20:34:32.201586 2793 server.go:1269] "Started kubelet" Jan 13 20:34:32.202544 kubelet[2793]: I0113 20:34:32.202535 2793 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:34:32.215199 kubelet[2793]: I0113 20:34:32.215185 2793 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:34:32.215446 kubelet[2793]: E0113 20:34:32.215433 2793 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:32.216283 kubelet[2793]: I0113 20:34:32.216274 2793 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:34:32.216460 kubelet[2793]: I0113 
20:34:32.216391 2793 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:34:32.217496 kubelet[2793]: I0113 20:34:32.217477 2793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:34:32.218037 kubelet[2793]: I0113 20:34:32.218008 2793 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:34:32.218258 kubelet[2793]: I0113 20:34:32.218198 2793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:34:32.218258 kubelet[2793]: I0113 20:34:32.218216 2793 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:34:32.218258 kubelet[2793]: I0113 20:34:32.218229 2793 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:34:32.218334 kubelet[2793]: E0113 20:34:32.218253 2793 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:34:32.221345 kubelet[2793]: I0113 20:34:32.220992 2793 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:34:32.221534 kubelet[2793]: I0113 20:34:32.221493 2793 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:34:32.221657 kubelet[2793]: I0113 20:34:32.221645 2793 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:34:32.221821 kubelet[2793]: I0113 20:34:32.221808 2793 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:34:32.222140 kubelet[2793]: I0113 20:34:32.222126 2793 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:34:32.222206 kubelet[2793]: I0113 20:34:32.222193 2793 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:34:32.224197 kubelet[2793]: I0113 20:34:32.224186 2793 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:34:32.256580 kubelet[2793]: I0113 20:34:32.256567 2793 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:34:32.256580 kubelet[2793]: I0113 20:34:32.256576 2793 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:34:32.256580 kubelet[2793]: I0113 20:34:32.256587 2793 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:34:32.256696 kubelet[2793]: I0113 20:34:32.256663 2793 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:34:32.256696 kubelet[2793]: I0113 20:34:32.256669 2793 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:34:32.256696 kubelet[2793]: I0113 20:34:32.256680 2793 policy_none.go:49] "None policy: Start" Jan 13 20:34:32.257024 kubelet[2793]: I0113 20:34:32.257011 2793 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:34:32.257024 kubelet[2793]: I0113 20:34:32.257024 2793 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:34:32.257122 kubelet[2793]: I0113 20:34:32.257112 2793 state_mem.go:75] "Updated machine memory state" Jan 13 20:34:32.259827 kubelet[2793]: I0113 20:34:32.259738 2793 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:34:32.259863 kubelet[2793]: I0113 20:34:32.259834 2793 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:34:32.259863 kubelet[2793]: I0113 20:34:32.259842 2793 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:34:32.259987 kubelet[2793]: I0113 20:34:32.259977 2793 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:34:32.361560 kubelet[2793]: I0113 20:34:32.361489 2793 kubelet_node_status.go:72] "Attempting to 
register node" node="localhost" Jan 13 20:34:32.366102 kubelet[2793]: I0113 20:34:32.366005 2793 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 13 20:34:32.366102 kubelet[2793]: I0113 20:34:32.366067 2793 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 13 20:34:32.417997 kubelet[2793]: I0113 20:34:32.417975 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:32.418086 kubelet[2793]: I0113 20:34:32.418025 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:32.418086 kubelet[2793]: I0113 20:34:32.418040 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:34:32.418086 kubelet[2793]: I0113 20:34:32.418049 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/621097dad26e8d622aa6fb5eebcd6bc2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"621097dad26e8d622aa6fb5eebcd6bc2\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:34:32.418086 kubelet[2793]: I0113 20:34:32.418062 2793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/621097dad26e8d622aa6fb5eebcd6bc2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"621097dad26e8d622aa6fb5eebcd6bc2\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:34:32.418154 kubelet[2793]: I0113 20:34:32.418096 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:32.418154 kubelet[2793]: I0113 20:34:32.418107 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:32.418154 kubelet[2793]: I0113 20:34:32.418117 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/621097dad26e8d622aa6fb5eebcd6bc2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"621097dad26e8d622aa6fb5eebcd6bc2\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:34:32.418154 kubelet[2793]: I0113 20:34:32.418125 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:33.189069 kubelet[2793]: I0113 20:34:33.188938 2793 apiserver.go:52] "Watching 
apiserver" Jan 13 20:34:33.218081 kubelet[2793]: I0113 20:34:33.217914 2793 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:34:33.277420 kubelet[2793]: I0113 20:34:33.277360 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.277347181 podStartE2EDuration="1.277347181s" podCreationTimestamp="2025-01-13 20:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:34:33.261105137 +0000 UTC m=+1.144504988" watchObservedRunningTime="2025-01-13 20:34:33.277347181 +0000 UTC m=+1.160747024" Jan 13 20:34:33.303043 kubelet[2793]: I0113 20:34:33.302957 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.30294304 podStartE2EDuration="1.30294304s" podCreationTimestamp="2025-01-13 20:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:34:33.277834269 +0000 UTC m=+1.161234120" watchObservedRunningTime="2025-01-13 20:34:33.30294304 +0000 UTC m=+1.186342879" Jan 13 20:34:33.312527 kubelet[2793]: I0113 20:34:33.312397 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.312382909 podStartE2EDuration="1.312382909s" podCreationTimestamp="2025-01-13 20:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:34:33.303224929 +0000 UTC m=+1.186624780" watchObservedRunningTime="2025-01-13 20:34:33.312382909 +0000 UTC m=+1.195782751" Jan 13 20:34:36.544036 sudo[1854]: pam_unix(sudo:session): session closed for user root Jan 13 20:34:36.545150 sshd[1853]: Connection closed by 
147.75.109.163 port 47102 Jan 13 20:34:36.545872 sshd-session[1851]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:36.547980 systemd[1]: sshd@6-139.178.70.110:22-147.75.109.163:47102.service: Deactivated successfully. Jan 13 20:34:36.548987 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:34:36.549097 systemd[1]: session-9.scope: Consumed 2.987s CPU time, 136.0M memory peak, 0B memory swap peak. Jan 13 20:34:36.549411 systemd-logind[1524]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:34:36.550182 systemd-logind[1524]: Removed session 9. Jan 13 20:34:37.878261 kubelet[2793]: I0113 20:34:37.878234 2793 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:34:37.878498 containerd[1550]: time="2025-01-13T20:34:37.878424668Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:34:37.878624 kubelet[2793]: I0113 20:34:37.878507 2793 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:34:38.770836 systemd[1]: Created slice kubepods-besteffort-pod08fefe7b_18b1_4f95_80b6_301a92a2de5a.slice - libcontainer container kubepods-besteffort-pod08fefe7b_18b1_4f95_80b6_301a92a2de5a.slice. 
Jan 13 20:34:38.855156 kubelet[2793]: I0113 20:34:38.855077 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/08fefe7b-18b1-4f95-80b6-301a92a2de5a-kube-proxy\") pod \"kube-proxy-gzczg\" (UID: \"08fefe7b-18b1-4f95-80b6-301a92a2de5a\") " pod="kube-system/kube-proxy-gzczg" Jan 13 20:34:38.855156 kubelet[2793]: I0113 20:34:38.855108 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08fefe7b-18b1-4f95-80b6-301a92a2de5a-xtables-lock\") pod \"kube-proxy-gzczg\" (UID: \"08fefe7b-18b1-4f95-80b6-301a92a2de5a\") " pod="kube-system/kube-proxy-gzczg" Jan 13 20:34:38.855156 kubelet[2793]: I0113 20:34:38.855119 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08fefe7b-18b1-4f95-80b6-301a92a2de5a-lib-modules\") pod \"kube-proxy-gzczg\" (UID: \"08fefe7b-18b1-4f95-80b6-301a92a2de5a\") " pod="kube-system/kube-proxy-gzczg" Jan 13 20:34:38.855156 kubelet[2793]: I0113 20:34:38.855129 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-785bw\" (UniqueName: \"kubernetes.io/projected/08fefe7b-18b1-4f95-80b6-301a92a2de5a-kube-api-access-785bw\") pod \"kube-proxy-gzczg\" (UID: \"08fefe7b-18b1-4f95-80b6-301a92a2de5a\") " pod="kube-system/kube-proxy-gzczg" Jan 13 20:34:38.933271 systemd[1]: Created slice kubepods-besteffort-pod2c5bcfb2_2d90_435e_ba73_ad617a6a3650.slice - libcontainer container kubepods-besteffort-pod2c5bcfb2_2d90_435e_ba73_ad617a6a3650.slice. 
Jan 13 20:34:38.959621 kubelet[2793]: I0113 20:34:38.959478 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2c5bcfb2-2d90-435e-ba73-ad617a6a3650-var-lib-calico\") pod \"tigera-operator-76c4976dd7-hvjwd\" (UID: \"2c5bcfb2-2d90-435e-ba73-ad617a6a3650\") " pod="tigera-operator/tigera-operator-76c4976dd7-hvjwd" Jan 13 20:34:38.959621 kubelet[2793]: I0113 20:34:38.959509 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwfv7\" (UniqueName: \"kubernetes.io/projected/2c5bcfb2-2d90-435e-ba73-ad617a6a3650-kube-api-access-cwfv7\") pod \"tigera-operator-76c4976dd7-hvjwd\" (UID: \"2c5bcfb2-2d90-435e-ba73-ad617a6a3650\") " pod="tigera-operator/tigera-operator-76c4976dd7-hvjwd" Jan 13 20:34:39.078567 containerd[1550]: time="2025-01-13T20:34:39.078520860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gzczg,Uid:08fefe7b-18b1-4f95-80b6-301a92a2de5a,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:39.142368 containerd[1550]: time="2025-01-13T20:34:39.142276880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:39.142468 containerd[1550]: time="2025-01-13T20:34:39.142327174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:39.142561 containerd[1550]: time="2025-01-13T20:34:39.142481136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:39.143380 containerd[1550]: time="2025-01-13T20:34:39.143334016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:39.161873 systemd[1]: Started cri-containerd-92a4ff217da75b749ceaa92bc3020d4381580df3eaef3905d25b80cad3e7b602.scope - libcontainer container 92a4ff217da75b749ceaa92bc3020d4381580df3eaef3905d25b80cad3e7b602. Jan 13 20:34:39.176296 containerd[1550]: time="2025-01-13T20:34:39.176189648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gzczg,Uid:08fefe7b-18b1-4f95-80b6-301a92a2de5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"92a4ff217da75b749ceaa92bc3020d4381580df3eaef3905d25b80cad3e7b602\"" Jan 13 20:34:39.177994 containerd[1550]: time="2025-01-13T20:34:39.177920335Z" level=info msg="CreateContainer within sandbox \"92a4ff217da75b749ceaa92bc3020d4381580df3eaef3905d25b80cad3e7b602\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:34:39.186150 containerd[1550]: time="2025-01-13T20:34:39.186116033Z" level=info msg="CreateContainer within sandbox \"92a4ff217da75b749ceaa92bc3020d4381580df3eaef3905d25b80cad3e7b602\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f06cfd219e91d6af81a3cf604e26f7445815c53aba3a11190c4719b229d47934\"" Jan 13 20:34:39.186783 containerd[1550]: time="2025-01-13T20:34:39.186659576Z" level=info msg="StartContainer for \"f06cfd219e91d6af81a3cf604e26f7445815c53aba3a11190c4719b229d47934\"" Jan 13 20:34:39.207880 systemd[1]: Started cri-containerd-f06cfd219e91d6af81a3cf604e26f7445815c53aba3a11190c4719b229d47934.scope - libcontainer container f06cfd219e91d6af81a3cf604e26f7445815c53aba3a11190c4719b229d47934. 
Jan 13 20:34:39.225168 containerd[1550]: time="2025-01-13T20:34:39.225147918Z" level=info msg="StartContainer for \"f06cfd219e91d6af81a3cf604e26f7445815c53aba3a11190c4719b229d47934\" returns successfully"
Jan 13 20:34:39.235149 containerd[1550]: time="2025-01-13T20:34:39.235128220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-hvjwd,Uid:2c5bcfb2-2d90-435e-ba73-ad617a6a3650,Namespace:tigera-operator,Attempt:0,}"
Jan 13 20:34:39.332802 containerd[1550]: time="2025-01-13T20:34:39.332254840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:34:39.332802 containerd[1550]: time="2025-01-13T20:34:39.332299713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:34:39.332802 containerd[1550]: time="2025-01-13T20:34:39.332311897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:34:39.332942 containerd[1550]: time="2025-01-13T20:34:39.332463301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:34:39.347884 systemd[1]: Started cri-containerd-becd608ba8b40933928c6666830bedb1a1955a692abc0b8f2a70cc52bc2d3b1f.scope - libcontainer container becd608ba8b40933928c6666830bedb1a1955a692abc0b8f2a70cc52bc2d3b1f.
Jan 13 20:34:39.377447 containerd[1550]: time="2025-01-13T20:34:39.377417543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-hvjwd,Uid:2c5bcfb2-2d90-435e-ba73-ad617a6a3650,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"becd608ba8b40933928c6666830bedb1a1955a692abc0b8f2a70cc52bc2d3b1f\""
Jan 13 20:34:39.378539 containerd[1550]: time="2025-01-13T20:34:39.378462394Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 13 20:34:39.935923 kubelet[2793]: I0113 20:34:39.935837 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gzczg" podStartSLOduration=1.935826327 podStartE2EDuration="1.935826327s" podCreationTimestamp="2025-01-13 20:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:34:39.282601313 +0000 UTC m=+7.166001156" watchObservedRunningTime="2025-01-13 20:34:39.935826327 +0000 UTC m=+7.819226172"
Jan 13 20:34:39.968145 systemd[1]: run-containerd-runc-k8s.io-92a4ff217da75b749ceaa92bc3020d4381580df3eaef3905d25b80cad3e7b602-runc.2DjJtJ.mount: Deactivated successfully.
Jan 13 20:34:40.958262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3004196652.mount: Deactivated successfully.
Jan 13 20:34:41.300579 containerd[1550]: time="2025-01-13T20:34:41.300353836Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:41.300917 containerd[1550]: time="2025-01-13T20:34:41.300895303Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764309"
Jan 13 20:34:41.301266 containerd[1550]: time="2025-01-13T20:34:41.301133994Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:41.302324 containerd[1550]: time="2025-01-13T20:34:41.302309614Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:41.302835 containerd[1550]: time="2025-01-13T20:34:41.302819655Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.924162855s"
Jan 13 20:34:41.302869 containerd[1550]: time="2025-01-13T20:34:41.302834886Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 13 20:34:41.304276 containerd[1550]: time="2025-01-13T20:34:41.304258262Z" level=info msg="CreateContainer within sandbox \"becd608ba8b40933928c6666830bedb1a1955a692abc0b8f2a70cc52bc2d3b1f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 13 20:34:41.316288 containerd[1550]: time="2025-01-13T20:34:41.316260182Z" level=info msg="CreateContainer within sandbox \"becd608ba8b40933928c6666830bedb1a1955a692abc0b8f2a70cc52bc2d3b1f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d921dd24d5383bcabe1f122e2b8f8f789c490afcc903a7d213a11608ec0ef315\""
Jan 13 20:34:41.316821 containerd[1550]: time="2025-01-13T20:34:41.316750634Z" level=info msg="StartContainer for \"d921dd24d5383bcabe1f122e2b8f8f789c490afcc903a7d213a11608ec0ef315\""
Jan 13 20:34:41.337847 systemd[1]: Started cri-containerd-d921dd24d5383bcabe1f122e2b8f8f789c490afcc903a7d213a11608ec0ef315.scope - libcontainer container d921dd24d5383bcabe1f122e2b8f8f789c490afcc903a7d213a11608ec0ef315.
Jan 13 20:34:41.362186 containerd[1550]: time="2025-01-13T20:34:41.362155572Z" level=info msg="StartContainer for \"d921dd24d5383bcabe1f122e2b8f8f789c490afcc903a7d213a11608ec0ef315\" returns successfully"
Jan 13 20:34:43.232174 kubelet[2793]: I0113 20:34:43.231703 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-hvjwd" podStartSLOduration=3.306403781 podStartE2EDuration="5.231689005s" podCreationTimestamp="2025-01-13 20:34:38 +0000 UTC" firstStartedPulling="2025-01-13 20:34:39.378194058 +0000 UTC m=+7.261593901" lastFinishedPulling="2025-01-13 20:34:41.303479283 +0000 UTC m=+9.186879125" observedRunningTime="2025-01-13 20:34:42.272920291 +0000 UTC m=+10.156320133" watchObservedRunningTime="2025-01-13 20:34:43.231689005 +0000 UTC m=+11.115088856"
Jan 13 20:34:44.184626 systemd[1]: Created slice kubepods-besteffort-poddd99b580_ac8b_47fa_823e_e79784d16d2a.slice - libcontainer container kubepods-besteffort-poddd99b580_ac8b_47fa_823e_e79784d16d2a.slice.
Jan 13 20:34:44.195348 kubelet[2793]: I0113 20:34:44.194618 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dd99b580-ac8b-47fa-823e-e79784d16d2a-typha-certs\") pod \"calico-typha-59dffbc888-l62qr\" (UID: \"dd99b580-ac8b-47fa-823e-e79784d16d2a\") " pod="calico-system/calico-typha-59dffbc888-l62qr"
Jan 13 20:34:44.196022 kubelet[2793]: I0113 20:34:44.195476 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjcqt\" (UniqueName: \"kubernetes.io/projected/dd99b580-ac8b-47fa-823e-e79784d16d2a-kube-api-access-jjcqt\") pod \"calico-typha-59dffbc888-l62qr\" (UID: \"dd99b580-ac8b-47fa-823e-e79784d16d2a\") " pod="calico-system/calico-typha-59dffbc888-l62qr"
Jan 13 20:34:44.196022 kubelet[2793]: I0113 20:34:44.195499 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd99b580-ac8b-47fa-823e-e79784d16d2a-tigera-ca-bundle\") pod \"calico-typha-59dffbc888-l62qr\" (UID: \"dd99b580-ac8b-47fa-823e-e79784d16d2a\") " pod="calico-system/calico-typha-59dffbc888-l62qr"
Jan 13 20:34:44.288727 systemd[1]: Created slice kubepods-besteffort-podf82f1236_af1c_4a0a_bfa7_be59ec525b6a.slice - libcontainer container kubepods-besteffort-podf82f1236_af1c_4a0a_bfa7_be59ec525b6a.slice.
Jan 13 20:34:44.296754 kubelet[2793]: I0113 20:34:44.295927 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f82f1236-af1c-4a0a-bfa7-be59ec525b6a-lib-modules\") pod \"calico-node-9pxd2\" (UID: \"f82f1236-af1c-4a0a-bfa7-be59ec525b6a\") " pod="calico-system/calico-node-9pxd2"
Jan 13 20:34:44.296754 kubelet[2793]: I0113 20:34:44.295952 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f82f1236-af1c-4a0a-bfa7-be59ec525b6a-policysync\") pod \"calico-node-9pxd2\" (UID: \"f82f1236-af1c-4a0a-bfa7-be59ec525b6a\") " pod="calico-system/calico-node-9pxd2"
Jan 13 20:34:44.296754 kubelet[2793]: I0113 20:34:44.295961 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f82f1236-af1c-4a0a-bfa7-be59ec525b6a-cni-log-dir\") pod \"calico-node-9pxd2\" (UID: \"f82f1236-af1c-4a0a-bfa7-be59ec525b6a\") " pod="calico-system/calico-node-9pxd2"
Jan 13 20:34:44.296754 kubelet[2793]: I0113 20:34:44.295983 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f82f1236-af1c-4a0a-bfa7-be59ec525b6a-tigera-ca-bundle\") pod \"calico-node-9pxd2\" (UID: \"f82f1236-af1c-4a0a-bfa7-be59ec525b6a\") " pod="calico-system/calico-node-9pxd2"
Jan 13 20:34:44.296754 kubelet[2793]: I0113 20:34:44.296000 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f82f1236-af1c-4a0a-bfa7-be59ec525b6a-node-certs\") pod \"calico-node-9pxd2\" (UID: \"f82f1236-af1c-4a0a-bfa7-be59ec525b6a\") " pod="calico-system/calico-node-9pxd2"
Jan 13 20:34:44.297081 kubelet[2793]: I0113 20:34:44.296016 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f82f1236-af1c-4a0a-bfa7-be59ec525b6a-flexvol-driver-host\") pod \"calico-node-9pxd2\" (UID: \"f82f1236-af1c-4a0a-bfa7-be59ec525b6a\") " pod="calico-system/calico-node-9pxd2"
Jan 13 20:34:44.297081 kubelet[2793]: I0113 20:34:44.296042 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f82f1236-af1c-4a0a-bfa7-be59ec525b6a-var-run-calico\") pod \"calico-node-9pxd2\" (UID: \"f82f1236-af1c-4a0a-bfa7-be59ec525b6a\") " pod="calico-system/calico-node-9pxd2"
Jan 13 20:34:44.297081 kubelet[2793]: I0113 20:34:44.296053 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f82f1236-af1c-4a0a-bfa7-be59ec525b6a-cni-net-dir\") pod \"calico-node-9pxd2\" (UID: \"f82f1236-af1c-4a0a-bfa7-be59ec525b6a\") " pod="calico-system/calico-node-9pxd2"
Jan 13 20:34:44.297081 kubelet[2793]: I0113 20:34:44.296063 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f82f1236-af1c-4a0a-bfa7-be59ec525b6a-xtables-lock\") pod \"calico-node-9pxd2\" (UID: \"f82f1236-af1c-4a0a-bfa7-be59ec525b6a\") " pod="calico-system/calico-node-9pxd2"
Jan 13 20:34:44.297081 kubelet[2793]: I0113 20:34:44.296071 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f82f1236-af1c-4a0a-bfa7-be59ec525b6a-cni-bin-dir\") pod \"calico-node-9pxd2\" (UID: \"f82f1236-af1c-4a0a-bfa7-be59ec525b6a\") " pod="calico-system/calico-node-9pxd2"
Jan 13 20:34:44.297164 kubelet[2793]: I0113 20:34:44.296080 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f82f1236-af1c-4a0a-bfa7-be59ec525b6a-var-lib-calico\") pod \"calico-node-9pxd2\" (UID: \"f82f1236-af1c-4a0a-bfa7-be59ec525b6a\") " pod="calico-system/calico-node-9pxd2"
Jan 13 20:34:44.297164 kubelet[2793]: I0113 20:34:44.296089 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8jhd\" (UniqueName: \"kubernetes.io/projected/f82f1236-af1c-4a0a-bfa7-be59ec525b6a-kube-api-access-s8jhd\") pod \"calico-node-9pxd2\" (UID: \"f82f1236-af1c-4a0a-bfa7-be59ec525b6a\") " pod="calico-system/calico-node-9pxd2"
Jan 13 20:34:44.387096 kubelet[2793]: E0113 20:34:44.387060 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9s7kx" podUID="33616460-2f9b-481c-b406-8a3838ed8c9e"
Jan 13 20:34:44.397240 kubelet[2793]: I0113 20:34:44.396447 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/33616460-2f9b-481c-b406-8a3838ed8c9e-varrun\") pod \"csi-node-driver-9s7kx\" (UID: \"33616460-2f9b-481c-b406-8a3838ed8c9e\") " pod="calico-system/csi-node-driver-9s7kx"
Jan 13 20:34:44.397240 kubelet[2793]: I0113 20:34:44.396484 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/33616460-2f9b-481c-b406-8a3838ed8c9e-socket-dir\") pod \"csi-node-driver-9s7kx\" (UID: \"33616460-2f9b-481c-b406-8a3838ed8c9e\") " pod="calico-system/csi-node-driver-9s7kx"
Jan 13 20:34:44.397240 kubelet[2793]: I0113 20:34:44.396516 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4hrs\" (UniqueName: \"kubernetes.io/projected/33616460-2f9b-481c-b406-8a3838ed8c9e-kube-api-access-j4hrs\") pod \"csi-node-driver-9s7kx\" (UID: \"33616460-2f9b-481c-b406-8a3838ed8c9e\") " pod="calico-system/csi-node-driver-9s7kx"
Jan 13 20:34:44.397240 kubelet[2793]: I0113 20:34:44.396545 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/33616460-2f9b-481c-b406-8a3838ed8c9e-kubelet-dir\") pod \"csi-node-driver-9s7kx\" (UID: \"33616460-2f9b-481c-b406-8a3838ed8c9e\") " pod="calico-system/csi-node-driver-9s7kx"
Jan 13 20:34:44.397240 kubelet[2793]: I0113 20:34:44.396563 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/33616460-2f9b-481c-b406-8a3838ed8c9e-registration-dir\") pod \"csi-node-driver-9s7kx\" (UID: \"33616460-2f9b-481c-b406-8a3838ed8c9e\") " pod="calico-system/csi-node-driver-9s7kx"
Jan 13 20:34:44.404964 kubelet[2793]: E0113 20:34:44.404858 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.404964 kubelet[2793]: W0113 20:34:44.404877 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.404964 kubelet[2793]: E0113 20:34:44.404891 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.406394 kubelet[2793]: E0113 20:34:44.406384 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.406465 kubelet[2793]: W0113 20:34:44.406456 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.406513 kubelet[2793]: E0113 20:34:44.406499 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.489169 containerd[1550]: time="2025-01-13T20:34:44.489105580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59dffbc888-l62qr,Uid:dd99b580-ac8b-47fa-823e-e79784d16d2a,Namespace:calico-system,Attempt:0,}"
Jan 13 20:34:44.497705 kubelet[2793]: E0113 20:34:44.497691 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.497850 kubelet[2793]: W0113 20:34:44.497840 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.497947 kubelet[2793]: E0113 20:34:44.497891 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.498105 kubelet[2793]: E0113 20:34:44.498099 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.498154 kubelet[2793]: W0113 20:34:44.498137 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.498234 kubelet[2793]: E0113 20:34:44.498212 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.498399 kubelet[2793]: E0113 20:34:44.498385 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.498399 kubelet[2793]: W0113 20:34:44.498394 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.498450 kubelet[2793]: E0113 20:34:44.498406 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.498605 kubelet[2793]: E0113 20:34:44.498595 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.498605 kubelet[2793]: W0113 20:34:44.498604 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.498750 kubelet[2793]: E0113 20:34:44.498616 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.498750 kubelet[2793]: E0113 20:34:44.498709 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.498750 kubelet[2793]: W0113 20:34:44.498713 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.498750 kubelet[2793]: E0113 20:34:44.498721 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.499231 kubelet[2793]: E0113 20:34:44.498854 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.499231 kubelet[2793]: W0113 20:34:44.498859 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.499231 kubelet[2793]: E0113 20:34:44.498866 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.499231 kubelet[2793]: E0113 20:34:44.499131 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.499231 kubelet[2793]: W0113 20:34:44.499138 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.499231 kubelet[2793]: E0113 20:34:44.499149 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.499608 kubelet[2793]: E0113 20:34:44.499470 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.499608 kubelet[2793]: W0113 20:34:44.499476 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.499608 kubelet[2793]: E0113 20:34:44.499486 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.499803 kubelet[2793]: E0113 20:34:44.499694 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.499803 kubelet[2793]: W0113 20:34:44.499699 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.500332 kubelet[2793]: E0113 20:34:44.499962 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.500332 kubelet[2793]: E0113 20:34:44.500317 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.500332 kubelet[2793]: W0113 20:34:44.500322 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.500414 kubelet[2793]: E0113 20:34:44.500376 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.500462 kubelet[2793]: E0113 20:34:44.500455 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.500462 kubelet[2793]: W0113 20:34:44.500461 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.500568 kubelet[2793]: E0113 20:34:44.500532 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.500729 kubelet[2793]: E0113 20:34:44.500634 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.500729 kubelet[2793]: W0113 20:34:44.500640 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.500729 kubelet[2793]: E0113 20:34:44.500648 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.500820 kubelet[2793]: E0113 20:34:44.500739 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.500820 kubelet[2793]: W0113 20:34:44.500744 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.500820 kubelet[2793]: E0113 20:34:44.500748 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.501892 kubelet[2793]: E0113 20:34:44.500840 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.501892 kubelet[2793]: W0113 20:34:44.500844 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.501892 kubelet[2793]: E0113 20:34:44.500849 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.501892 kubelet[2793]: E0113 20:34:44.500937 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.501892 kubelet[2793]: W0113 20:34:44.500941 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.501892 kubelet[2793]: E0113 20:34:44.500945 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.501892 kubelet[2793]: E0113 20:34:44.501309 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.501892 kubelet[2793]: W0113 20:34:44.501368 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.501892 kubelet[2793]: E0113 20:34:44.501380 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.501892 kubelet[2793]: E0113 20:34:44.501489 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.502061 kubelet[2793]: W0113 20:34:44.501494 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.502061 kubelet[2793]: E0113 20:34:44.501508 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.502061 kubelet[2793]: E0113 20:34:44.501802 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.502061 kubelet[2793]: W0113 20:34:44.501809 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.502061 kubelet[2793]: E0113 20:34:44.501848 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.502429 kubelet[2793]: E0113 20:34:44.502174 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.502429 kubelet[2793]: W0113 20:34:44.502180 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.502429 kubelet[2793]: E0113 20:34:44.502195 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.502429 kubelet[2793]: E0113 20:34:44.502283 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.502429 kubelet[2793]: W0113 20:34:44.502287 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.502429 kubelet[2793]: E0113 20:34:44.502299 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.502645 kubelet[2793]: E0113 20:34:44.502609 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.502645 kubelet[2793]: W0113 20:34:44.502615 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.502645 kubelet[2793]: E0113 20:34:44.502621 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.502787 kubelet[2793]: E0113 20:34:44.502741 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.502851 kubelet[2793]: W0113 20:34:44.502836 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.503041 kubelet[2793]: E0113 20:34:44.503027 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.503285 kubelet[2793]: E0113 20:34:44.503274 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.503285 kubelet[2793]: W0113 20:34:44.503283 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.503328 kubelet[2793]: E0113 20:34:44.503291 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.503674 kubelet[2793]: E0113 20:34:44.503663 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.503674 kubelet[2793]: W0113 20:34:44.503671 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.503718 kubelet[2793]: E0113 20:34:44.503678 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.505381 kubelet[2793]: E0113 20:34:44.505316 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.505381 kubelet[2793]: W0113 20:34:44.505323 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.505381 kubelet[2793]: E0113 20:34:44.505330 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.509110 kubelet[2793]: E0113 20:34:44.508912 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:34:44.509110 kubelet[2793]: W0113 20:34:44.508921 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:34:44.509110 kubelet[2793]: E0113 20:34:44.509018 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:34:44.509313 containerd[1550]: time="2025-01-13T20:34:44.509257396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:34:44.509359 containerd[1550]: time="2025-01-13T20:34:44.509340538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:34:44.509385 containerd[1550]: time="2025-01-13T20:34:44.509365696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:34:44.509741 containerd[1550]: time="2025-01-13T20:34:44.509685711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:34:44.524864 systemd[1]: Started cri-containerd-0067bb8a69c40cad60f7a3d6b44d60d73a9f8443a06a692c5302061743c5e17c.scope - libcontainer container 0067bb8a69c40cad60f7a3d6b44d60d73a9f8443a06a692c5302061743c5e17c.
Jan 13 20:34:44.555617 containerd[1550]: time="2025-01-13T20:34:44.555576799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59dffbc888-l62qr,Uid:dd99b580-ac8b-47fa-823e-e79784d16d2a,Namespace:calico-system,Attempt:0,} returns sandbox id \"0067bb8a69c40cad60f7a3d6b44d60d73a9f8443a06a692c5302061743c5e17c\"" Jan 13 20:34:44.556809 containerd[1550]: time="2025-01-13T20:34:44.556797564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 20:34:44.591538 containerd[1550]: time="2025-01-13T20:34:44.591506061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9pxd2,Uid:f82f1236-af1c-4a0a-bfa7-be59ec525b6a,Namespace:calico-system,Attempt:0,}" Jan 13 20:34:44.776948 containerd[1550]: time="2025-01-13T20:34:44.776674193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:44.776948 containerd[1550]: time="2025-01-13T20:34:44.776712383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:44.776948 containerd[1550]: time="2025-01-13T20:34:44.776722214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:44.776948 containerd[1550]: time="2025-01-13T20:34:44.776833369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:44.793860 systemd[1]: Started cri-containerd-8d5b75b11619e951fee45353b5533f356a7e5db64338aa368d470ba165c40682.scope - libcontainer container 8d5b75b11619e951fee45353b5533f356a7e5db64338aa368d470ba165c40682. 
Jan 13 20:34:44.810034 containerd[1550]: time="2025-01-13T20:34:44.809186597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9pxd2,Uid:f82f1236-af1c-4a0a-bfa7-be59ec525b6a,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d5b75b11619e951fee45353b5533f356a7e5db64338aa368d470ba165c40682\"" Jan 13 20:34:45.842780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603939131.mount: Deactivated successfully. Jan 13 20:34:46.291179 kubelet[2793]: E0113 20:34:46.291152 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9s7kx" podUID="33616460-2f9b-481c-b406-8a3838ed8c9e" Jan 13 20:34:46.330140 containerd[1550]: time="2025-01-13T20:34:46.330117938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:46.330792 containerd[1550]: time="2025-01-13T20:34:46.330741006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 13 20:34:46.331240 containerd[1550]: time="2025-01-13T20:34:46.331219664Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:46.332148 containerd[1550]: time="2025-01-13T20:34:46.332126631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:46.332616 containerd[1550]: time="2025-01-13T20:34:46.332524413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id 
\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.7755644s" Jan 13 20:34:46.332616 containerd[1550]: time="2025-01-13T20:34:46.332541985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 13 20:34:46.333353 containerd[1550]: time="2025-01-13T20:34:46.333338322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 20:34:46.339341 containerd[1550]: time="2025-01-13T20:34:46.339327128Z" level=info msg="CreateContainer within sandbox \"0067bb8a69c40cad60f7a3d6b44d60d73a9f8443a06a692c5302061743c5e17c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 20:34:46.348999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168102345.mount: Deactivated successfully. Jan 13 20:34:46.349526 containerd[1550]: time="2025-01-13T20:34:46.349373940Z" level=info msg="CreateContainer within sandbox \"0067bb8a69c40cad60f7a3d6b44d60d73a9f8443a06a692c5302061743c5e17c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e77c1ebff961a6e00381a96332059cd6fbe5e70e87efbb522bce465d5813d597\"" Jan 13 20:34:46.359771 containerd[1550]: time="2025-01-13T20:34:46.359751435Z" level=info msg="StartContainer for \"e77c1ebff961a6e00381a96332059cd6fbe5e70e87efbb522bce465d5813d597\"" Jan 13 20:34:46.388845 systemd[1]: Started cri-containerd-e77c1ebff961a6e00381a96332059cd6fbe5e70e87efbb522bce465d5813d597.scope - libcontainer container e77c1ebff961a6e00381a96332059cd6fbe5e70e87efbb522bce465d5813d597. 
Jan 13 20:34:46.420207 containerd[1550]: time="2025-01-13T20:34:46.419999014Z" level=info msg="StartContainer for \"e77c1ebff961a6e00381a96332059cd6fbe5e70e87efbb522bce465d5813d597\" returns successfully" Jan 13 20:34:47.307689 kubelet[2793]: E0113 20:34:47.307637 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.307689 kubelet[2793]: W0113 20:34:47.307651 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.307689 kubelet[2793]: E0113 20:34:47.307662 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.308792 kubelet[2793]: E0113 20:34:47.308043 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.308792 kubelet[2793]: W0113 20:34:47.308050 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.308792 kubelet[2793]: E0113 20:34:47.308056 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.308792 kubelet[2793]: E0113 20:34:47.308382 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.308792 kubelet[2793]: W0113 20:34:47.308388 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.308792 kubelet[2793]: E0113 20:34:47.308395 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.308792 kubelet[2793]: E0113 20:34:47.308577 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.308792 kubelet[2793]: W0113 20:34:47.308583 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.308792 kubelet[2793]: E0113 20:34:47.308588 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.309374 kubelet[2793]: E0113 20:34:47.309106 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.309374 kubelet[2793]: W0113 20:34:47.309113 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.309374 kubelet[2793]: E0113 20:34:47.309120 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.309374 kubelet[2793]: E0113 20:34:47.309258 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.309374 kubelet[2793]: W0113 20:34:47.309265 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.309374 kubelet[2793]: E0113 20:34:47.309273 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.310068 kubelet[2793]: E0113 20:34:47.309577 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.310068 kubelet[2793]: W0113 20:34:47.309584 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.310068 kubelet[2793]: E0113 20:34:47.309592 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.310855 kubelet[2793]: E0113 20:34:47.310555 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.310855 kubelet[2793]: W0113 20:34:47.310564 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.310855 kubelet[2793]: E0113 20:34:47.310572 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.310855 kubelet[2793]: E0113 20:34:47.310684 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.310855 kubelet[2793]: W0113 20:34:47.310689 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.310855 kubelet[2793]: E0113 20:34:47.310694 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.311681 kubelet[2793]: E0113 20:34:47.311481 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.311681 kubelet[2793]: W0113 20:34:47.311491 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.311681 kubelet[2793]: E0113 20:34:47.311502 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.311681 kubelet[2793]: E0113 20:34:47.311629 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.311681 kubelet[2793]: W0113 20:34:47.311635 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.311681 kubelet[2793]: E0113 20:34:47.311640 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.312036 kubelet[2793]: E0113 20:34:47.311970 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.312036 kubelet[2793]: W0113 20:34:47.311977 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.312036 kubelet[2793]: E0113 20:34:47.311984 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.312256 kubelet[2793]: E0113 20:34:47.312182 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.312256 kubelet[2793]: W0113 20:34:47.312187 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.312256 kubelet[2793]: E0113 20:34:47.312194 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.312664 kubelet[2793]: E0113 20:34:47.312532 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.312664 kubelet[2793]: W0113 20:34:47.312538 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.312664 kubelet[2793]: E0113 20:34:47.312545 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.312963 kubelet[2793]: E0113 20:34:47.312836 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.312963 kubelet[2793]: W0113 20:34:47.312842 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.312963 kubelet[2793]: E0113 20:34:47.312848 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.313168 kubelet[2793]: I0113 20:34:47.313137 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59dffbc888-l62qr" podStartSLOduration=1.5367724950000001 podStartE2EDuration="3.313129765s" podCreationTimestamp="2025-01-13 20:34:44 +0000 UTC" firstStartedPulling="2025-01-13 20:34:44.556640044 +0000 UTC m=+12.440039886" lastFinishedPulling="2025-01-13 20:34:46.332997314 +0000 UTC m=+14.216397156" observedRunningTime="2025-01-13 20:34:47.31031018 +0000 UTC m=+15.193710034" watchObservedRunningTime="2025-01-13 20:34:47.313129765 +0000 UTC m=+15.196529610" Jan 13 20:34:47.319723 kubelet[2793]: E0113 20:34:47.319672 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.319723 kubelet[2793]: W0113 20:34:47.319682 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.319723 kubelet[2793]: E0113 20:34:47.319692 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.320013 kubelet[2793]: E0113 20:34:47.319917 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.320013 kubelet[2793]: W0113 20:34:47.319924 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.320013 kubelet[2793]: E0113 20:34:47.319931 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.320088 kubelet[2793]: E0113 20:34:47.320070 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.320088 kubelet[2793]: W0113 20:34:47.320075 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.320088 kubelet[2793]: E0113 20:34:47.320081 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.320202 kubelet[2793]: E0113 20:34:47.320192 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.320202 kubelet[2793]: W0113 20:34:47.320201 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.320247 kubelet[2793]: E0113 20:34:47.320209 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.320303 kubelet[2793]: E0113 20:34:47.320293 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.320331 kubelet[2793]: W0113 20:34:47.320311 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.320331 kubelet[2793]: E0113 20:34:47.320319 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.320408 kubelet[2793]: E0113 20:34:47.320397 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.320408 kubelet[2793]: W0113 20:34:47.320405 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.320486 kubelet[2793]: E0113 20:34:47.320412 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.320511 kubelet[2793]: E0113 20:34:47.320505 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.320511 kubelet[2793]: W0113 20:34:47.320510 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.320545 kubelet[2793]: E0113 20:34:47.320521 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.320704 kubelet[2793]: E0113 20:34:47.320692 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.320704 kubelet[2793]: W0113 20:34:47.320701 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.320750 kubelet[2793]: E0113 20:34:47.320708 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.320822 kubelet[2793]: E0113 20:34:47.320811 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.320822 kubelet[2793]: W0113 20:34:47.320818 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.320869 kubelet[2793]: E0113 20:34:47.320823 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.320909 kubelet[2793]: E0113 20:34:47.320900 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.320909 kubelet[2793]: W0113 20:34:47.320907 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.320953 kubelet[2793]: E0113 20:34:47.320917 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.321031 kubelet[2793]: E0113 20:34:47.321023 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.321031 kubelet[2793]: W0113 20:34:47.321029 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.321065 kubelet[2793]: E0113 20:34:47.321040 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.321194 kubelet[2793]: E0113 20:34:47.321184 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.321194 kubelet[2793]: W0113 20:34:47.321192 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.321239 kubelet[2793]: E0113 20:34:47.321201 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.321297 kubelet[2793]: E0113 20:34:47.321286 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.321297 kubelet[2793]: W0113 20:34:47.321294 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.321345 kubelet[2793]: E0113 20:34:47.321299 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.321378 kubelet[2793]: E0113 20:34:47.321370 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.321378 kubelet[2793]: W0113 20:34:47.321376 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.321441 kubelet[2793]: E0113 20:34:47.321386 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.321497 kubelet[2793]: E0113 20:34:47.321489 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.321497 kubelet[2793]: W0113 20:34:47.321496 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.321535 kubelet[2793]: E0113 20:34:47.321507 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.321751 kubelet[2793]: E0113 20:34:47.321740 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.321751 kubelet[2793]: W0113 20:34:47.321748 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.321805 kubelet[2793]: E0113 20:34:47.321755 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.321887 kubelet[2793]: E0113 20:34:47.321877 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.321887 kubelet[2793]: W0113 20:34:47.321884 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.321931 kubelet[2793]: E0113 20:34:47.321892 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.321984 kubelet[2793]: E0113 20:34:47.321976 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.321984 kubelet[2793]: W0113 20:34:47.321983 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.322022 kubelet[2793]: E0113 20:34:47.321994 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.811365 containerd[1550]: time="2025-01-13T20:34:47.810893273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:47.811365 containerd[1550]: time="2025-01-13T20:34:47.811284134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 13 20:34:47.811365 containerd[1550]: time="2025-01-13T20:34:47.811343143Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:47.812529 containerd[1550]: time="2025-01-13T20:34:47.812459086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:47.812880 containerd[1550]: time="2025-01-13T20:34:47.812864153Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.479509643s" Jan 13 20:34:47.812910 containerd[1550]: time="2025-01-13T20:34:47.812880827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 20:34:47.814658 containerd[1550]: time="2025-01-13T20:34:47.814643738Z" level=info msg="CreateContainer within sandbox \"8d5b75b11619e951fee45353b5533f356a7e5db64338aa368d470ba165c40682\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:34:47.825683 containerd[1550]: time="2025-01-13T20:34:47.825653956Z" level=info msg="CreateContainer within sandbox \"8d5b75b11619e951fee45353b5533f356a7e5db64338aa368d470ba165c40682\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"161a3965ff55786e82c96f6153ac113131dd87f19683cdb61edadf337e807230\"" Jan 13 20:34:47.826228 containerd[1550]: time="2025-01-13T20:34:47.825996665Z" level=info msg="StartContainer for \"161a3965ff55786e82c96f6153ac113131dd87f19683cdb61edadf337e807230\"" Jan 13 20:34:47.855849 systemd[1]: Started cri-containerd-161a3965ff55786e82c96f6153ac113131dd87f19683cdb61edadf337e807230.scope - libcontainer container 161a3965ff55786e82c96f6153ac113131dd87f19683cdb61edadf337e807230. Jan 13 20:34:47.872045 containerd[1550]: time="2025-01-13T20:34:47.871970092Z" level=info msg="StartContainer for \"161a3965ff55786e82c96f6153ac113131dd87f19683cdb61edadf337e807230\" returns successfully" Jan 13 20:34:47.880435 systemd[1]: cri-containerd-161a3965ff55786e82c96f6153ac113131dd87f19683cdb61edadf337e807230.scope: Deactivated successfully. 
Jan 13 20:34:48.025972 containerd[1550]: time="2025-01-13T20:34:48.010056297Z" level=info msg="shim disconnected" id=161a3965ff55786e82c96f6153ac113131dd87f19683cdb61edadf337e807230 namespace=k8s.io Jan 13 20:34:48.025972 containerd[1550]: time="2025-01-13T20:34:48.025902170Z" level=warning msg="cleaning up after shim disconnected" id=161a3965ff55786e82c96f6153ac113131dd87f19683cdb61edadf337e807230 namespace=k8s.io Jan 13 20:34:48.025972 containerd[1550]: time="2025-01-13T20:34:48.025908998Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:34:48.219903 kubelet[2793]: E0113 20:34:48.219640 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9s7kx" podUID="33616460-2f9b-481c-b406-8a3838ed8c9e" Jan 13 20:34:48.317236 kubelet[2793]: I0113 20:34:48.317178 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:34:48.318447 containerd[1550]: time="2025-01-13T20:34:48.318424790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 20:34:48.337677 systemd[1]: run-containerd-runc-k8s.io-161a3965ff55786e82c96f6153ac113131dd87f19683cdb61edadf337e807230-runc.oSWLsL.mount: Deactivated successfully. Jan 13 20:34:48.337736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-161a3965ff55786e82c96f6153ac113131dd87f19683cdb61edadf337e807230-rootfs.mount: Deactivated successfully. 
Jan 13 20:34:50.219831 kubelet[2793]: I0113 20:34:50.219592 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:34:50.227874 kubelet[2793]: E0113 20:34:50.221563 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9s7kx" podUID="33616460-2f9b-481c-b406-8a3838ed8c9e" Jan 13 20:34:51.440796 containerd[1550]: time="2025-01-13T20:34:51.440742172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 20:34:51.441352 containerd[1550]: time="2025-01-13T20:34:51.441328194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:51.444448 containerd[1550]: time="2025-01-13T20:34:51.444265290Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:51.445136 containerd[1550]: time="2025-01-13T20:34:51.444676345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.126225822s" Jan 13 20:34:51.445136 containerd[1550]: time="2025-01-13T20:34:51.444705588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 20:34:51.445206 containerd[1550]: time="2025-01-13T20:34:51.445134686Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:51.465638 containerd[1550]: time="2025-01-13T20:34:51.465376143Z" level=info msg="CreateContainer within sandbox \"8d5b75b11619e951fee45353b5533f356a7e5db64338aa368d470ba165c40682\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:34:51.489257 containerd[1550]: time="2025-01-13T20:34:51.489223245Z" level=info msg="CreateContainer within sandbox \"8d5b75b11619e951fee45353b5533f356a7e5db64338aa368d470ba165c40682\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9be7d4e091ede863736ef11b6497a9b9fc5ef662b3bc9138dc52462031d3fb69\"" Jan 13 20:34:51.490224 containerd[1550]: time="2025-01-13T20:34:51.490188773Z" level=info msg="StartContainer for \"9be7d4e091ede863736ef11b6497a9b9fc5ef662b3bc9138dc52462031d3fb69\"" Jan 13 20:34:51.576870 systemd[1]: Started cri-containerd-9be7d4e091ede863736ef11b6497a9b9fc5ef662b3bc9138dc52462031d3fb69.scope - libcontainer container 9be7d4e091ede863736ef11b6497a9b9fc5ef662b3bc9138dc52462031d3fb69. Jan 13 20:34:51.604356 containerd[1550]: time="2025-01-13T20:34:51.604291562Z" level=info msg="StartContainer for \"9be7d4e091ede863736ef11b6497a9b9fc5ef662b3bc9138dc52462031d3fb69\" returns successfully" Jan 13 20:34:52.220126 kubelet[2793]: E0113 20:34:52.220081 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9s7kx" podUID="33616460-2f9b-481c-b406-8a3838ed8c9e" Jan 13 20:34:52.937497 systemd[1]: cri-containerd-9be7d4e091ede863736ef11b6497a9b9fc5ef662b3bc9138dc52462031d3fb69.scope: Deactivated successfully. 
Jan 13 20:34:52.956646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9be7d4e091ede863736ef11b6497a9b9fc5ef662b3bc9138dc52462031d3fb69-rootfs.mount: Deactivated successfully. Jan 13 20:34:52.959854 containerd[1550]: time="2025-01-13T20:34:52.959819205Z" level=info msg="shim disconnected" id=9be7d4e091ede863736ef11b6497a9b9fc5ef662b3bc9138dc52462031d3fb69 namespace=k8s.io Jan 13 20:34:52.961334 containerd[1550]: time="2025-01-13T20:34:52.959878145Z" level=warning msg="cleaning up after shim disconnected" id=9be7d4e091ede863736ef11b6497a9b9fc5ef662b3bc9138dc52462031d3fb69 namespace=k8s.io Jan 13 20:34:52.961334 containerd[1550]: time="2025-01-13T20:34:52.959884851Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:34:52.972032 kubelet[2793]: I0113 20:34:52.972017 2793 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:34:53.040096 systemd[1]: Created slice kubepods-burstable-pod3c427557_4954_4485_9e6f_15dda2108a10.slice - libcontainer container kubepods-burstable-pod3c427557_4954_4485_9e6f_15dda2108a10.slice. 
Jan 13 20:34:53.056587 kubelet[2793]: I0113 20:34:53.056427 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8e472de-0de2-4914-879e-86eb6e7cf184-tigera-ca-bundle\") pod \"calico-kube-controllers-974644f4b-zs7h8\" (UID: \"e8e472de-0de2-4914-879e-86eb6e7cf184\") " pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" Jan 13 20:34:53.056587 kubelet[2793]: I0113 20:34:53.056448 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gbwl\" (UniqueName: \"kubernetes.io/projected/bf5a440e-479f-4a51-bb48-dec4cff63ae8-kube-api-access-4gbwl\") pod \"calico-apiserver-6b6bf69758-whhr8\" (UID: \"bf5a440e-479f-4a51-bb48-dec4cff63ae8\") " pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" Jan 13 20:34:53.056587 kubelet[2793]: I0113 20:34:53.056458 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c427557-4954-4485-9e6f-15dda2108a10-config-volume\") pod \"coredns-6f6b679f8f-wzl8j\" (UID: \"3c427557-4954-4485-9e6f-15dda2108a10\") " pod="kube-system/coredns-6f6b679f8f-wzl8j" Jan 13 20:34:53.056587 kubelet[2793]: I0113 20:34:53.056469 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bf5a440e-479f-4a51-bb48-dec4cff63ae8-calico-apiserver-certs\") pod \"calico-apiserver-6b6bf69758-whhr8\" (UID: \"bf5a440e-479f-4a51-bb48-dec4cff63ae8\") " pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" Jan 13 20:34:53.056587 kubelet[2793]: I0113 20:34:53.056479 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9e9899a-a387-4873-a44f-5621625ff114-config-volume\") pod 
\"coredns-6f6b679f8f-9m4k5\" (UID: \"b9e9899a-a387-4873-a44f-5621625ff114\") " pod="kube-system/coredns-6f6b679f8f-9m4k5" Jan 13 20:34:53.056819 kubelet[2793]: I0113 20:34:53.056489 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c2088615-f28b-4452-a74d-fc3302061b14-calico-apiserver-certs\") pod \"calico-apiserver-6b6bf69758-q2k5c\" (UID: \"c2088615-f28b-4452-a74d-fc3302061b14\") " pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" Jan 13 20:34:53.056819 kubelet[2793]: I0113 20:34:53.056499 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l87v\" (UniqueName: \"kubernetes.io/projected/3c427557-4954-4485-9e6f-15dda2108a10-kube-api-access-2l87v\") pod \"coredns-6f6b679f8f-wzl8j\" (UID: \"3c427557-4954-4485-9e6f-15dda2108a10\") " pod="kube-system/coredns-6f6b679f8f-wzl8j" Jan 13 20:34:53.056819 kubelet[2793]: I0113 20:34:53.056508 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4zcd\" (UniqueName: \"kubernetes.io/projected/e8e472de-0de2-4914-879e-86eb6e7cf184-kube-api-access-q4zcd\") pod \"calico-kube-controllers-974644f4b-zs7h8\" (UID: \"e8e472de-0de2-4914-879e-86eb6e7cf184\") " pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" Jan 13 20:34:53.056819 kubelet[2793]: I0113 20:34:53.056517 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7x7d\" (UniqueName: \"kubernetes.io/projected/b9e9899a-a387-4873-a44f-5621625ff114-kube-api-access-r7x7d\") pod \"coredns-6f6b679f8f-9m4k5\" (UID: \"b9e9899a-a387-4873-a44f-5621625ff114\") " pod="kube-system/coredns-6f6b679f8f-9m4k5" Jan 13 20:34:53.056819 kubelet[2793]: I0113 20:34:53.056526 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rt99x\" (UniqueName: \"kubernetes.io/projected/c2088615-f28b-4452-a74d-fc3302061b14-kube-api-access-rt99x\") pod \"calico-apiserver-6b6bf69758-q2k5c\" (UID: \"c2088615-f28b-4452-a74d-fc3302061b14\") " pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" Jan 13 20:34:53.058910 systemd[1]: Created slice kubepods-burstable-podb9e9899a_a387_4873_a44f_5621625ff114.slice - libcontainer container kubepods-burstable-podb9e9899a_a387_4873_a44f_5621625ff114.slice. Jan 13 20:34:53.066217 systemd[1]: Created slice kubepods-besteffort-pode8e472de_0de2_4914_879e_86eb6e7cf184.slice - libcontainer container kubepods-besteffort-pode8e472de_0de2_4914_879e_86eb6e7cf184.slice. Jan 13 20:34:53.071826 systemd[1]: Created slice kubepods-besteffort-podc2088615_f28b_4452_a74d_fc3302061b14.slice - libcontainer container kubepods-besteffort-podc2088615_f28b_4452_a74d_fc3302061b14.slice. Jan 13 20:34:53.076402 systemd[1]: Created slice kubepods-besteffort-podbf5a440e_479f_4a51_bb48_dec4cff63ae8.slice - libcontainer container kubepods-besteffort-podbf5a440e_479f_4a51_bb48_dec4cff63ae8.slice. 
Jan 13 20:34:53.337054 containerd[1550]: time="2025-01-13T20:34:53.336979786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 20:34:53.351088 containerd[1550]: time="2025-01-13T20:34:53.350990633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzl8j,Uid:3c427557-4954-4485-9e6f-15dda2108a10,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:53.363805 containerd[1550]: time="2025-01-13T20:34:53.363564936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9m4k5,Uid:b9e9899a-a387-4873-a44f-5621625ff114,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:53.371465 containerd[1550]: time="2025-01-13T20:34:53.371435947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-974644f4b-zs7h8,Uid:e8e472de-0de2-4914-879e-86eb6e7cf184,Namespace:calico-system,Attempt:0,}" Jan 13 20:34:53.381254 containerd[1550]: time="2025-01-13T20:34:53.381224017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-whhr8,Uid:bf5a440e-479f-4a51-bb48-dec4cff63ae8,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:34:53.381455 containerd[1550]: time="2025-01-13T20:34:53.381435587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-q2k5c,Uid:c2088615-f28b-4452-a74d-fc3302061b14,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:34:53.678406 containerd[1550]: time="2025-01-13T20:34:53.678324617Z" level=error msg="Failed to destroy network for sandbox \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.684745 containerd[1550]: time="2025-01-13T20:34:53.684018593Z" level=error msg="encountered an error cleaning up failed sandbox \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.684745 containerd[1550]: time="2025-01-13T20:34:53.684086866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-whhr8,Uid:bf5a440e-479f-4a51-bb48-dec4cff63ae8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.684903 kubelet[2793]: E0113 20:34:53.684262 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.684903 kubelet[2793]: E0113 20:34:53.684312 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" Jan 13 20:34:53.684903 kubelet[2793]: E0113 20:34:53.684325 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" Jan 13 20:34:53.685150 kubelet[2793]: E0113 20:34:53.684357 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6bf69758-whhr8_calico-apiserver(bf5a440e-479f-4a51-bb48-dec4cff63ae8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b6bf69758-whhr8_calico-apiserver(bf5a440e-479f-4a51-bb48-dec4cff63ae8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" podUID="bf5a440e-479f-4a51-bb48-dec4cff63ae8" Jan 13 20:34:53.690882 containerd[1550]: time="2025-01-13T20:34:53.687455597Z" level=error msg="Failed to destroy network for sandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.690882 containerd[1550]: time="2025-01-13T20:34:53.688483979Z" level=error msg="encountered an error cleaning up failed sandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.690882 containerd[1550]: time="2025-01-13T20:34:53.688521605Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzl8j,Uid:3c427557-4954-4485-9e6f-15dda2108a10,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.692562 kubelet[2793]: E0113 20:34:53.692236 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.692562 kubelet[2793]: E0113 20:34:53.692277 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wzl8j" Jan 13 20:34:53.692562 kubelet[2793]: E0113 20:34:53.692291 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wzl8j" Jan 13 20:34:53.692703 kubelet[2793]: E0113 20:34:53.692335 2793 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wzl8j_kube-system(3c427557-4954-4485-9e6f-15dda2108a10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wzl8j_kube-system(3c427557-4954-4485-9e6f-15dda2108a10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wzl8j" podUID="3c427557-4954-4485-9e6f-15dda2108a10" Jan 13 20:34:53.693735 containerd[1550]: time="2025-01-13T20:34:53.693309628Z" level=error msg="Failed to destroy network for sandbox \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.693735 containerd[1550]: time="2025-01-13T20:34:53.693504179Z" level=error msg="encountered an error cleaning up failed sandbox \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.693735 containerd[1550]: time="2025-01-13T20:34:53.693536303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-q2k5c,Uid:c2088615-f28b-4452-a74d-fc3302061b14,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.693853 kubelet[2793]: E0113 20:34:53.693640 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.693853 kubelet[2793]: E0113 20:34:53.693666 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" Jan 13 20:34:53.693853 kubelet[2793]: E0113 20:34:53.693681 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" Jan 13 20:34:53.693915 kubelet[2793]: E0113 20:34:53.693714 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6bf69758-q2k5c_calico-apiserver(c2088615-f28b-4452-a74d-fc3302061b14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b6bf69758-q2k5c_calico-apiserver(c2088615-f28b-4452-a74d-fc3302061b14)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" podUID="c2088615-f28b-4452-a74d-fc3302061b14" Jan 13 20:34:53.697985 containerd[1550]: time="2025-01-13T20:34:53.697689890Z" level=error msg="Failed to destroy network for sandbox \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.698923 containerd[1550]: time="2025-01-13T20:34:53.698841097Z" level=error msg="encountered an error cleaning up failed sandbox \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.698923 containerd[1550]: time="2025-01-13T20:34:53.698889314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-974644f4b-zs7h8,Uid:e8e472de-0de2-4914-879e-86eb6e7cf184,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.699455 kubelet[2793]: E0113 20:34:53.699258 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.699455 kubelet[2793]: E0113 20:34:53.699299 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" Jan 13 20:34:53.699455 kubelet[2793]: E0113 20:34:53.699313 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" Jan 13 20:34:53.699611 kubelet[2793]: E0113 20:34:53.699343 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-974644f4b-zs7h8_calico-system(e8e472de-0de2-4914-879e-86eb6e7cf184)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-974644f4b-zs7h8_calico-system(e8e472de-0de2-4914-879e-86eb6e7cf184)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" podUID="e8e472de-0de2-4914-879e-86eb6e7cf184" Jan 13 20:34:53.700349 containerd[1550]: time="2025-01-13T20:34:53.700327050Z" level=error msg="Failed to destroy network for sandbox \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.700556 containerd[1550]: time="2025-01-13T20:34:53.700517191Z" level=error msg="encountered an error cleaning up failed sandbox \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.700556 containerd[1550]: time="2025-01-13T20:34:53.700549859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9m4k5,Uid:b9e9899a-a387-4873-a44f-5621625ff114,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.703219 kubelet[2793]: E0113 20:34:53.700623 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:53.703219 kubelet[2793]: E0113 
20:34:53.700641 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9m4k5" Jan 13 20:34:53.703219 kubelet[2793]: E0113 20:34:53.700650 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9m4k5" Jan 13 20:34:53.709799 kubelet[2793]: E0113 20:34:53.700668 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-9m4k5_kube-system(b9e9899a-a387-4873-a44f-5621625ff114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-9m4k5_kube-system(b9e9899a-a387-4873-a44f-5621625ff114)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9m4k5" podUID="b9e9899a-a387-4873-a44f-5621625ff114" Jan 13 20:34:54.222956 systemd[1]: Created slice kubepods-besteffort-pod33616460_2f9b_481c_b406_8a3838ed8c9e.slice - libcontainer container kubepods-besteffort-pod33616460_2f9b_481c_b406_8a3838ed8c9e.slice. 
Jan 13 20:34:54.224381 containerd[1550]: time="2025-01-13T20:34:54.224353937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9s7kx,Uid:33616460-2f9b-481c-b406-8a3838ed8c9e,Namespace:calico-system,Attempt:0,}" Jan 13 20:34:54.260783 containerd[1550]: time="2025-01-13T20:34:54.260687372Z" level=error msg="Failed to destroy network for sandbox \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.261098 containerd[1550]: time="2025-01-13T20:34:54.261008799Z" level=error msg="encountered an error cleaning up failed sandbox \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.261098 containerd[1550]: time="2025-01-13T20:34:54.261042722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9s7kx,Uid:33616460-2f9b-481c-b406-8a3838ed8c9e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.261453 kubelet[2793]: E0113 20:34:54.261227 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.261453 kubelet[2793]: E0113 20:34:54.261264 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9s7kx" Jan 13 20:34:54.261453 kubelet[2793]: E0113 20:34:54.261285 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9s7kx" Jan 13 20:34:54.261528 kubelet[2793]: E0113 20:34:54.261310 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9s7kx_calico-system(33616460-2f9b-481c-b406-8a3838ed8c9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9s7kx_calico-system(33616460-2f9b-481c-b406-8a3838ed8c9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9s7kx" podUID="33616460-2f9b-481c-b406-8a3838ed8c9e" Jan 13 20:34:54.263163 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f-shm.mount: Deactivated successfully. Jan 13 20:34:54.338254 kubelet[2793]: I0113 20:34:54.338226 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90" Jan 13 20:34:54.339019 containerd[1550]: time="2025-01-13T20:34:54.338848247Z" level=info msg="StopPodSandbox for \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\"" Jan 13 20:34:54.339077 kubelet[2793]: I0113 20:34:54.338863 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465" Jan 13 20:34:54.339376 containerd[1550]: time="2025-01-13T20:34:54.339363388Z" level=info msg="StopPodSandbox for \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\"" Jan 13 20:34:54.349997 containerd[1550]: time="2025-01-13T20:34:54.349214485Z" level=info msg="Ensure that sandbox 0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90 in task-service has been cleanup successfully" Jan 13 20:34:54.349997 containerd[1550]: time="2025-01-13T20:34:54.349351948Z" level=info msg="TearDown network for sandbox \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\" successfully" Jan 13 20:34:54.349997 containerd[1550]: time="2025-01-13T20:34:54.349361215Z" level=info msg="StopPodSandbox for \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\" returns successfully" Jan 13 20:34:54.349997 containerd[1550]: time="2025-01-13T20:34:54.349740147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9m4k5,Uid:b9e9899a-a387-4873-a44f-5621625ff114,Namespace:kube-system,Attempt:1,}" Jan 13 20:34:54.349997 containerd[1550]: time="2025-01-13T20:34:54.349749825Z" level=info msg="Ensure that sandbox 
4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465 in task-service has been cleanup successfully" Jan 13 20:34:54.350604 kubelet[2793]: I0113 20:34:54.350412 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f" Jan 13 20:34:54.350857 containerd[1550]: time="2025-01-13T20:34:54.350698166Z" level=info msg="StopPodSandbox for \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\"" Jan 13 20:34:54.350857 containerd[1550]: time="2025-01-13T20:34:54.350800937Z" level=info msg="Ensure that sandbox ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f in task-service has been cleanup successfully" Jan 13 20:34:54.351086 systemd[1]: run-netns-cni\x2d6b8f3392\x2d185b\x2d0c95\x2d0aeb\x2dcf5482510663.mount: Deactivated successfully. Jan 13 20:34:54.351149 systemd[1]: run-netns-cni\x2d5f658446\x2de993\x2de59e\x2da3da\x2de5e6556cc4f9.mount: Deactivated successfully. 
Jan 13 20:34:54.352460 containerd[1550]: time="2025-01-13T20:34:54.352384312Z" level=info msg="TearDown network for sandbox \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\" successfully" Jan 13 20:34:54.352460 containerd[1550]: time="2025-01-13T20:34:54.352396632Z" level=info msg="StopPodSandbox for \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\" returns successfully" Jan 13 20:34:54.353082 containerd[1550]: time="2025-01-13T20:34:54.352726715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9s7kx,Uid:33616460-2f9b-481c-b406-8a3838ed8c9e,Namespace:calico-system,Attempt:1,}" Jan 13 20:34:54.353082 containerd[1550]: time="2025-01-13T20:34:54.353027248Z" level=info msg="TearDown network for sandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" successfully" Jan 13 20:34:54.353082 containerd[1550]: time="2025-01-13T20:34:54.353036328Z" level=info msg="StopPodSandbox for \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" returns successfully" Jan 13 20:34:54.353296 kubelet[2793]: I0113 20:34:54.353281 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3" Jan 13 20:34:54.353507 containerd[1550]: time="2025-01-13T20:34:54.353488046Z" level=info msg="StopPodSandbox for \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\"" Jan 13 20:34:54.353699 containerd[1550]: time="2025-01-13T20:34:54.353567094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzl8j,Uid:3c427557-4954-4485-9e6f-15dda2108a10,Namespace:kube-system,Attempt:1,}" Jan 13 20:34:54.353699 containerd[1550]: time="2025-01-13T20:34:54.353583119Z" level=info msg="Ensure that sandbox 676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3 in task-service has been cleanup successfully" Jan 13 20:34:54.354977 containerd[1550]: 
time="2025-01-13T20:34:54.354823781Z" level=info msg="TearDown network for sandbox \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\" successfully" Jan 13 20:34:54.354977 containerd[1550]: time="2025-01-13T20:34:54.354846113Z" level=info msg="StopPodSandbox for \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\" returns successfully" Jan 13 20:34:54.355232 containerd[1550]: time="2025-01-13T20:34:54.355199530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-q2k5c,Uid:c2088615-f28b-4452-a74d-fc3302061b14,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:34:54.355639 kubelet[2793]: I0113 20:34:54.355625 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813" Jan 13 20:34:54.356096 containerd[1550]: time="2025-01-13T20:34:54.355932778Z" level=info msg="StopPodSandbox for \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\"" Jan 13 20:34:54.356096 containerd[1550]: time="2025-01-13T20:34:54.356029585Z" level=info msg="Ensure that sandbox 3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813 in task-service has been cleanup successfully" Jan 13 20:34:54.356282 containerd[1550]: time="2025-01-13T20:34:54.356273398Z" level=info msg="TearDown network for sandbox \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\" successfully" Jan 13 20:34:54.356343 containerd[1550]: time="2025-01-13T20:34:54.356305956Z" level=info msg="StopPodSandbox for \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\" returns successfully" Jan 13 20:34:54.356510 kubelet[2793]: I0113 20:34:54.356498 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3" Jan 13 20:34:54.356681 containerd[1550]: time="2025-01-13T20:34:54.356671152Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-whhr8,Uid:bf5a440e-479f-4a51-bb48-dec4cff63ae8,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:34:54.356943 containerd[1550]: time="2025-01-13T20:34:54.356933597Z" level=info msg="StopPodSandbox for \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\"" Jan 13 20:34:54.357324 containerd[1550]: time="2025-01-13T20:34:54.357138014Z" level=info msg="Ensure that sandbox bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3 in task-service has been cleanup successfully" Jan 13 20:34:54.357555 containerd[1550]: time="2025-01-13T20:34:54.357448697Z" level=info msg="TearDown network for sandbox \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\" successfully" Jan 13 20:34:54.357656 containerd[1550]: time="2025-01-13T20:34:54.357604042Z" level=info msg="StopPodSandbox for \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\" returns successfully" Jan 13 20:34:54.357998 containerd[1550]: time="2025-01-13T20:34:54.357980850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-974644f4b-zs7h8,Uid:e8e472de-0de2-4914-879e-86eb6e7cf184,Namespace:calico-system,Attempt:1,}" Jan 13 20:34:54.525692 containerd[1550]: time="2025-01-13T20:34:54.523831442Z" level=error msg="Failed to destroy network for sandbox \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.525692 containerd[1550]: time="2025-01-13T20:34:54.524322892Z" level=error msg="encountered an error cleaning up failed sandbox \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.525692 containerd[1550]: time="2025-01-13T20:34:54.524370918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9m4k5,Uid:b9e9899a-a387-4873-a44f-5621625ff114,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.525842 kubelet[2793]: E0113 20:34:54.524992 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.525842 kubelet[2793]: E0113 20:34:54.525027 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9m4k5" Jan 13 20:34:54.525842 kubelet[2793]: E0113 20:34:54.525042 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-9m4k5" Jan 13 20:34:54.525996 kubelet[2793]: E0113 20:34:54.525077 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-9m4k5_kube-system(b9e9899a-a387-4873-a44f-5621625ff114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-9m4k5_kube-system(b9e9899a-a387-4873-a44f-5621625ff114)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9m4k5" podUID="b9e9899a-a387-4873-a44f-5621625ff114" Jan 13 20:34:54.538565 containerd[1550]: time="2025-01-13T20:34:54.537888347Z" level=error msg="Failed to destroy network for sandbox \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.538565 containerd[1550]: time="2025-01-13T20:34:54.538113975Z" level=error msg="encountered an error cleaning up failed sandbox \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.538565 containerd[1550]: time="2025-01-13T20:34:54.538147474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzl8j,Uid:3c427557-4954-4485-9e6f-15dda2108a10,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.539053 kubelet[2793]: E0113 20:34:54.538304 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.539053 kubelet[2793]: E0113 20:34:54.538360 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wzl8j" Jan 13 20:34:54.539053 kubelet[2793]: E0113 20:34:54.538374 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wzl8j" Jan 13 20:34:54.539155 kubelet[2793]: E0113 20:34:54.538402 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wzl8j_kube-system(3c427557-4954-4485-9e6f-15dda2108a10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-6f6b679f8f-wzl8j_kube-system(3c427557-4954-4485-9e6f-15dda2108a10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wzl8j" podUID="3c427557-4954-4485-9e6f-15dda2108a10" Jan 13 20:34:54.552580 containerd[1550]: time="2025-01-13T20:34:54.552435444Z" level=error msg="Failed to destroy network for sandbox \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.552662 containerd[1550]: time="2025-01-13T20:34:54.552643208Z" level=error msg="encountered an error cleaning up failed sandbox \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.552716 containerd[1550]: time="2025-01-13T20:34:54.552686890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9s7kx,Uid:33616460-2f9b-481c-b406-8a3838ed8c9e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.553016 kubelet[2793]: E0113 20:34:54.552808 2793 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.553016 kubelet[2793]: E0113 20:34:54.552843 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9s7kx" Jan 13 20:34:54.553016 kubelet[2793]: E0113 20:34:54.552867 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9s7kx" Jan 13 20:34:54.553119 kubelet[2793]: E0113 20:34:54.552900 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9s7kx_calico-system(33616460-2f9b-481c-b406-8a3838ed8c9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9s7kx_calico-system(33616460-2f9b-481c-b406-8a3838ed8c9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-9s7kx" podUID="33616460-2f9b-481c-b406-8a3838ed8c9e" Jan 13 20:34:54.554569 containerd[1550]: time="2025-01-13T20:34:54.554550668Z" level=error msg="Failed to destroy network for sandbox \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.554753 containerd[1550]: time="2025-01-13T20:34:54.554738387Z" level=error msg="encountered an error cleaning up failed sandbox \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.554839 containerd[1550]: time="2025-01-13T20:34:54.554789796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-q2k5c,Uid:c2088615-f28b-4452-a74d-fc3302061b14,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.554895 kubelet[2793]: E0113 20:34:54.554864 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.554895 kubelet[2793]: E0113 
20:34:54.554885 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" Jan 13 20:34:54.558816 kubelet[2793]: E0113 20:34:54.554897 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" Jan 13 20:34:54.558816 kubelet[2793]: E0113 20:34:54.554922 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6bf69758-q2k5c_calico-apiserver(c2088615-f28b-4452-a74d-fc3302061b14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b6bf69758-q2k5c_calico-apiserver(c2088615-f28b-4452-a74d-fc3302061b14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" podUID="c2088615-f28b-4452-a74d-fc3302061b14" Jan 13 20:34:54.558816 kubelet[2793]: E0113 20:34:54.555754 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.558923 containerd[1550]: time="2025-01-13T20:34:54.555481504Z" level=error msg="Failed to destroy network for sandbox \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.558923 containerd[1550]: time="2025-01-13T20:34:54.555649145Z" level=error msg="encountered an error cleaning up failed sandbox \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.558923 containerd[1550]: time="2025-01-13T20:34:54.555672901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-974644f4b-zs7h8,Uid:e8e472de-0de2-4914-879e-86eb6e7cf184,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.559021 kubelet[2793]: E0113 20:34:54.555781 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" Jan 13 20:34:54.559021 kubelet[2793]: E0113 20:34:54.555795 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" Jan 13 20:34:54.559021 kubelet[2793]: E0113 20:34:54.555814 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-974644f4b-zs7h8_calico-system(e8e472de-0de2-4914-879e-86eb6e7cf184)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-974644f4b-zs7h8_calico-system(e8e472de-0de2-4914-879e-86eb6e7cf184)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" podUID="e8e472de-0de2-4914-879e-86eb6e7cf184" Jan 13 20:34:54.561610 containerd[1550]: time="2025-01-13T20:34:54.561353433Z" level=error msg="Failed to destroy network for sandbox \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.561752 containerd[1550]: time="2025-01-13T20:34:54.561732640Z" level=error 
msg="encountered an error cleaning up failed sandbox \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.561791 containerd[1550]: time="2025-01-13T20:34:54.561776177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-whhr8,Uid:bf5a440e-479f-4a51-bb48-dec4cff63ae8,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.561919 kubelet[2793]: E0113 20:34:54.561895 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:54.562002 kubelet[2793]: E0113 20:34:54.561985 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" Jan 13 20:34:54.562034 kubelet[2793]: E0113 20:34:54.562004 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" Jan 13 20:34:54.562073 kubelet[2793]: E0113 20:34:54.562032 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6bf69758-whhr8_calico-apiserver(bf5a440e-479f-4a51-bb48-dec4cff63ae8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b6bf69758-whhr8_calico-apiserver(bf5a440e-479f-4a51-bb48-dec4cff63ae8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" podUID="bf5a440e-479f-4a51-bb48-dec4cff63ae8" Jan 13 20:34:54.958629 systemd[1]: run-netns-cni\x2de68ad27d\x2da063\x2d6cf3\x2d3cf6\x2d00c2bf879caf.mount: Deactivated successfully. Jan 13 20:34:54.958688 systemd[1]: run-netns-cni\x2d39f8f3ae\x2ddb3c\x2d33f9\x2d2ee8\x2d796d1dce8c6a.mount: Deactivated successfully. Jan 13 20:34:54.958724 systemd[1]: run-netns-cni\x2d0ec6a473\x2d1388\x2d2937\x2d232c\x2d8482bd5827bc.mount: Deactivated successfully. Jan 13 20:34:54.958755 systemd[1]: run-netns-cni\x2d86dfd43f\x2d9def\x2d8d5a\x2d5b44\x2d6da6a6994004.mount: Deactivated successfully. 
Jan 13 20:34:55.359999 kubelet[2793]: I0113 20:34:55.359092 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.359508922Z" level=info msg="StopPodSandbox for \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\"" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.359748515Z" level=info msg="Ensure that sandbox aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2 in task-service has been cleanup successfully" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.359915473Z" level=info msg="TearDown network for sandbox \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\" successfully" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.359939197Z" level=info msg="StopPodSandbox for \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\" returns successfully" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.362256446Z" level=info msg="StopPodSandbox for \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\"" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.363033817Z" level=info msg="TearDown network for sandbox \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\" successfully" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.363043955Z" level=info msg="StopPodSandbox for \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\" returns successfully" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.363532197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-whhr8,Uid:bf5a440e-479f-4a51-bb48-dec4cff63ae8,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.363736747Z" level=info msg="StopPodSandbox 
for \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\"" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.363842558Z" level=info msg="Ensure that sandbox 6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96 in task-service has been cleanup successfully" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.364018681Z" level=info msg="TearDown network for sandbox \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\" successfully" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.364026848Z" level=info msg="StopPodSandbox for \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\" returns successfully" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.365278407Z" level=info msg="StopPodSandbox for \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\"" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.365313058Z" level=info msg="TearDown network for sandbox \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\" successfully" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.365319040Z" level=info msg="StopPodSandbox for \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\" returns successfully" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.365490298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-974644f4b-zs7h8,Uid:e8e472de-0de2-4914-879e-86eb6e7cf184,Namespace:calico-system,Attempt:2,}" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.366027345Z" level=info msg="StopPodSandbox for \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\"" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.366121840Z" level=info msg="Ensure that sandbox 413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4 in task-service has been cleanup successfully" Jan 13 20:34:55.368535 
containerd[1550]: time="2025-01-13T20:34:55.366838519Z" level=info msg="TearDown network for sandbox \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\" successfully" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.366847018Z" level=info msg="StopPodSandbox for \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\" returns successfully" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.367658446Z" level=info msg="StopPodSandbox for \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\"" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.368106290Z" level=info msg="StopPodSandbox for \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\"" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.368147421Z" level=info msg="TearDown network for sandbox \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\" successfully" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.368154373Z" level=info msg="StopPodSandbox for \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\" returns successfully" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.368183934Z" level=info msg="Ensure that sandbox b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b in task-service has been cleanup successfully" Jan 13 20:34:55.368535 containerd[1550]: time="2025-01-13T20:34:55.368526095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9m4k5,Uid:b9e9899a-a387-4873-a44f-5621625ff114,Namespace:kube-system,Attempt:2,}" Jan 13 20:34:55.361885 systemd[1]: run-netns-cni\x2d3f7d3eff\x2d8289\x2d1f3f\x2deb14\x2da9a98ab120c9.mount: Deactivated successfully. 
Jan 13 20:34:55.371416 kubelet[2793]: I0113 20:34:55.363144 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96" Jan 13 20:34:55.371416 kubelet[2793]: I0113 20:34:55.365633 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4" Jan 13 20:34:55.371416 kubelet[2793]: I0113 20:34:55.367449 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b" Jan 13 20:34:55.371484 containerd[1550]: time="2025-01-13T20:34:55.369581472Z" level=info msg="TearDown network for sandbox \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\" successfully" Jan 13 20:34:55.371484 containerd[1550]: time="2025-01-13T20:34:55.369591660Z" level=info msg="StopPodSandbox for \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\" returns successfully" Jan 13 20:34:55.371484 containerd[1550]: time="2025-01-13T20:34:55.370995950Z" level=info msg="StopPodSandbox for \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\"" Jan 13 20:34:55.371484 containerd[1550]: time="2025-01-13T20:34:55.371043791Z" level=info msg="TearDown network for sandbox \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\" successfully" Jan 13 20:34:55.371484 containerd[1550]: time="2025-01-13T20:34:55.371050423Z" level=info msg="StopPodSandbox for \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\" returns successfully" Jan 13 20:34:55.364999 systemd[1]: run-netns-cni\x2d72afebf6\x2d4b52\x2dbfb6\x2de291\x2db2047fc1981b.mount: Deactivated successfully. Jan 13 20:34:55.369472 systemd[1]: run-netns-cni\x2db0dcd6fe\x2d8b9e\x2d3fdd\x2d97dd\x2d8bdf9902b9c6.mount: Deactivated successfully. 
Jan 13 20:34:55.373318 systemd[1]: run-netns-cni\x2d76b65e23\x2d16cf\x2d4971\x2d9b00\x2da5fec0f4ca47.mount: Deactivated successfully. Jan 13 20:34:55.374309 containerd[1550]: time="2025-01-13T20:34:55.373735751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9s7kx,Uid:33616460-2f9b-481c-b406-8a3838ed8c9e,Namespace:calico-system,Attempt:2,}" Jan 13 20:34:55.375825 kubelet[2793]: I0113 20:34:55.375808 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b" Jan 13 20:34:55.376118 containerd[1550]: time="2025-01-13T20:34:55.376091666Z" level=info msg="StopPodSandbox for \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\"" Jan 13 20:34:55.376557 containerd[1550]: time="2025-01-13T20:34:55.376417948Z" level=info msg="Ensure that sandbox 91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b in task-service has been cleanup successfully" Jan 13 20:34:55.376724 containerd[1550]: time="2025-01-13T20:34:55.376713848Z" level=info msg="TearDown network for sandbox \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\" successfully" Jan 13 20:34:55.376849 containerd[1550]: time="2025-01-13T20:34:55.376778115Z" level=info msg="StopPodSandbox for \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\" returns successfully" Jan 13 20:34:55.377962 containerd[1550]: time="2025-01-13T20:34:55.377951036Z" level=info msg="StopPodSandbox for \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\"" Jan 13 20:34:55.378065 containerd[1550]: time="2025-01-13T20:34:55.378056046Z" level=info msg="TearDown network for sandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" successfully" Jan 13 20:34:55.378184 containerd[1550]: time="2025-01-13T20:34:55.378096567Z" level=info msg="StopPodSandbox for \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" returns 
successfully" Jan 13 20:34:55.380395 containerd[1550]: time="2025-01-13T20:34:55.379941361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzl8j,Uid:3c427557-4954-4485-9e6f-15dda2108a10,Namespace:kube-system,Attempt:2,}" Jan 13 20:34:55.382786 kubelet[2793]: I0113 20:34:55.382770 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c" Jan 13 20:34:55.383202 containerd[1550]: time="2025-01-13T20:34:55.383188834Z" level=info msg="StopPodSandbox for \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\"" Jan 13 20:34:55.383933 containerd[1550]: time="2025-01-13T20:34:55.383627938Z" level=info msg="Ensure that sandbox fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c in task-service has been cleanup successfully" Jan 13 20:34:55.384301 containerd[1550]: time="2025-01-13T20:34:55.384200655Z" level=info msg="TearDown network for sandbox \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\" successfully" Jan 13 20:34:55.384352 containerd[1550]: time="2025-01-13T20:34:55.384343106Z" level=info msg="StopPodSandbox for \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\" returns successfully" Jan 13 20:34:55.387442 containerd[1550]: time="2025-01-13T20:34:55.387421934Z" level=info msg="StopPodSandbox for \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\"" Jan 13 20:34:55.387512 containerd[1550]: time="2025-01-13T20:34:55.387477759Z" level=info msg="TearDown network for sandbox \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\" successfully" Jan 13 20:34:55.387512 containerd[1550]: time="2025-01-13T20:34:55.387508562Z" level=info msg="StopPodSandbox for \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\" returns successfully" Jan 13 20:34:55.387871 containerd[1550]: time="2025-01-13T20:34:55.387855683Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-q2k5c,Uid:c2088615-f28b-4452-a74d-fc3302061b14,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:34:55.761817 containerd[1550]: time="2025-01-13T20:34:55.761694305Z" level=error msg="Failed to destroy network for sandbox \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.765211 containerd[1550]: time="2025-01-13T20:34:55.765073900Z" level=error msg="encountered an error cleaning up failed sandbox \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.765211 containerd[1550]: time="2025-01-13T20:34:55.765124332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9m4k5,Uid:b9e9899a-a387-4873-a44f-5621625ff114,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.765916 kubelet[2793]: E0113 20:34:55.765885 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.766372 kubelet[2793]: 
E0113 20:34:55.765931 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9m4k5" Jan 13 20:34:55.766372 kubelet[2793]: E0113 20:34:55.765948 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9m4k5" Jan 13 20:34:55.766372 kubelet[2793]: E0113 20:34:55.765980 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-9m4k5_kube-system(b9e9899a-a387-4873-a44f-5621625ff114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-9m4k5_kube-system(b9e9899a-a387-4873-a44f-5621625ff114)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9m4k5" podUID="b9e9899a-a387-4873-a44f-5621625ff114" Jan 13 20:34:55.767793 containerd[1550]: time="2025-01-13T20:34:55.767614760Z" level=error msg="Failed to destroy network for sandbox \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.768512 containerd[1550]: time="2025-01-13T20:34:55.768495922Z" level=error msg="encountered an error cleaning up failed sandbox \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.768602 containerd[1550]: time="2025-01-13T20:34:55.768589273Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-whhr8,Uid:bf5a440e-479f-4a51-bb48-dec4cff63ae8,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.771914 kubelet[2793]: E0113 20:34:55.771536 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.771914 kubelet[2793]: E0113 20:34:55.771571 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" Jan 13 20:34:55.771914 kubelet[2793]: E0113 20:34:55.771586 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" Jan 13 20:34:55.772032 kubelet[2793]: E0113 20:34:55.771611 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6bf69758-whhr8_calico-apiserver(bf5a440e-479f-4a51-bb48-dec4cff63ae8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b6bf69758-whhr8_calico-apiserver(bf5a440e-479f-4a51-bb48-dec4cff63ae8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" podUID="bf5a440e-479f-4a51-bb48-dec4cff63ae8" Jan 13 20:34:55.785299 containerd[1550]: time="2025-01-13T20:34:55.785264673Z" level=error msg="Failed to destroy network for sandbox \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.786180 containerd[1550]: time="2025-01-13T20:34:55.786161678Z" level=error msg="encountered an error cleaning up failed sandbox 
\"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.786225 containerd[1550]: time="2025-01-13T20:34:55.786205016Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-974644f4b-zs7h8,Uid:e8e472de-0de2-4914-879e-86eb6e7cf184,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.786615 kubelet[2793]: E0113 20:34:55.786359 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.786615 kubelet[2793]: E0113 20:34:55.786396 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" Jan 13 20:34:55.786615 kubelet[2793]: E0113 20:34:55.786410 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" Jan 13 20:34:55.787398 kubelet[2793]: E0113 20:34:55.786442 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-974644f4b-zs7h8_calico-system(e8e472de-0de2-4914-879e-86eb6e7cf184)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-974644f4b-zs7h8_calico-system(e8e472de-0de2-4914-879e-86eb6e7cf184)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" podUID="e8e472de-0de2-4914-879e-86eb6e7cf184" Jan 13 20:34:55.789968 containerd[1550]: time="2025-01-13T20:34:55.789747681Z" level=error msg="Failed to destroy network for sandbox \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.790651 containerd[1550]: time="2025-01-13T20:34:55.790597713Z" level=error msg="encountered an error cleaning up failed sandbox \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 13 20:34:55.790651 containerd[1550]: time="2025-01-13T20:34:55.790635056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9s7kx,Uid:33616460-2f9b-481c-b406-8a3838ed8c9e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.790870 kubelet[2793]: E0113 20:34:55.790736 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.790870 kubelet[2793]: E0113 20:34:55.790783 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9s7kx" Jan 13 20:34:55.790870 kubelet[2793]: E0113 20:34:55.790796 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9s7kx" Jan 13 
20:34:55.790951 kubelet[2793]: E0113 20:34:55.790818 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9s7kx_calico-system(33616460-2f9b-481c-b406-8a3838ed8c9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9s7kx_calico-system(33616460-2f9b-481c-b406-8a3838ed8c9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9s7kx" podUID="33616460-2f9b-481c-b406-8a3838ed8c9e" Jan 13 20:34:55.792016 containerd[1550]: time="2025-01-13T20:34:55.791989986Z" level=error msg="Failed to destroy network for sandbox \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.792890 containerd[1550]: time="2025-01-13T20:34:55.792705142Z" level=error msg="encountered an error cleaning up failed sandbox \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.792890 containerd[1550]: time="2025-01-13T20:34:55.792738488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzl8j,Uid:3c427557-4954-4485-9e6f-15dda2108a10,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.792958 kubelet[2793]: E0113 20:34:55.792828 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.792958 kubelet[2793]: E0113 20:34:55.792852 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wzl8j" Jan 13 20:34:55.792958 kubelet[2793]: E0113 20:34:55.792882 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wzl8j" Jan 13 20:34:55.793035 kubelet[2793]: E0113 20:34:55.792909 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wzl8j_kube-system(3c427557-4954-4485-9e6f-15dda2108a10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wzl8j_kube-system(3c427557-4954-4485-9e6f-15dda2108a10)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wzl8j" podUID="3c427557-4954-4485-9e6f-15dda2108a10" Jan 13 20:34:55.793578 containerd[1550]: time="2025-01-13T20:34:55.793565196Z" level=error msg="Failed to destroy network for sandbox \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.798803 containerd[1550]: time="2025-01-13T20:34:55.793817877Z" level=error msg="encountered an error cleaning up failed sandbox \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.798803 containerd[1550]: time="2025-01-13T20:34:55.793854328Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-q2k5c,Uid:c2088615-f28b-4452-a74d-fc3302061b14,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.798901 kubelet[2793]: E0113 20:34:55.794278 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:55.798901 kubelet[2793]: E0113 20:34:55.794299 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" Jan 13 20:34:55.798901 kubelet[2793]: E0113 20:34:55.794308 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" Jan 13 20:34:55.799117 kubelet[2793]: E0113 20:34:55.794323 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6bf69758-q2k5c_calico-apiserver(c2088615-f28b-4452-a74d-fc3302061b14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b6bf69758-q2k5c_calico-apiserver(c2088615-f28b-4452-a74d-fc3302061b14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" podUID="c2088615-f28b-4452-a74d-fc3302061b14" Jan 13 20:34:55.959390 systemd[1]: run-netns-cni\x2df43069bd\x2d691e\x2db730\x2d8496\x2dfe87c129d6c3.mount: Deactivated successfully. Jan 13 20:34:55.959446 systemd[1]: run-netns-cni\x2d297cfe5d\x2debaa\x2d0b7d\x2ddff1\x2d76c2dc05ac23.mount: Deactivated successfully. Jan 13 20:34:56.385428 kubelet[2793]: I0113 20:34:56.385223 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760" Jan 13 20:34:56.386064 containerd[1550]: time="2025-01-13T20:34:56.386042432Z" level=info msg="StopPodSandbox for \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\"" Jan 13 20:34:56.388415 kubelet[2793]: I0113 20:34:56.388404 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d" Jan 13 20:34:56.389293 containerd[1550]: time="2025-01-13T20:34:56.389252531Z" level=info msg="StopPodSandbox for \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\"" Jan 13 20:34:56.389386 containerd[1550]: time="2025-01-13T20:34:56.389371415Z" level=info msg="Ensure that sandbox 4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d in task-service has been cleanup successfully" Jan 13 20:34:56.389773 containerd[1550]: time="2025-01-13T20:34:56.389481339Z" level=info msg="TearDown network for sandbox \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\" successfully" Jan 13 20:34:56.389773 containerd[1550]: time="2025-01-13T20:34:56.389491367Z" level=info msg="StopPodSandbox for \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\" returns successfully" Jan 13 20:34:56.390918 containerd[1550]: time="2025-01-13T20:34:56.389927237Z" level=info msg="StopPodSandbox for \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\"" Jan 13 
20:34:56.390918 containerd[1550]: time="2025-01-13T20:34:56.389964341Z" level=info msg="TearDown network for sandbox \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\" successfully" Jan 13 20:34:56.390918 containerd[1550]: time="2025-01-13T20:34:56.389969774Z" level=info msg="StopPodSandbox for \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\" returns successfully" Jan 13 20:34:56.390918 containerd[1550]: time="2025-01-13T20:34:56.390136218Z" level=info msg="StopPodSandbox for \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\"" Jan 13 20:34:56.390918 containerd[1550]: time="2025-01-13T20:34:56.390169019Z" level=info msg="TearDown network for sandbox \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\" successfully" Jan 13 20:34:56.390918 containerd[1550]: time="2025-01-13T20:34:56.390174546Z" level=info msg="StopPodSandbox for \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\" returns successfully" Jan 13 20:34:56.390918 containerd[1550]: time="2025-01-13T20:34:56.390420939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-974644f4b-zs7h8,Uid:e8e472de-0de2-4914-879e-86eb6e7cf184,Namespace:calico-system,Attempt:3,}" Jan 13 20:34:56.391841 systemd[1]: run-netns-cni\x2d597ed748\x2d9718\x2d43ee\x2d9b19\x2db5d2bce4f141.mount: Deactivated successfully. Jan 13 20:34:56.415950 kubelet[2793]: I0113 20:34:56.415937 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3" Jan 13 20:34:56.416413 containerd[1550]: time="2025-01-13T20:34:56.416281173Z" level=info msg="Ensure that sandbox 74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760 in task-service has been cleanup successfully" Jan 13 20:34:56.418131 systemd[1]: run-netns-cni\x2d17f4c79a\x2de8bf\x2d68fe\x2d1691\x2da580d648ec56.mount: Deactivated successfully. 
Jan 13 20:34:56.421618 systemd[1]: run-netns-cni\x2d3e1a0b30\x2d2090\x2deb29\x2dd199\x2d672cc56765be.mount: Deactivated successfully. Jan 13 20:34:56.424816 kubelet[2793]: I0113 20:34:56.421935 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.418453703Z" level=info msg="StopPodSandbox for \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\"" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.418557984Z" level=info msg="Ensure that sandbox 64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3 in task-service has been cleanup successfully" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.418646940Z" level=info msg="TearDown network for sandbox \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\" successfully" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.418654640Z" level=info msg="StopPodSandbox for \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\" returns successfully" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.418901199Z" level=info msg="TearDown network for sandbox \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\" successfully" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.418909584Z" level=info msg="StopPodSandbox for \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\" returns successfully" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.419159027Z" level=info msg="StopPodSandbox for \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\"" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.419209920Z" level=info msg="TearDown network for sandbox \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\" successfully" Jan 13 20:34:56.424848 
containerd[1550]: time="2025-01-13T20:34:56.419217269Z" level=info msg="StopPodSandbox for \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\" returns successfully" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.420221246Z" level=info msg="StopPodSandbox for \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\"" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.420263211Z" level=info msg="TearDown network for sandbox \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\" successfully" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.420269412Z" level=info msg="StopPodSandbox for \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\" returns successfully" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.420466487Z" level=info msg="StopPodSandbox for \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\"" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.420503781Z" level=info msg="TearDown network for sandbox \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\" successfully" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.420510920Z" level=info msg="StopPodSandbox for \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\" returns successfully" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.420536141Z" level=info msg="StopPodSandbox for \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\"" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.420565340Z" level=info msg="TearDown network for sandbox \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\" successfully" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.420570183Z" level=info msg="StopPodSandbox for \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\" returns successfully" Jan 13 20:34:56.424848 
containerd[1550]: time="2025-01-13T20:34:56.420847139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9m4k5,Uid:b9e9899a-a387-4873-a44f-5621625ff114,Namespace:kube-system,Attempt:3,}" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.421032132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-whhr8,Uid:bf5a440e-479f-4a51-bb48-dec4cff63ae8,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.422544176Z" level=info msg="StopPodSandbox for \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\"" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.422633819Z" level=info msg="Ensure that sandbox aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3 in task-service has been cleanup successfully" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.422835603Z" level=info msg="TearDown network for sandbox \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\" successfully" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.422843840Z" level=info msg="StopPodSandbox for \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\" returns successfully" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.423271781Z" level=info msg="StopPodSandbox for \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\"" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.423308019Z" level=info msg="TearDown network for sandbox \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\" successfully" Jan 13 20:34:56.424848 containerd[1550]: time="2025-01-13T20:34:56.423313615Z" level=info msg="StopPodSandbox for \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\" returns successfully" Jan 13 20:34:56.425476 systemd[1]: run-netns-cni\x2ddd0f8109\x2d79b2\x2d97d0\x2de84c\x2d361da85148c2.mount: 
Deactivated successfully. Jan 13 20:34:56.433021 containerd[1550]: time="2025-01-13T20:34:56.425587422Z" level=info msg="StopPodSandbox for \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\"" Jan 13 20:34:56.433021 containerd[1550]: time="2025-01-13T20:34:56.425627181Z" level=info msg="TearDown network for sandbox \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\" successfully" Jan 13 20:34:56.433021 containerd[1550]: time="2025-01-13T20:34:56.425633483Z" level=info msg="StopPodSandbox for \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\" returns successfully" Jan 13 20:34:56.433021 containerd[1550]: time="2025-01-13T20:34:56.426179937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-q2k5c,Uid:c2088615-f28b-4452-a74d-fc3302061b14,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:34:56.433021 containerd[1550]: time="2025-01-13T20:34:56.427658542Z" level=info msg="StopPodSandbox for \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\"" Jan 13 20:34:56.433133 kubelet[2793]: I0113 20:34:56.426418 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810" Jan 13 20:34:56.434564 kubelet[2793]: I0113 20:34:56.434521 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34" Jan 13 20:34:56.435010 containerd[1550]: time="2025-01-13T20:34:56.434975034Z" level=info msg="StopPodSandbox for \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\"" Jan 13 20:34:56.435207 containerd[1550]: time="2025-01-13T20:34:56.435194099Z" level=info msg="Ensure that sandbox 477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34 in task-service has been cleanup successfully" Jan 13 20:34:56.435551 containerd[1550]: time="2025-01-13T20:34:56.435414588Z" level=info 
msg="TearDown network for sandbox \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\" successfully" Jan 13 20:34:56.435551 containerd[1550]: time="2025-01-13T20:34:56.435431713Z" level=info msg="StopPodSandbox for \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\" returns successfully" Jan 13 20:34:56.435866 containerd[1550]: time="2025-01-13T20:34:56.435730814Z" level=info msg="StopPodSandbox for \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\"" Jan 13 20:34:56.435866 containerd[1550]: time="2025-01-13T20:34:56.435798213Z" level=info msg="TearDown network for sandbox \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\" successfully" Jan 13 20:34:56.435866 containerd[1550]: time="2025-01-13T20:34:56.435805814Z" level=info msg="StopPodSandbox for \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\" returns successfully" Jan 13 20:34:56.436341 containerd[1550]: time="2025-01-13T20:34:56.436235205Z" level=info msg="StopPodSandbox for \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\"" Jan 13 20:34:56.436341 containerd[1550]: time="2025-01-13T20:34:56.436277804Z" level=info msg="TearDown network for sandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" successfully" Jan 13 20:34:56.436341 containerd[1550]: time="2025-01-13T20:34:56.436283682Z" level=info msg="StopPodSandbox for \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" returns successfully" Jan 13 20:34:56.436907 containerd[1550]: time="2025-01-13T20:34:56.436893467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzl8j,Uid:3c427557-4954-4485-9e6f-15dda2108a10,Namespace:kube-system,Attempt:3,}" Jan 13 20:34:56.439977 containerd[1550]: time="2025-01-13T20:34:56.439959988Z" level=info msg="Ensure that sandbox a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810 in task-service has been cleanup successfully" Jan 13 
20:34:56.440304 containerd[1550]: time="2025-01-13T20:34:56.440170649Z" level=info msg="TearDown network for sandbox \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\" successfully" Jan 13 20:34:56.440304 containerd[1550]: time="2025-01-13T20:34:56.440182752Z" level=info msg="StopPodSandbox for \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\" returns successfully" Jan 13 20:34:56.440445 containerd[1550]: time="2025-01-13T20:34:56.440360826Z" level=info msg="StopPodSandbox for \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\"" Jan 13 20:34:56.440484 containerd[1550]: time="2025-01-13T20:34:56.440462018Z" level=info msg="TearDown network for sandbox \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\" successfully" Jan 13 20:34:56.440484 containerd[1550]: time="2025-01-13T20:34:56.440469348Z" level=info msg="StopPodSandbox for \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\" returns successfully" Jan 13 20:34:56.440616 containerd[1550]: time="2025-01-13T20:34:56.440604510Z" level=info msg="StopPodSandbox for \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\"" Jan 13 20:34:56.440782 containerd[1550]: time="2025-01-13T20:34:56.440638414Z" level=info msg="TearDown network for sandbox \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\" successfully" Jan 13 20:34:56.440782 containerd[1550]: time="2025-01-13T20:34:56.440643550Z" level=info msg="StopPodSandbox for \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\" returns successfully" Jan 13 20:34:56.441278 containerd[1550]: time="2025-01-13T20:34:56.441133029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9s7kx,Uid:33616460-2f9b-481c-b406-8a3838ed8c9e,Namespace:calico-system,Attempt:3,}" Jan 13 20:34:56.956821 systemd[1]: run-netns-cni\x2d2c6de24b\x2d4620\x2d55a3\x2de181\x2db4c354e8dba2.mount: Deactivated successfully. 
Jan 13 20:34:56.956978 systemd[1]: run-netns-cni\x2d8953a35a\x2d4790\x2de100\x2d9aa6\x2d68f9b38368c7.mount: Deactivated successfully. Jan 13 20:34:57.518160 containerd[1550]: time="2025-01-13T20:34:57.518111003Z" level=error msg="Failed to destroy network for sandbox \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.518708 containerd[1550]: time="2025-01-13T20:34:57.518532974Z" level=error msg="encountered an error cleaning up failed sandbox \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.518708 containerd[1550]: time="2025-01-13T20:34:57.518570580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9m4k5,Uid:b9e9899a-a387-4873-a44f-5621625ff114,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.518925 kubelet[2793]: E0113 20:34:57.518678 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.518925 kubelet[2793]: E0113 
20:34:57.518717 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9m4k5" Jan 13 20:34:57.518925 kubelet[2793]: E0113 20:34:57.518732 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9m4k5" Jan 13 20:34:57.519093 kubelet[2793]: E0113 20:34:57.518802 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-9m4k5_kube-system(b9e9899a-a387-4873-a44f-5621625ff114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-9m4k5_kube-system(b9e9899a-a387-4873-a44f-5621625ff114)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9m4k5" podUID="b9e9899a-a387-4873-a44f-5621625ff114" Jan 13 20:34:57.599172 containerd[1550]: time="2025-01-13T20:34:57.599143151Z" level=error msg="Failed to destroy network for sandbox \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.600466 containerd[1550]: time="2025-01-13T20:34:57.599329487Z" level=error msg="encountered an error cleaning up failed sandbox \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.600466 containerd[1550]: time="2025-01-13T20:34:57.599372035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-974644f4b-zs7h8,Uid:e8e472de-0de2-4914-879e-86eb6e7cf184,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.600650 kubelet[2793]: E0113 20:34:57.599499 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.600650 kubelet[2793]: E0113 20:34:57.599533 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" Jan 13 20:34:57.600650 kubelet[2793]: E0113 20:34:57.599546 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" Jan 13 20:34:57.600717 kubelet[2793]: E0113 20:34:57.599576 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-974644f4b-zs7h8_calico-system(e8e472de-0de2-4914-879e-86eb6e7cf184)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-974644f4b-zs7h8_calico-system(e8e472de-0de2-4914-879e-86eb6e7cf184)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" podUID="e8e472de-0de2-4914-879e-86eb6e7cf184" Jan 13 20:34:57.679267 containerd[1550]: time="2025-01-13T20:34:57.679180699Z" level=error msg="Failed to destroy network for sandbox \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.727319 containerd[1550]: time="2025-01-13T20:34:57.679533882Z" level=error msg="encountered an error cleaning up failed sandbox 
\"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.727319 containerd[1550]: time="2025-01-13T20:34:57.679570746Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-whhr8,Uid:bf5a440e-479f-4a51-bb48-dec4cff63ae8,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.727319 containerd[1550]: time="2025-01-13T20:34:57.681439917Z" level=error msg="Failed to destroy network for sandbox \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.727319 containerd[1550]: time="2025-01-13T20:34:57.682197909Z" level=error msg="encountered an error cleaning up failed sandbox \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.727319 containerd[1550]: time="2025-01-13T20:34:57.682228831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzl8j,Uid:3c427557-4954-4485-9e6f-15dda2108a10,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.727499 kubelet[2793]: E0113 20:34:57.679823 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.727499 kubelet[2793]: E0113 20:34:57.679861 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" Jan 13 20:34:57.727499 kubelet[2793]: E0113 20:34:57.679874 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" Jan 13 20:34:57.727587 kubelet[2793]: E0113 20:34:57.680678 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6bf69758-whhr8_calico-apiserver(bf5a440e-479f-4a51-bb48-dec4cff63ae8)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-6b6bf69758-whhr8_calico-apiserver(bf5a440e-479f-4a51-bb48-dec4cff63ae8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" podUID="bf5a440e-479f-4a51-bb48-dec4cff63ae8" Jan 13 20:34:57.727587 kubelet[2793]: E0113 20:34:57.682843 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.727587 kubelet[2793]: E0113 20:34:57.682862 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wzl8j" Jan 13 20:34:57.727830 kubelet[2793]: E0113 20:34:57.682871 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wzl8j" Jan 13 20:34:57.727830 
kubelet[2793]: E0113 20:34:57.682888 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wzl8j_kube-system(3c427557-4954-4485-9e6f-15dda2108a10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wzl8j_kube-system(3c427557-4954-4485-9e6f-15dda2108a10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wzl8j" podUID="3c427557-4954-4485-9e6f-15dda2108a10" Jan 13 20:34:57.744582 containerd[1550]: time="2025-01-13T20:34:57.744304831Z" level=error msg="Failed to destroy network for sandbox \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.744952 containerd[1550]: time="2025-01-13T20:34:57.744891323Z" level=error msg="encountered an error cleaning up failed sandbox \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.744952 containerd[1550]: time="2025-01-13T20:34:57.744925644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-q2k5c,Uid:c2088615-f28b-4452-a74d-fc3302061b14,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.745202 kubelet[2793]: E0113 20:34:57.745126 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.745202 kubelet[2793]: E0113 20:34:57.745159 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" Jan 13 20:34:57.745202 kubelet[2793]: E0113 20:34:57.745175 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" Jan 13 20:34:57.745292 kubelet[2793]: E0113 20:34:57.745203 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6bf69758-q2k5c_calico-apiserver(c2088615-f28b-4452-a74d-fc3302061b14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6b6bf69758-q2k5c_calico-apiserver(c2088615-f28b-4452-a74d-fc3302061b14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" podUID="c2088615-f28b-4452-a74d-fc3302061b14" Jan 13 20:34:57.827161 containerd[1550]: time="2025-01-13T20:34:57.827135603Z" level=error msg="Failed to destroy network for sandbox \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.827407 containerd[1550]: time="2025-01-13T20:34:57.827394662Z" level=error msg="encountered an error cleaning up failed sandbox \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.827481 containerd[1550]: time="2025-01-13T20:34:57.827468644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9s7kx,Uid:33616460-2f9b-481c-b406-8a3838ed8c9e,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.827886 kubelet[2793]: E0113 20:34:57.827858 2793 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.827926 kubelet[2793]: E0113 20:34:57.827898 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9s7kx" Jan 13 20:34:57.827926 kubelet[2793]: E0113 20:34:57.827911 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9s7kx" Jan 13 20:34:57.827964 kubelet[2793]: E0113 20:34:57.827936 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9s7kx_calico-system(33616460-2f9b-481c-b406-8a3838ed8c9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9s7kx_calico-system(33616460-2f9b-481c-b406-8a3838ed8c9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9s7kx" podUID="33616460-2f9b-481c-b406-8a3838ed8c9e" Jan 13 20:34:57.958327 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082-shm.mount: Deactivated successfully. Jan 13 20:34:57.958388 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46-shm.mount: Deactivated successfully. Jan 13 20:34:58.000184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428097685.mount: Deactivated successfully. Jan 13 20:34:58.441088 kubelet[2793]: I0113 20:34:58.441053 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3" Jan 13 20:34:58.445788 containerd[1550]: time="2025-01-13T20:34:58.441603261Z" level=info msg="StopPodSandbox for \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\"" Jan 13 20:34:58.453988 containerd[1550]: time="2025-01-13T20:34:58.453924080Z" level=info msg="Ensure that sandbox 33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3 in task-service has been cleanup successfully" Jan 13 20:34:58.459459 containerd[1550]: time="2025-01-13T20:34:58.454092426Z" level=info msg="TearDown network for sandbox \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\" successfully" Jan 13 20:34:58.459459 containerd[1550]: time="2025-01-13T20:34:58.454105161Z" level=info msg="StopPodSandbox for \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\" returns successfully" Jan 13 20:34:58.459459 containerd[1550]: time="2025-01-13T20:34:58.456123344Z" level=info msg="StopPodSandbox for \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\"" Jan 13 20:34:58.459459 containerd[1550]: time="2025-01-13T20:34:58.456176969Z" level=info msg="TearDown network for sandbox 
\"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\" successfully" Jan 13 20:34:58.459459 containerd[1550]: time="2025-01-13T20:34:58.456185104Z" level=info msg="StopPodSandbox for \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\" returns successfully" Jan 13 20:34:58.459459 containerd[1550]: time="2025-01-13T20:34:58.456331356Z" level=info msg="StopPodSandbox for \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\"" Jan 13 20:34:58.459459 containerd[1550]: time="2025-01-13T20:34:58.456380435Z" level=info msg="TearDown network for sandbox \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\" successfully" Jan 13 20:34:58.459459 containerd[1550]: time="2025-01-13T20:34:58.456388570Z" level=info msg="StopPodSandbox for \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\" returns successfully" Jan 13 20:34:58.459459 containerd[1550]: time="2025-01-13T20:34:58.456507220Z" level=info msg="StopPodSandbox for \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\"" Jan 13 20:34:58.459459 containerd[1550]: time="2025-01-13T20:34:58.456552053Z" level=info msg="TearDown network for sandbox \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\" successfully" Jan 13 20:34:58.459459 containerd[1550]: time="2025-01-13T20:34:58.456559006Z" level=info msg="StopPodSandbox for \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\" returns successfully" Jan 13 20:34:58.459459 containerd[1550]: time="2025-01-13T20:34:58.456794930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-whhr8,Uid:bf5a440e-479f-4a51-bb48-dec4cff63ae8,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:34:58.459459 containerd[1550]: time="2025-01-13T20:34:58.458864541Z" level=info msg="StopPodSandbox for \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\"" Jan 13 20:34:58.459746 kubelet[2793]: I0113 20:34:58.457844 2793 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082" Jan 13 20:34:58.455781 systemd[1]: run-netns-cni\x2dc8e3c9f0\x2d85d0\x2d2b8c\x2d1681\x2d1feaf963252b.mount: Deactivated successfully. Jan 13 20:34:58.492594 containerd[1550]: time="2025-01-13T20:34:58.492506166Z" level=info msg="Ensure that sandbox b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082 in task-service has been cleanup successfully" Jan 13 20:34:58.494221 systemd[1]: run-netns-cni\x2d41477156\x2d7e75\x2ddf56\x2d65f2\x2d99e4e9a468a5.mount: Deactivated successfully. Jan 13 20:34:58.501563 containerd[1550]: time="2025-01-13T20:34:58.494417180Z" level=info msg="TearDown network for sandbox \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\" successfully" Jan 13 20:34:58.501563 containerd[1550]: time="2025-01-13T20:34:58.494430720Z" level=info msg="StopPodSandbox for \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\" returns successfully" Jan 13 20:34:58.501563 containerd[1550]: time="2025-01-13T20:34:58.494819798Z" level=info msg="StopPodSandbox for \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\"" Jan 13 20:34:58.501563 containerd[1550]: time="2025-01-13T20:34:58.494868198Z" level=info msg="TearDown network for sandbox \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\" successfully" Jan 13 20:34:58.501563 containerd[1550]: time="2025-01-13T20:34:58.494876126Z" level=info msg="StopPodSandbox for \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\" returns successfully" Jan 13 20:34:58.501563 containerd[1550]: time="2025-01-13T20:34:58.495074298Z" level=info msg="StopPodSandbox for \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\"" Jan 13 20:34:58.501563 containerd[1550]: time="2025-01-13T20:34:58.495175810Z" level=info msg="TearDown network for sandbox 
\"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\" successfully" Jan 13 20:34:58.501563 containerd[1550]: time="2025-01-13T20:34:58.495184537Z" level=info msg="StopPodSandbox for \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\" returns successfully" Jan 13 20:34:58.501563 containerd[1550]: time="2025-01-13T20:34:58.495735805Z" level=info msg="StopPodSandbox for \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\"" Jan 13 20:34:58.501563 containerd[1550]: time="2025-01-13T20:34:58.495803115Z" level=info msg="TearDown network for sandbox \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\" successfully" Jan 13 20:34:58.501563 containerd[1550]: time="2025-01-13T20:34:58.495812909Z" level=info msg="StopPodSandbox for \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\" returns successfully" Jan 13 20:34:58.501563 containerd[1550]: time="2025-01-13T20:34:58.496495861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-974644f4b-zs7h8,Uid:e8e472de-0de2-4914-879e-86eb6e7cf184,Namespace:calico-system,Attempt:4,}" Jan 13 20:34:58.501563 containerd[1550]: time="2025-01-13T20:34:58.496962499Z" level=info msg="StopPodSandbox for \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\"" Jan 13 20:34:58.501970 kubelet[2793]: I0113 20:34:58.496340 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46" Jan 13 20:34:58.514989 containerd[1550]: time="2025-01-13T20:34:58.514937063Z" level=info msg="Ensure that sandbox a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46 in task-service has been cleanup successfully" Jan 13 20:34:58.516627 containerd[1550]: time="2025-01-13T20:34:58.515109308Z" level=info msg="TearDown network for sandbox \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\" successfully" Jan 13 20:34:58.516627 
containerd[1550]: time="2025-01-13T20:34:58.515123672Z" level=info msg="StopPodSandbox for \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\" returns successfully" Jan 13 20:34:58.516651 systemd[1]: run-netns-cni\x2dfeee58b5\x2d85da\x2dc7e2\x2dd9d0\x2d1997f5c8ec23.mount: Deactivated successfully. Jan 13 20:34:58.517271 containerd[1550]: time="2025-01-13T20:34:58.517080237Z" level=info msg="StopPodSandbox for \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\"" Jan 13 20:34:58.517271 containerd[1550]: time="2025-01-13T20:34:58.517193514Z" level=info msg="TearDown network for sandbox \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\" successfully" Jan 13 20:34:58.517271 containerd[1550]: time="2025-01-13T20:34:58.517203528Z" level=info msg="StopPodSandbox for \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\" returns successfully" Jan 13 20:34:58.517727 containerd[1550]: time="2025-01-13T20:34:58.517707619Z" level=info msg="StopPodSandbox for \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\"" Jan 13 20:34:58.517790 containerd[1550]: time="2025-01-13T20:34:58.517773889Z" level=info msg="TearDown network for sandbox \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\" successfully" Jan 13 20:34:58.517790 containerd[1550]: time="2025-01-13T20:34:58.517786272Z" level=info msg="StopPodSandbox for \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\" returns successfully" Jan 13 20:34:58.518073 containerd[1550]: time="2025-01-13T20:34:58.518053726Z" level=info msg="StopPodSandbox for \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\"" Jan 13 20:34:58.518119 containerd[1550]: time="2025-01-13T20:34:58.518100525Z" level=info msg="TearDown network for sandbox \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\" successfully" Jan 13 20:34:58.518119 containerd[1550]: time="2025-01-13T20:34:58.518108027Z" level=info 
msg="StopPodSandbox for \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\" returns successfully" Jan 13 20:34:58.518873 containerd[1550]: time="2025-01-13T20:34:58.518856174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9m4k5,Uid:b9e9899a-a387-4873-a44f-5621625ff114,Namespace:kube-system,Attempt:4,}" Jan 13 20:34:58.519210 kubelet[2793]: I0113 20:34:58.519191 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd" Jan 13 20:34:58.523413 containerd[1550]: time="2025-01-13T20:34:58.523175794Z" level=info msg="StopPodSandbox for \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\"" Jan 13 20:34:58.523413 containerd[1550]: time="2025-01-13T20:34:58.523320814Z" level=info msg="Ensure that sandbox 7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd in task-service has been cleanup successfully" Jan 13 20:34:58.523678 containerd[1550]: time="2025-01-13T20:34:58.523543961Z" level=info msg="TearDown network for sandbox \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\" successfully" Jan 13 20:34:58.523678 containerd[1550]: time="2025-01-13T20:34:58.523556284Z" level=info msg="StopPodSandbox for \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\" returns successfully" Jan 13 20:34:58.524076 containerd[1550]: time="2025-01-13T20:34:58.523985450Z" level=info msg="StopPodSandbox for \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\"" Jan 13 20:34:58.524076 containerd[1550]: time="2025-01-13T20:34:58.524035358Z" level=info msg="TearDown network for sandbox \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\" successfully" Jan 13 20:34:58.524076 containerd[1550]: time="2025-01-13T20:34:58.524042830Z" level=info msg="StopPodSandbox for \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\" returns successfully" Jan 13 
20:34:58.524489 containerd[1550]: time="2025-01-13T20:34:58.524404218Z" level=info msg="StopPodSandbox for \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\"" Jan 13 20:34:58.524489 containerd[1550]: time="2025-01-13T20:34:58.524450220Z" level=info msg="TearDown network for sandbox \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\" successfully" Jan 13 20:34:58.524489 containerd[1550]: time="2025-01-13T20:34:58.524458762Z" level=info msg="StopPodSandbox for \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\" returns successfully" Jan 13 20:34:58.524834 containerd[1550]: time="2025-01-13T20:34:58.524663139Z" level=info msg="StopPodSandbox for \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\"" Jan 13 20:34:58.524834 containerd[1550]: time="2025-01-13T20:34:58.524706668Z" level=info msg="TearDown network for sandbox \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\" successfully" Jan 13 20:34:58.524834 containerd[1550]: time="2025-01-13T20:34:58.524713749Z" level=info msg="StopPodSandbox for \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\" returns successfully" Jan 13 20:34:58.525238 containerd[1550]: time="2025-01-13T20:34:58.525094583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-q2k5c,Uid:c2088615-f28b-4452-a74d-fc3302061b14,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:34:58.525532 kubelet[2793]: I0113 20:34:58.525511 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24" Jan 13 20:34:58.526783 containerd[1550]: time="2025-01-13T20:34:58.526571339Z" level=info msg="StopPodSandbox for \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\"" Jan 13 20:34:58.526783 containerd[1550]: time="2025-01-13T20:34:58.526683678Z" level=info msg="Ensure that sandbox 
7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24 in task-service has been cleanup successfully" Jan 13 20:34:58.526912 containerd[1550]: time="2025-01-13T20:34:58.526901818Z" level=info msg="TearDown network for sandbox \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\" successfully" Jan 13 20:34:58.527426 containerd[1550]: time="2025-01-13T20:34:58.526941766Z" level=info msg="StopPodSandbox for \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\" returns successfully" Jan 13 20:34:58.527426 containerd[1550]: time="2025-01-13T20:34:58.527142906Z" level=info msg="StopPodSandbox for \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\"" Jan 13 20:34:58.527426 containerd[1550]: time="2025-01-13T20:34:58.527193878Z" level=info msg="TearDown network for sandbox \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\" successfully" Jan 13 20:34:58.527426 containerd[1550]: time="2025-01-13T20:34:58.527200705Z" level=info msg="StopPodSandbox for \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\" returns successfully" Jan 13 20:34:58.527674 containerd[1550]: time="2025-01-13T20:34:58.527661954Z" level=info msg="StopPodSandbox for \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\"" Jan 13 20:34:58.528252 containerd[1550]: time="2025-01-13T20:34:58.527740948Z" level=info msg="TearDown network for sandbox \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\" successfully" Jan 13 20:34:58.528252 containerd[1550]: time="2025-01-13T20:34:58.528083857Z" level=info msg="StopPodSandbox for \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\" returns successfully" Jan 13 20:34:58.551064 containerd[1550]: time="2025-01-13T20:34:58.551044235Z" level=info msg="StopPodSandbox for \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\"" Jan 13 20:34:58.551214 containerd[1550]: time="2025-01-13T20:34:58.551173255Z" level=info 
msg="TearDown network for sandbox \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\" successfully" Jan 13 20:34:58.551214 containerd[1550]: time="2025-01-13T20:34:58.551182256Z" level=info msg="StopPodSandbox for \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\" returns successfully" Jan 13 20:34:58.551843 kubelet[2793]: I0113 20:34:58.551826 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68" Jan 13 20:34:58.552080 containerd[1550]: time="2025-01-13T20:34:58.551939732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9s7kx,Uid:33616460-2f9b-481c-b406-8a3838ed8c9e,Namespace:calico-system,Attempt:4,}" Jan 13 20:34:58.552303 containerd[1550]: time="2025-01-13T20:34:58.552292417Z" level=info msg="StopPodSandbox for \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\"" Jan 13 20:34:58.552469 containerd[1550]: time="2025-01-13T20:34:58.552445330Z" level=info msg="Ensure that sandbox 6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68 in task-service has been cleanup successfully" Jan 13 20:34:58.552665 containerd[1550]: time="2025-01-13T20:34:58.552653803Z" level=info msg="TearDown network for sandbox \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\" successfully" Jan 13 20:34:58.552704 containerd[1550]: time="2025-01-13T20:34:58.552696594Z" level=info msg="StopPodSandbox for \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\" returns successfully" Jan 13 20:34:58.553061 containerd[1550]: time="2025-01-13T20:34:58.552983069Z" level=info msg="StopPodSandbox for \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\"" Jan 13 20:34:58.553061 containerd[1550]: time="2025-01-13T20:34:58.553023362Z" level=info msg="TearDown network for sandbox \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\" successfully" Jan 13 
20:34:58.553061 containerd[1550]: time="2025-01-13T20:34:58.553029405Z" level=info msg="StopPodSandbox for \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\" returns successfully" Jan 13 20:34:58.553298 containerd[1550]: time="2025-01-13T20:34:58.553226241Z" level=info msg="StopPodSandbox for \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\"" Jan 13 20:34:58.553298 containerd[1550]: time="2025-01-13T20:34:58.553265543Z" level=info msg="TearDown network for sandbox \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\" successfully" Jan 13 20:34:58.553298 containerd[1550]: time="2025-01-13T20:34:58.553271599Z" level=info msg="StopPodSandbox for \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\" returns successfully" Jan 13 20:34:58.553464 containerd[1550]: time="2025-01-13T20:34:58.553397442Z" level=info msg="StopPodSandbox for \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\"" Jan 13 20:34:58.553464 containerd[1550]: time="2025-01-13T20:34:58.553432915Z" level=info msg="TearDown network for sandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" successfully" Jan 13 20:34:58.553464 containerd[1550]: time="2025-01-13T20:34:58.553438998Z" level=info msg="StopPodSandbox for \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" returns successfully" Jan 13 20:34:58.553776 containerd[1550]: time="2025-01-13T20:34:58.553658029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzl8j,Uid:3c427557-4954-4485-9e6f-15dda2108a10,Namespace:kube-system,Attempt:4,}" Jan 13 20:34:58.734856 containerd[1550]: time="2025-01-13T20:34:58.734793322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:58.763929 containerd[1550]: time="2025-01-13T20:34:58.763463839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: 
active requests=0, bytes read=142742010" Jan 13 20:34:58.767651 containerd[1550]: time="2025-01-13T20:34:58.767556503Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:58.781157 containerd[1550]: time="2025-01-13T20:34:58.779642090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:58.808984 containerd[1550]: time="2025-01-13T20:34:58.808891940Z" level=error msg="Failed to destroy network for sandbox \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.809981 containerd[1550]: time="2025-01-13T20:34:58.809961372Z" level=error msg="encountered an error cleaning up failed sandbox \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.810084 containerd[1550]: time="2025-01-13T20:34:58.810071970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-whhr8,Uid:bf5a440e-479f-4a51-bb48-dec4cff63ae8,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.810410 
kubelet[2793]: E0113 20:34:58.810283 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.810457 kubelet[2793]: E0113 20:34:58.810426 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" Jan 13 20:34:58.810457 kubelet[2793]: E0113 20:34:58.810442 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" Jan 13 20:34:58.810628 kubelet[2793]: E0113 20:34:58.810565 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6bf69758-whhr8_calico-apiserver(bf5a440e-479f-4a51-bb48-dec4cff63ae8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b6bf69758-whhr8_calico-apiserver(bf5a440e-479f-4a51-bb48-dec4cff63ae8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" podUID="bf5a440e-479f-4a51-bb48-dec4cff63ae8" Jan 13 20:34:58.814849 containerd[1550]: time="2025-01-13T20:34:58.814824227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.46184668s" Jan 13 20:34:58.814961 containerd[1550]: time="2025-01-13T20:34:58.814949352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 20:34:58.837363 containerd[1550]: time="2025-01-13T20:34:58.837282569Z" level=error msg="Failed to destroy network for sandbox \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.837677 containerd[1550]: time="2025-01-13T20:34:58.837588435Z" level=error msg="encountered an error cleaning up failed sandbox \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.837677 containerd[1550]: time="2025-01-13T20:34:58.837624627Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-974644f4b-zs7h8,Uid:e8e472de-0de2-4914-879e-86eb6e7cf184,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.837790 kubelet[2793]: E0113 20:34:58.837765 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.837833 kubelet[2793]: E0113 20:34:58.837805 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" Jan 13 20:34:58.837833 kubelet[2793]: E0113 20:34:58.837819 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" Jan 13 20:34:58.837870 kubelet[2793]: E0113 20:34:58.837843 2793 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-974644f4b-zs7h8_calico-system(e8e472de-0de2-4914-879e-86eb6e7cf184)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-974644f4b-zs7h8_calico-system(e8e472de-0de2-4914-879e-86eb6e7cf184)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" podUID="e8e472de-0de2-4914-879e-86eb6e7cf184" Jan 13 20:34:58.871378 containerd[1550]: time="2025-01-13T20:34:58.871066534Z" level=error msg="Failed to destroy network for sandbox \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.871378 containerd[1550]: time="2025-01-13T20:34:58.871282965Z" level=error msg="encountered an error cleaning up failed sandbox \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.871378 containerd[1550]: time="2025-01-13T20:34:58.871319539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzl8j,Uid:3c427557-4954-4485-9e6f-15dda2108a10,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.871506 kubelet[2793]: E0113 20:34:58.871458 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.871506 kubelet[2793]: E0113 20:34:58.871496 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wzl8j" Jan 13 20:34:58.871725 kubelet[2793]: E0113 20:34:58.871515 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wzl8j" Jan 13 20:34:58.871782 kubelet[2793]: E0113 20:34:58.871743 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wzl8j_kube-system(3c427557-4954-4485-9e6f-15dda2108a10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wzl8j_kube-system(3c427557-4954-4485-9e6f-15dda2108a10)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wzl8j" podUID="3c427557-4954-4485-9e6f-15dda2108a10" Jan 13 20:34:58.879193 containerd[1550]: time="2025-01-13T20:34:58.879118481Z" level=error msg="Failed to destroy network for sandbox \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.880297 containerd[1550]: time="2025-01-13T20:34:58.880176480Z" level=error msg="encountered an error cleaning up failed sandbox \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.880380 containerd[1550]: time="2025-01-13T20:34:58.880367668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9m4k5,Uid:b9e9899a-a387-4873-a44f-5621625ff114,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.880649 kubelet[2793]: E0113 20:34:58.880623 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.880689 kubelet[2793]: E0113 20:34:58.880657 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9m4k5" Jan 13 20:34:58.880689 kubelet[2793]: E0113 20:34:58.880672 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9m4k5" Jan 13 20:34:58.880732 kubelet[2793]: E0113 20:34:58.880695 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-9m4k5_kube-system(b9e9899a-a387-4873-a44f-5621625ff114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-9m4k5_kube-system(b9e9899a-a387-4873-a44f-5621625ff114)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9m4k5" podUID="b9e9899a-a387-4873-a44f-5621625ff114" Jan 13 20:34:58.888649 containerd[1550]: 
time="2025-01-13T20:34:58.888545662Z" level=info msg="CreateContainer within sandbox \"8d5b75b11619e951fee45353b5533f356a7e5db64338aa368d470ba165c40682\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:34:58.891574 containerd[1550]: time="2025-01-13T20:34:58.891550710Z" level=error msg="Failed to destroy network for sandbox \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.892026 containerd[1550]: time="2025-01-13T20:34:58.891847402Z" level=error msg="encountered an error cleaning up failed sandbox \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.892026 containerd[1550]: time="2025-01-13T20:34:58.891881906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-q2k5c,Uid:c2088615-f28b-4452-a74d-fc3302061b14,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.893568 kubelet[2793]: E0113 20:34:58.892184 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.893568 kubelet[2793]: E0113 20:34:58.892228 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" Jan 13 20:34:58.893568 kubelet[2793]: E0113 20:34:58.892241 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" Jan 13 20:34:58.894654 kubelet[2793]: E0113 20:34:58.892269 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6bf69758-q2k5c_calico-apiserver(c2088615-f28b-4452-a74d-fc3302061b14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b6bf69758-q2k5c_calico-apiserver(c2088615-f28b-4452-a74d-fc3302061b14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" podUID="c2088615-f28b-4452-a74d-fc3302061b14" Jan 13 20:34:58.895501 containerd[1550]: time="2025-01-13T20:34:58.895475986Z" level=error 
msg="Failed to destroy network for sandbox \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.895747 containerd[1550]: time="2025-01-13T20:34:58.895735260Z" level=error msg="encountered an error cleaning up failed sandbox \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.895876 containerd[1550]: time="2025-01-13T20:34:58.895863919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9s7kx,Uid:33616460-2f9b-481c-b406-8a3838ed8c9e,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.896043 kubelet[2793]: E0113 20:34:58.896027 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.896111 kubelet[2793]: E0113 20:34:58.896102 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9s7kx" Jan 13 20:34:58.896163 kubelet[2793]: E0113 20:34:58.896145 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9s7kx" Jan 13 20:34:58.896228 kubelet[2793]: E0113 20:34:58.896207 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9s7kx_calico-system(33616460-2f9b-481c-b406-8a3838ed8c9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9s7kx_calico-system(33616460-2f9b-481c-b406-8a3838ed8c9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9s7kx" podUID="33616460-2f9b-481c-b406-8a3838ed8c9e" Jan 13 20:34:58.921844 containerd[1550]: time="2025-01-13T20:34:58.921818954Z" level=info msg="CreateContainer within sandbox \"8d5b75b11619e951fee45353b5533f356a7e5db64338aa368d470ba165c40682\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1af23b7670b3472542c7f3f0e8e95c02298eb1fdb4f6e18797fa59c73505a69d\"" Jan 13 20:34:58.925299 containerd[1550]: time="2025-01-13T20:34:58.925274596Z" level=info msg="StartContainer for 
\"1af23b7670b3472542c7f3f0e8e95c02298eb1fdb4f6e18797fa59c73505a69d\"" Jan 13 20:34:58.959847 systemd[1]: run-netns-cni\x2df93835f1\x2d638d\x2d0ff2\x2dfa1e\x2d445c96e99e32.mount: Deactivated successfully. Jan 13 20:34:58.959903 systemd[1]: run-netns-cni\x2d9e80a965\x2d7ea0\x2dd2df\x2dab69\x2d83bde37b1ef2.mount: Deactivated successfully. Jan 13 20:34:58.959938 systemd[1]: run-netns-cni\x2dd29156a0\x2d77c1\x2d3274\x2d7aae\x2d226a79d77696.mount: Deactivated successfully. Jan 13 20:34:59.032923 systemd[1]: Started cri-containerd-1af23b7670b3472542c7f3f0e8e95c02298eb1fdb4f6e18797fa59c73505a69d.scope - libcontainer container 1af23b7670b3472542c7f3f0e8e95c02298eb1fdb4f6e18797fa59c73505a69d. Jan 13 20:34:59.057453 containerd[1550]: time="2025-01-13T20:34:59.057384708Z" level=info msg="StartContainer for \"1af23b7670b3472542c7f3f0e8e95c02298eb1fdb4f6e18797fa59c73505a69d\" returns successfully" Jan 13 20:34:59.295674 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:34:59.302325 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 13 20:34:59.608218 kubelet[2793]: I0113 20:34:59.608194 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d" Jan 13 20:34:59.608700 containerd[1550]: time="2025-01-13T20:34:59.608667376Z" level=info msg="StopPodSandbox for \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\"" Jan 13 20:34:59.611491 containerd[1550]: time="2025-01-13T20:34:59.608828913Z" level=info msg="Ensure that sandbox d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d in task-service has been cleanup successfully" Jan 13 20:34:59.611491 containerd[1550]: time="2025-01-13T20:34:59.610926440Z" level=info msg="TearDown network for sandbox \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\" successfully" Jan 13 20:34:59.611491 containerd[1550]: time="2025-01-13T20:34:59.610940880Z" level=info msg="StopPodSandbox for \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\" returns successfully" Jan 13 20:34:59.610666 systemd[1]: run-netns-cni\x2db23895b0\x2df747\x2d9195\x2d6dd2\x2d369766b9e9f5.mount: Deactivated successfully. 
Jan 13 20:34:59.612399 containerd[1550]: time="2025-01-13T20:34:59.612377023Z" level=info msg="StopPodSandbox for \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\"" Jan 13 20:34:59.612447 containerd[1550]: time="2025-01-13T20:34:59.612434562Z" level=info msg="TearDown network for sandbox \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\" successfully" Jan 13 20:34:59.612447 containerd[1550]: time="2025-01-13T20:34:59.612445495Z" level=info msg="StopPodSandbox for \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\" returns successfully" Jan 13 20:34:59.613129 containerd[1550]: time="2025-01-13T20:34:59.612709399Z" level=info msg="StopPodSandbox for \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\"" Jan 13 20:34:59.613186 containerd[1550]: time="2025-01-13T20:34:59.613169647Z" level=info msg="TearDown network for sandbox \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\" successfully" Jan 13 20:34:59.613339 containerd[1550]: time="2025-01-13T20:34:59.613323730Z" level=info msg="StopPodSandbox for \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\" returns successfully" Jan 13 20:34:59.613642 containerd[1550]: time="2025-01-13T20:34:59.613623343Z" level=info msg="StopPodSandbox for \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\"" Jan 13 20:34:59.613717 containerd[1550]: time="2025-01-13T20:34:59.613679962Z" level=info msg="TearDown network for sandbox \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\" successfully" Jan 13 20:34:59.613717 containerd[1550]: time="2025-01-13T20:34:59.613693258Z" level=info msg="StopPodSandbox for \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\" returns successfully" Jan 13 20:34:59.613986 containerd[1550]: time="2025-01-13T20:34:59.613934480Z" level=info msg="StopPodSandbox for \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\"" Jan 13 20:34:59.614077 
containerd[1550]: time="2025-01-13T20:34:59.613993088Z" level=info msg="TearDown network for sandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" successfully" Jan 13 20:34:59.614077 containerd[1550]: time="2025-01-13T20:34:59.614001915Z" level=info msg="StopPodSandbox for \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" returns successfully" Jan 13 20:34:59.614789 containerd[1550]: time="2025-01-13T20:34:59.614324149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzl8j,Uid:3c427557-4954-4485-9e6f-15dda2108a10,Namespace:kube-system,Attempt:5,}" Jan 13 20:34:59.659965 kubelet[2793]: I0113 20:34:59.659258 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9" Jan 13 20:34:59.660692 containerd[1550]: time="2025-01-13T20:34:59.660245552Z" level=info msg="StopPodSandbox for \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\"" Jan 13 20:34:59.660692 containerd[1550]: time="2025-01-13T20:34:59.660413801Z" level=info msg="Ensure that sandbox 3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9 in task-service has been cleanup successfully" Jan 13 20:34:59.661146 containerd[1550]: time="2025-01-13T20:34:59.661132885Z" level=info msg="TearDown network for sandbox \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\" successfully" Jan 13 20:34:59.661146 containerd[1550]: time="2025-01-13T20:34:59.661143524Z" level=info msg="StopPodSandbox for \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\" returns successfully" Jan 13 20:34:59.661368 containerd[1550]: time="2025-01-13T20:34:59.661286888Z" level=info msg="StopPodSandbox for \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\"" Jan 13 20:34:59.662153 containerd[1550]: time="2025-01-13T20:34:59.661411482Z" level=info msg="TearDown network for sandbox 
\"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\" successfully" Jan 13 20:34:59.662153 containerd[1550]: time="2025-01-13T20:34:59.661419489Z" level=info msg="StopPodSandbox for \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\" returns successfully" Jan 13 20:34:59.662153 containerd[1550]: time="2025-01-13T20:34:59.662034498Z" level=info msg="StopPodSandbox for \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\"" Jan 13 20:34:59.662153 containerd[1550]: time="2025-01-13T20:34:59.662086493Z" level=info msg="TearDown network for sandbox \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\" successfully" Jan 13 20:34:59.662153 containerd[1550]: time="2025-01-13T20:34:59.662100041Z" level=info msg="StopPodSandbox for \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\" returns successfully" Jan 13 20:34:59.663249 containerd[1550]: time="2025-01-13T20:34:59.662801636Z" level=info msg="StopPodSandbox for \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\"" Jan 13 20:34:59.663440 containerd[1550]: time="2025-01-13T20:34:59.663343084Z" level=info msg="TearDown network for sandbox \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\" successfully" Jan 13 20:34:59.663440 containerd[1550]: time="2025-01-13T20:34:59.663354492Z" level=info msg="StopPodSandbox for \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\" returns successfully" Jan 13 20:34:59.664833 containerd[1550]: time="2025-01-13T20:34:59.664819442Z" level=info msg="StopPodSandbox for \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\"" Jan 13 20:34:59.666769 containerd[1550]: time="2025-01-13T20:34:59.665185346Z" level=info msg="TearDown network for sandbox \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\" successfully" Jan 13 20:34:59.666769 containerd[1550]: time="2025-01-13T20:34:59.665198418Z" level=info msg="StopPodSandbox for 
\"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\" returns successfully" Jan 13 20:34:59.666845 kubelet[2793]: I0113 20:34:59.665783 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886" Jan 13 20:34:59.667058 containerd[1550]: time="2025-01-13T20:34:59.667039712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9s7kx,Uid:33616460-2f9b-481c-b406-8a3838ed8c9e,Namespace:calico-system,Attempt:5,}" Jan 13 20:34:59.667489 containerd[1550]: time="2025-01-13T20:34:59.667323948Z" level=info msg="StopPodSandbox for \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\"" Jan 13 20:34:59.667489 containerd[1550]: time="2025-01-13T20:34:59.667426315Z" level=info msg="Ensure that sandbox d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886 in task-service has been cleanup successfully" Jan 13 20:34:59.668335 containerd[1550]: time="2025-01-13T20:34:59.668314097Z" level=info msg="TearDown network for sandbox \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\" successfully" Jan 13 20:34:59.668335 containerd[1550]: time="2025-01-13T20:34:59.668328786Z" level=info msg="StopPodSandbox for \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\" returns successfully" Jan 13 20:34:59.669487 containerd[1550]: time="2025-01-13T20:34:59.668712284Z" level=info msg="StopPodSandbox for \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\"" Jan 13 20:34:59.669487 containerd[1550]: time="2025-01-13T20:34:59.668753647Z" level=info msg="TearDown network for sandbox \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\" successfully" Jan 13 20:34:59.669942 containerd[1550]: time="2025-01-13T20:34:59.668768016Z" level=info msg="StopPodSandbox for \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\" returns successfully" Jan 13 20:34:59.672531 
containerd[1550]: time="2025-01-13T20:34:59.672508593Z" level=info msg="StopPodSandbox for \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\"" Jan 13 20:34:59.672629 containerd[1550]: time="2025-01-13T20:34:59.672563882Z" level=info msg="TearDown network for sandbox \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\" successfully" Jan 13 20:34:59.672629 containerd[1550]: time="2025-01-13T20:34:59.672571774Z" level=info msg="StopPodSandbox for \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\" returns successfully" Jan 13 20:34:59.674269 containerd[1550]: time="2025-01-13T20:34:59.673882368Z" level=info msg="StopPodSandbox for \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\"" Jan 13 20:34:59.674339 kubelet[2793]: I0113 20:34:59.674307 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d" Jan 13 20:34:59.674656 containerd[1550]: time="2025-01-13T20:34:59.674639212Z" level=info msg="StopPodSandbox for \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\"" Jan 13 20:34:59.674711 containerd[1550]: time="2025-01-13T20:34:59.674697205Z" level=info msg="TearDown network for sandbox \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\" successfully" Jan 13 20:34:59.674749 containerd[1550]: time="2025-01-13T20:34:59.674742488Z" level=info msg="StopPodSandbox for \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\" returns successfully" Jan 13 20:34:59.674847 containerd[1550]: time="2025-01-13T20:34:59.674752555Z" level=info msg="Ensure that sandbox 5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d in task-service has been cleanup successfully" Jan 13 20:34:59.674989 containerd[1550]: time="2025-01-13T20:34:59.674980138Z" level=info msg="TearDown network for sandbox \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\" 
successfully" Jan 13 20:34:59.675040 containerd[1550]: time="2025-01-13T20:34:59.675019773Z" level=info msg="StopPodSandbox for \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\" returns successfully" Jan 13 20:34:59.675804 containerd[1550]: time="2025-01-13T20:34:59.675458881Z" level=info msg="StopPodSandbox for \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\"" Jan 13 20:34:59.675995 containerd[1550]: time="2025-01-13T20:34:59.675952775Z" level=info msg="TearDown network for sandbox \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\" successfully" Jan 13 20:34:59.675995 containerd[1550]: time="2025-01-13T20:34:59.675962671Z" level=info msg="StopPodSandbox for \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\" returns successfully" Jan 13 20:34:59.676667 containerd[1550]: time="2025-01-13T20:34:59.676276728Z" level=info msg="StopPodSandbox for \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\"" Jan 13 20:34:59.676667 containerd[1550]: time="2025-01-13T20:34:59.676345952Z" level=info msg="TearDown network for sandbox \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\" successfully" Jan 13 20:34:59.676667 containerd[1550]: time="2025-01-13T20:34:59.676353255Z" level=info msg="StopPodSandbox for \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\" returns successfully" Jan 13 20:34:59.676667 containerd[1550]: time="2025-01-13T20:34:59.676517608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-q2k5c,Uid:c2088615-f28b-4452-a74d-fc3302061b14,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:34:59.678577 containerd[1550]: time="2025-01-13T20:34:59.678564156Z" level=info msg="StopPodSandbox for \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\"" Jan 13 20:34:59.678730 containerd[1550]: time="2025-01-13T20:34:59.678673172Z" level=info msg="TearDown network for sandbox 
\"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\" successfully" Jan 13 20:34:59.678922 containerd[1550]: time="2025-01-13T20:34:59.678913836Z" level=info msg="StopPodSandbox for \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\" returns successfully" Jan 13 20:34:59.679122 containerd[1550]: time="2025-01-13T20:34:59.679112965Z" level=info msg="StopPodSandbox for \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\"" Jan 13 20:34:59.680838 containerd[1550]: time="2025-01-13T20:34:59.680824729Z" level=info msg="TearDown network for sandbox \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\" successfully" Jan 13 20:34:59.680907 containerd[1550]: time="2025-01-13T20:34:59.680898084Z" level=info msg="StopPodSandbox for \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\" returns successfully" Jan 13 20:34:59.681187 kubelet[2793]: I0113 20:34:59.681169 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32" Jan 13 20:34:59.681645 containerd[1550]: time="2025-01-13T20:34:59.681462625Z" level=info msg="StopPodSandbox for \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\"" Jan 13 20:34:59.681645 containerd[1550]: time="2025-01-13T20:34:59.681572505Z" level=info msg="Ensure that sandbox e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32 in task-service has been cleanup successfully" Jan 13 20:34:59.681744 containerd[1550]: time="2025-01-13T20:34:59.681735244Z" level=info msg="TearDown network for sandbox \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\" successfully" Jan 13 20:34:59.681822 containerd[1550]: time="2025-01-13T20:34:59.681809147Z" level=info msg="StopPodSandbox for \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\" returns successfully" Jan 13 20:34:59.681912 containerd[1550]: 
time="2025-01-13T20:34:59.681903010Z" level=info msg="StopPodSandbox for \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\"" Jan 13 20:34:59.682041 containerd[1550]: time="2025-01-13T20:34:59.682032718Z" level=info msg="TearDown network for sandbox \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\" successfully" Jan 13 20:34:59.682099 containerd[1550]: time="2025-01-13T20:34:59.682081047Z" level=info msg="StopPodSandbox for \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\" returns successfully" Jan 13 20:34:59.683086 containerd[1550]: time="2025-01-13T20:34:59.682886172Z" level=info msg="StopPodSandbox for \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\"" Jan 13 20:34:59.683086 containerd[1550]: time="2025-01-13T20:34:59.682925184Z" level=info msg="TearDown network for sandbox \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\" successfully" Jan 13 20:34:59.683086 containerd[1550]: time="2025-01-13T20:34:59.682956082Z" level=info msg="StopPodSandbox for \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\" returns successfully" Jan 13 20:34:59.683182 containerd[1550]: time="2025-01-13T20:34:59.683172722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-whhr8,Uid:bf5a440e-479f-4a51-bb48-dec4cff63ae8,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:34:59.684488 containerd[1550]: time="2025-01-13T20:34:59.684476705Z" level=info msg="StopPodSandbox for \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\"" Jan 13 20:34:59.685061 containerd[1550]: time="2025-01-13T20:34:59.685020103Z" level=info msg="TearDown network for sandbox \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\" successfully" Jan 13 20:34:59.685403 kubelet[2793]: I0113 20:34:59.685209 2793 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.685448095Z" level=info msg="StopPodSandbox for \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\" returns successfully" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.685514609Z" level=info msg="StopPodSandbox for \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\"" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.685606262Z" level=info msg="Ensure that sandbox aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609 in task-service has been cleanup successfully" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.685850500Z" level=info msg="StopPodSandbox for \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\"" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.685899034Z" level=info msg="TearDown network for sandbox \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\" successfully" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.685905376Z" level=info msg="StopPodSandbox for \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\" returns successfully" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.685962975Z" level=info msg="TearDown network for sandbox \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\" successfully" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.685969776Z" level=info msg="StopPodSandbox for \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\" returns successfully" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.686125839Z" level=info msg="StopPodSandbox for \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\"" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.686183054Z" level=info msg="StopPodSandbox 
for \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\"" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.686221896Z" level=info msg="TearDown network for sandbox \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\" successfully" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.686238681Z" level=info msg="StopPodSandbox for \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\" returns successfully" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.686444508Z" level=info msg="TearDown network for sandbox \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\" successfully" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.686452689Z" level=info msg="StopPodSandbox for \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\" returns successfully" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.686593154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-974644f4b-zs7h8,Uid:e8e472de-0de2-4914-879e-86eb6e7cf184,Namespace:calico-system,Attempt:5,}" Jan 13 20:34:59.687545 containerd[1550]: time="2025-01-13T20:34:59.687059211Z" level=info msg="StopPodSandbox for \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\"" Jan 13 20:34:59.688283 containerd[1550]: time="2025-01-13T20:34:59.687096029Z" level=info msg="TearDown network for sandbox \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\" successfully" Jan 13 20:34:59.688283 containerd[1550]: time="2025-01-13T20:34:59.688157906Z" level=info msg="StopPodSandbox for \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\" returns successfully" Jan 13 20:34:59.688692 containerd[1550]: time="2025-01-13T20:34:59.688407695Z" level=info msg="StopPodSandbox for \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\"" Jan 13 20:34:59.688692 containerd[1550]: 
time="2025-01-13T20:34:59.688457073Z" level=info msg="TearDown network for sandbox \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\" successfully" Jan 13 20:34:59.688692 containerd[1550]: time="2025-01-13T20:34:59.688463276Z" level=info msg="StopPodSandbox for \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\" returns successfully" Jan 13 20:34:59.688692 containerd[1550]: time="2025-01-13T20:34:59.688617006Z" level=info msg="StopPodSandbox for \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\"" Jan 13 20:34:59.688692 containerd[1550]: time="2025-01-13T20:34:59.688661526Z" level=info msg="TearDown network for sandbox \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\" successfully" Jan 13 20:34:59.688692 containerd[1550]: time="2025-01-13T20:34:59.688668817Z" level=info msg="StopPodSandbox for \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\" returns successfully" Jan 13 20:34:59.689184 containerd[1550]: time="2025-01-13T20:34:59.689170422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9m4k5,Uid:b9e9899a-a387-4873-a44f-5621625ff114,Namespace:kube-system,Attempt:5,}" Jan 13 20:34:59.763079 kubelet[2793]: I0113 20:34:59.749530 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9pxd2" podStartSLOduration=1.681090209 podStartE2EDuration="15.693074051s" podCreationTimestamp="2025-01-13 20:34:44 +0000 UTC" firstStartedPulling="2025-01-13 20:34:44.81051223 +0000 UTC m=+12.693912072" lastFinishedPulling="2025-01-13 20:34:58.822496072 +0000 UTC m=+26.705895914" observedRunningTime="2025-01-13 20:34:59.672009165 +0000 UTC m=+27.555409015" watchObservedRunningTime="2025-01-13 20:34:59.693074051 +0000 UTC m=+27.576473900" Jan 13 20:34:59.962270 systemd[1]: run-netns-cni\x2d74746ac1\x2d0693\x2d187a\x2de50a\x2dbd35216c5e40.mount: Deactivated successfully. 
Jan 13 20:34:59.962822 systemd[1]: run-netns-cni\x2daa3c1ca1\x2d8a8c\x2d2604\x2d93fe\x2db6076cf329b9.mount: Deactivated successfully. Jan 13 20:34:59.963094 systemd[1]: run-netns-cni\x2d8a19ac82\x2da6b1\x2d8ae4\x2df5cc\x2d0dfe43c1a420.mount: Deactivated successfully. Jan 13 20:34:59.963255 systemd[1]: run-netns-cni\x2dd92f372f\x2db0db\x2df866\x2d8bb4\x2ddfafdae6e620.mount: Deactivated successfully. Jan 13 20:34:59.963521 systemd[1]: run-netns-cni\x2dc5b02798\x2d55cb\x2d6f20\x2d820b\x2dc47fff659d1e.mount: Deactivated successfully. Jan 13 20:35:00.167557 containerd[1550]: 2025-01-13 20:34:59.828 [INFO][4503] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39" Jan 13 20:35:00.167557 containerd[1550]: 2025-01-13 20:34:59.828 [INFO][4503] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39" iface="eth0" netns="/var/run/netns/cni-fad3b8cc-c138-1a34-2ae7-5e8a0ac33499" Jan 13 20:35:00.167557 containerd[1550]: 2025-01-13 20:34:59.828 [INFO][4503] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39" iface="eth0" netns="/var/run/netns/cni-fad3b8cc-c138-1a34-2ae7-5e8a0ac33499" Jan 13 20:35:00.167557 containerd[1550]: 2025-01-13 20:34:59.832 [INFO][4503] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39" iface="eth0" netns="/var/run/netns/cni-fad3b8cc-c138-1a34-2ae7-5e8a0ac33499" Jan 13 20:35:00.167557 containerd[1550]: 2025-01-13 20:34:59.832 [INFO][4503] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39" Jan 13 20:35:00.167557 containerd[1550]: 2025-01-13 20:34:59.832 [INFO][4503] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39" Jan 13 20:35:00.167557 containerd[1550]: 2025-01-13 20:35:00.146 [INFO][4595] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39" HandleID="k8s-pod-network.1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39" Workload="localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0" Jan 13 20:35:00.167557 containerd[1550]: 2025-01-13 20:35:00.149 [INFO][4595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:35:00.167557 containerd[1550]: 2025-01-13 20:35:00.150 [INFO][4595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:35:00.167557 containerd[1550]: 2025-01-13 20:35:00.162 [WARNING][4595] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39" HandleID="k8s-pod-network.1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39" Workload="localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0" Jan 13 20:35:00.167557 containerd[1550]: 2025-01-13 20:35:00.163 [INFO][4595] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39" HandleID="k8s-pod-network.1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39" Workload="localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0" Jan 13 20:35:00.167557 containerd[1550]: 2025-01-13 20:35:00.165 [INFO][4595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:35:00.167557 containerd[1550]: 2025-01-13 20:35:00.166 [INFO][4503] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39" Jan 13 20:35:00.170664 systemd[1]: run-netns-cni\x2dfad3b8cc\x2dc138\x2d1a34\x2d2ae7\x2d5e8a0ac33499.mount: Deactivated successfully. Jan 13 20:35:00.170723 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39-shm.mount: Deactivated successfully. 
Jan 13 20:35:00.171162 containerd[1550]: time="2025-01-13T20:35:00.171000954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzl8j,Uid:3c427557-4954-4485-9e6f-15dda2108a10,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.171505 kubelet[2793]: E0113 20:35:00.171312 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.171505 kubelet[2793]: E0113 20:35:00.171373 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wzl8j" Jan 13 20:35:00.171505 kubelet[2793]: E0113 20:35:00.171389 2793 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wzl8j" Jan 13 20:35:00.171604 
kubelet[2793]: E0113 20:35:00.171478 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wzl8j_kube-system(3c427557-4954-4485-9e6f-15dda2108a10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wzl8j_kube-system(3c427557-4954-4485-9e6f-15dda2108a10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d42d8da7e0f80d1c1bbe5a23888311c0916f000b6f55d4b9cfdb4e5e2311a39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wzl8j" podUID="3c427557-4954-4485-9e6f-15dda2108a10" Jan 13 20:35:00.285073 systemd-networkd[1451]: cali35674fe6735: Link UP Jan 13 20:35:00.285176 systemd-networkd[1451]: cali35674fe6735: Gained carrier Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:34:59.862 [INFO][4563] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:34:59.872 [INFO][4563] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--9m4k5-eth0 coredns-6f6b679f8f- kube-system b9e9899a-a387-4873-a44f-5621625ff114 657 0 2025-01-13 20:34:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-9m4k5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali35674fe6735 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" Namespace="kube-system" Pod="coredns-6f6b679f8f-9m4k5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9m4k5-" Jan 13 20:35:00.294002 
containerd[1550]: 2025-01-13 20:34:59.872 [INFO][4563] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" Namespace="kube-system" Pod="coredns-6f6b679f8f-9m4k5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9m4k5-eth0" Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.147 [INFO][4607] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" HandleID="k8s-pod-network.48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" Workload="localhost-k8s-coredns--6f6b679f8f--9m4k5-eth0" Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.165 [INFO][4607] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" HandleID="k8s-pod-network.48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" Workload="localhost-k8s-coredns--6f6b679f8f--9m4k5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000440280), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-9m4k5", "timestamp":"2025-01-13 20:35:00.147145517 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.165 [INFO][4607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.165 [INFO][4607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.165 [INFO][4607] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.168 [INFO][4607] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" host="localhost" Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.261 [INFO][4607] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.264 [INFO][4607] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.265 [INFO][4607] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.267 [INFO][4607] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.267 [INFO][4607] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" host="localhost" Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.267 [INFO][4607] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9 Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.270 [INFO][4607] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" host="localhost" Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.273 [INFO][4607] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" host="localhost" Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.273 [INFO][4607] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" host="localhost" Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.273 [INFO][4607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:35:00.294002 containerd[1550]: 2025-01-13 20:35:00.273 [INFO][4607] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" HandleID="k8s-pod-network.48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" Workload="localhost-k8s-coredns--6f6b679f8f--9m4k5-eth0" Jan 13 20:35:00.295328 containerd[1550]: 2025-01-13 20:35:00.276 [INFO][4563] cni-plugin/k8s.go 386: Populated endpoint ContainerID="48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" Namespace="kube-system" Pod="coredns-6f6b679f8f-9m4k5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9m4k5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9m4k5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b9e9899a-a387-4873-a44f-5621625ff114", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-9m4k5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35674fe6735", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:00.295328 containerd[1550]: 2025-01-13 20:35:00.276 [INFO][4563] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" Namespace="kube-system" Pod="coredns-6f6b679f8f-9m4k5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9m4k5-eth0" Jan 13 20:35:00.295328 containerd[1550]: 2025-01-13 20:35:00.276 [INFO][4563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35674fe6735 ContainerID="48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" Namespace="kube-system" Pod="coredns-6f6b679f8f-9m4k5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9m4k5-eth0" Jan 13 20:35:00.295328 containerd[1550]: 2025-01-13 20:35:00.286 [INFO][4563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" Namespace="kube-system" Pod="coredns-6f6b679f8f-9m4k5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9m4k5-eth0" Jan 13 
20:35:00.295328 containerd[1550]: 2025-01-13 20:35:00.286 [INFO][4563] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" Namespace="kube-system" Pod="coredns-6f6b679f8f-9m4k5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9m4k5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9m4k5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b9e9899a-a387-4873-a44f-5621625ff114", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9", Pod:"coredns-6f6b679f8f-9m4k5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35674fe6735", MAC:"f2:5a:9c:c1:9a:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:00.295328 containerd[1550]: 2025-01-13 20:35:00.292 [INFO][4563] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9" Namespace="kube-system" Pod="coredns-6f6b679f8f-9m4k5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9m4k5-eth0" Jan 13 20:35:00.327775 containerd[1550]: time="2025-01-13T20:35:00.324077643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:00.327775 containerd[1550]: time="2025-01-13T20:35:00.324199748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:00.327775 containerd[1550]: time="2025-01-13T20:35:00.324210237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:00.327775 containerd[1550]: time="2025-01-13T20:35:00.326807028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:00.344875 systemd[1]: Started cri-containerd-48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9.scope - libcontainer container 48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9. 
Jan 13 20:35:00.356884 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:35:00.385197 systemd-networkd[1451]: cali3c0a3570220: Link UP Jan 13 20:35:00.385632 systemd-networkd[1451]: cali3c0a3570220: Gained carrier Jan 13 20:35:00.389181 containerd[1550]: time="2025-01-13T20:35:00.388928041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9m4k5,Uid:b9e9899a-a387-4873-a44f-5621625ff114,Namespace:kube-system,Attempt:5,} returns sandbox id \"48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9\"" Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:34:59.732 [INFO][4508] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:34:59.808 [INFO][4508] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9s7kx-eth0 csi-node-driver- calico-system 33616460-2f9b-481c-b406-8a3838ed8c9e 577 0 2025-01-13 20:34:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9s7kx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3c0a3570220 [] []}} ContainerID="e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" Namespace="calico-system" Pod="csi-node-driver-9s7kx" WorkloadEndpoint="localhost-k8s-csi--node--driver--9s7kx-" Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:34:59.808 [INFO][4508] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" Namespace="calico-system" Pod="csi-node-driver-9s7kx" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--9s7kx-eth0" Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.147 [INFO][4600] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" HandleID="k8s-pod-network.e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" Workload="localhost-k8s-csi--node--driver--9s7kx-eth0" Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.165 [INFO][4600] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" HandleID="k8s-pod-network.e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" Workload="localhost-k8s-csi--node--driver--9s7kx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003618e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9s7kx", "timestamp":"2025-01-13 20:35:00.14705005 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.165 [INFO][4600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.273 [INFO][4600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.274 [INFO][4600] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.275 [INFO][4600] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" host="localhost" Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.361 [INFO][4600] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.368 [INFO][4600] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.369 [INFO][4600] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.370 [INFO][4600] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.370 [INFO][4600] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" host="localhost" Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.371 [INFO][4600] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.373 [INFO][4600] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" host="localhost" Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.381 [INFO][4600] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" host="localhost" Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.381 [INFO][4600] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" host="localhost" Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.381 [INFO][4600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:35:00.396304 containerd[1550]: 2025-01-13 20:35:00.381 [INFO][4600] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" HandleID="k8s-pod-network.e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" Workload="localhost-k8s-csi--node--driver--9s7kx-eth0" Jan 13 20:35:00.396723 containerd[1550]: 2025-01-13 20:35:00.383 [INFO][4508] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" Namespace="calico-system" Pod="csi-node-driver-9s7kx" WorkloadEndpoint="localhost-k8s-csi--node--driver--9s7kx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9s7kx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33616460-2f9b-481c-b406-8a3838ed8c9e", ResourceVersion:"577", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9s7kx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c0a3570220", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:00.396723 containerd[1550]: 2025-01-13 20:35:00.383 [INFO][4508] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" Namespace="calico-system" Pod="csi-node-driver-9s7kx" WorkloadEndpoint="localhost-k8s-csi--node--driver--9s7kx-eth0" Jan 13 20:35:00.396723 containerd[1550]: 2025-01-13 20:35:00.383 [INFO][4508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c0a3570220 ContainerID="e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" Namespace="calico-system" Pod="csi-node-driver-9s7kx" WorkloadEndpoint="localhost-k8s-csi--node--driver--9s7kx-eth0" Jan 13 20:35:00.396723 containerd[1550]: 2025-01-13 20:35:00.385 [INFO][4508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" Namespace="calico-system" Pod="csi-node-driver-9s7kx" WorkloadEndpoint="localhost-k8s-csi--node--driver--9s7kx-eth0" Jan 13 20:35:00.396723 containerd[1550]: 2025-01-13 20:35:00.386 [INFO][4508] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" Namespace="calico-system" 
Pod="csi-node-driver-9s7kx" WorkloadEndpoint="localhost-k8s-csi--node--driver--9s7kx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9s7kx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33616460-2f9b-481c-b406-8a3838ed8c9e", ResourceVersion:"577", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f", Pod:"csi-node-driver-9s7kx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c0a3570220", MAC:"5e:f4:cf:89:b1:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:00.396723 containerd[1550]: 2025-01-13 20:35:00.395 [INFO][4508] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f" Namespace="calico-system" Pod="csi-node-driver-9s7kx" WorkloadEndpoint="localhost-k8s-csi--node--driver--9s7kx-eth0" Jan 13 20:35:00.403611 containerd[1550]: 
time="2025-01-13T20:35:00.403411837Z" level=info msg="CreateContainer within sandbox \"48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:35:00.413598 containerd[1550]: time="2025-01-13T20:35:00.413383302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:00.413598 containerd[1550]: time="2025-01-13T20:35:00.413508289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:00.413598 containerd[1550]: time="2025-01-13T20:35:00.413519503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:00.414056 containerd[1550]: time="2025-01-13T20:35:00.413835870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:00.425032 containerd[1550]: time="2025-01-13T20:35:00.425011225Z" level=info msg="CreateContainer within sandbox \"48433f840295dade5e17a1a292c7e92251a63676b564d0c75e88c45924fa3cc9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2cf71b73ee22a372aae48c161f9e3c503875f2f8e0af1db96edfd75d3b30db97\"" Jan 13 20:35:00.425994 containerd[1550]: time="2025-01-13T20:35:00.425383087Z" level=info msg="StartContainer for \"2cf71b73ee22a372aae48c161f9e3c503875f2f8e0af1db96edfd75d3b30db97\"" Jan 13 20:35:00.426908 systemd[1]: Started cri-containerd-e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f.scope - libcontainer container e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f. 
Jan 13 20:35:00.437249 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:35:00.445979 containerd[1550]: time="2025-01-13T20:35:00.445957164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9s7kx,Uid:33616460-2f9b-481c-b406-8a3838ed8c9e,Namespace:calico-system,Attempt:5,} returns sandbox id \"e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f\"" Jan 13 20:35:00.448202 containerd[1550]: time="2025-01-13T20:35:00.448186923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:35:00.449990 systemd[1]: Started cri-containerd-2cf71b73ee22a372aae48c161f9e3c503875f2f8e0af1db96edfd75d3b30db97.scope - libcontainer container 2cf71b73ee22a372aae48c161f9e3c503875f2f8e0af1db96edfd75d3b30db97. Jan 13 20:35:00.479546 containerd[1550]: time="2025-01-13T20:35:00.479519645Z" level=info msg="StartContainer for \"2cf71b73ee22a372aae48c161f9e3c503875f2f8e0af1db96edfd75d3b30db97\" returns successfully" Jan 13 20:35:00.491843 systemd-networkd[1451]: calicfaf9eba73e: Link UP Jan 13 20:35:00.492455 systemd-networkd[1451]: calicfaf9eba73e: Gained carrier Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:34:59.799 [INFO][4529] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:34:59.820 [INFO][4529] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b6bf69758--q2k5c-eth0 calico-apiserver-6b6bf69758- calico-apiserver c2088615-f28b-4452-a74d-fc3302061b14 658 0 2025-01-13 20:34:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b6bf69758 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-6b6bf69758-q2k5c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicfaf9eba73e [] []}} ContainerID="ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-q2k5c" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--q2k5c-" Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:34:59.820 [INFO][4529] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-q2k5c" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--q2k5c-eth0" Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.146 [INFO][4597] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" HandleID="k8s-pod-network.ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" Workload="localhost-k8s-calico--apiserver--6b6bf69758--q2k5c-eth0" Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.165 [INFO][4597] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" HandleID="k8s-pod-network.ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" Workload="localhost-k8s-calico--apiserver--6b6bf69758--q2k5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000414a50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b6bf69758-q2k5c", "timestamp":"2025-01-13 20:35:00.146877232 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 
20:35:00.165 [INFO][4597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.381 [INFO][4597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.381 [INFO][4597] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.383 [INFO][4597] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" host="localhost" Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.462 [INFO][4597] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.470 [INFO][4597] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.472 [INFO][4597] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.476 [INFO][4597] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.476 [INFO][4597] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" host="localhost" Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.477 [INFO][4597] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.480 [INFO][4597] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" host="localhost" 
Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.485 [INFO][4597] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" host="localhost" Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.485 [INFO][4597] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" host="localhost" Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.486 [INFO][4597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:35:00.502518 containerd[1550]: 2025-01-13 20:35:00.486 [INFO][4597] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" HandleID="k8s-pod-network.ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" Workload="localhost-k8s-calico--apiserver--6b6bf69758--q2k5c-eth0" Jan 13 20:35:00.503012 containerd[1550]: 2025-01-13 20:35:00.488 [INFO][4529] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-q2k5c" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--q2k5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b6bf69758--q2k5c-eth0", GenerateName:"calico-apiserver-6b6bf69758-", Namespace:"calico-apiserver", SelfLink:"", UID:"c2088615-f28b-4452-a74d-fc3302061b14", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b6bf69758", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b6bf69758-q2k5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfaf9eba73e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:00.503012 containerd[1550]: 2025-01-13 20:35:00.488 [INFO][4529] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-q2k5c" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--q2k5c-eth0" Jan 13 20:35:00.503012 containerd[1550]: 2025-01-13 20:35:00.489 [INFO][4529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfaf9eba73e ContainerID="ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-q2k5c" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--q2k5c-eth0" Jan 13 20:35:00.503012 containerd[1550]: 2025-01-13 20:35:00.493 [INFO][4529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-q2k5c" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--q2k5c-eth0" Jan 13 20:35:00.503012 containerd[1550]: 2025-01-13 20:35:00.493 [INFO][4529] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-q2k5c" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--q2k5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b6bf69758--q2k5c-eth0", GenerateName:"calico-apiserver-6b6bf69758-", Namespace:"calico-apiserver", SelfLink:"", UID:"c2088615-f28b-4452-a74d-fc3302061b14", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b6bf69758", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f", Pod:"calico-apiserver-6b6bf69758-q2k5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfaf9eba73e", MAC:"3a:58:0e:83:3b:bf", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:00.503012 containerd[1550]: 2025-01-13 20:35:00.501 [INFO][4529] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-q2k5c" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--q2k5c-eth0" Jan 13 20:35:00.515478 containerd[1550]: time="2025-01-13T20:35:00.515423437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:00.517357 containerd[1550]: time="2025-01-13T20:35:00.515837975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:00.517357 containerd[1550]: time="2025-01-13T20:35:00.515866361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:00.519832 containerd[1550]: time="2025-01-13T20:35:00.519807685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:00.531890 systemd[1]: Started cri-containerd-ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f.scope - libcontainer container ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f. 
Jan 13 20:35:00.538947 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:35:00.566413 containerd[1550]: time="2025-01-13T20:35:00.566364583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-q2k5c,Uid:c2088615-f28b-4452-a74d-fc3302061b14,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f\"" Jan 13 20:35:00.582821 systemd-networkd[1451]: cali23228f064c3: Link UP Jan 13 20:35:00.583326 systemd-networkd[1451]: cali23228f064c3: Gained carrier Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:34:59.819 [INFO][4554] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:34:59.841 [INFO][4554] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--974644f4b--zs7h8-eth0 calico-kube-controllers-974644f4b- calico-system e8e472de-0de2-4914-879e-86eb6e7cf184 656 0 2025-01-13 20:34:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:974644f4b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-974644f4b-zs7h8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali23228f064c3 [] []}} ContainerID="790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" Namespace="calico-system" Pod="calico-kube-controllers-974644f4b-zs7h8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--974644f4b--zs7h8-" Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:34:59.841 [INFO][4554] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" Namespace="calico-system" Pod="calico-kube-controllers-974644f4b-zs7h8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--974644f4b--zs7h8-eth0" Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.146 [INFO][4601] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" HandleID="k8s-pod-network.790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" Workload="localhost-k8s-calico--kube--controllers--974644f4b--zs7h8-eth0" Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.166 [INFO][4601] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" HandleID="k8s-pod-network.790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" Workload="localhost-k8s-calico--kube--controllers--974644f4b--zs7h8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003972c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-974644f4b-zs7h8", "timestamp":"2025-01-13 20:35:00.146883407 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.166 [INFO][4601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.486 [INFO][4601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.486 [INFO][4601] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.487 [INFO][4601] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" host="localhost" Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.563 [INFO][4601] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.569 [INFO][4601] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.570 [INFO][4601] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.572 [INFO][4601] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.572 [INFO][4601] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" host="localhost" Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.573 [INFO][4601] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.575 [INFO][4601] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" host="localhost" Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.579 [INFO][4601] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" host="localhost" Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.579 [INFO][4601] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" host="localhost" Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.579 [INFO][4601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:35:00.592538 containerd[1550]: 2025-01-13 20:35:00.579 [INFO][4601] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" HandleID="k8s-pod-network.790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" Workload="localhost-k8s-calico--kube--controllers--974644f4b--zs7h8-eth0" Jan 13 20:35:00.594133 containerd[1550]: 2025-01-13 20:35:00.580 [INFO][4554] cni-plugin/k8s.go 386: Populated endpoint ContainerID="790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" Namespace="calico-system" Pod="calico-kube-controllers-974644f4b-zs7h8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--974644f4b--zs7h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--974644f4b--zs7h8-eth0", GenerateName:"calico-kube-controllers-974644f4b-", Namespace:"calico-system", SelfLink:"", UID:"e8e472de-0de2-4914-879e-86eb6e7cf184", ResourceVersion:"656", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"974644f4b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-974644f4b-zs7h8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali23228f064c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:00.594133 containerd[1550]: 2025-01-13 20:35:00.580 [INFO][4554] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" Namespace="calico-system" Pod="calico-kube-controllers-974644f4b-zs7h8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--974644f4b--zs7h8-eth0" Jan 13 20:35:00.594133 containerd[1550]: 2025-01-13 20:35:00.580 [INFO][4554] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23228f064c3 ContainerID="790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" Namespace="calico-system" Pod="calico-kube-controllers-974644f4b-zs7h8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--974644f4b--zs7h8-eth0" Jan 13 20:35:00.594133 containerd[1550]: 2025-01-13 20:35:00.583 [INFO][4554] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" Namespace="calico-system" Pod="calico-kube-controllers-974644f4b-zs7h8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--974644f4b--zs7h8-eth0" Jan 13 20:35:00.594133 containerd[1550]: 2025-01-13 20:35:00.583 [INFO][4554] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" Namespace="calico-system" Pod="calico-kube-controllers-974644f4b-zs7h8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--974644f4b--zs7h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--974644f4b--zs7h8-eth0", GenerateName:"calico-kube-controllers-974644f4b-", Namespace:"calico-system", SelfLink:"", UID:"e8e472de-0de2-4914-879e-86eb6e7cf184", ResourceVersion:"656", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"974644f4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a", Pod:"calico-kube-controllers-974644f4b-zs7h8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali23228f064c3", MAC:"be:f1:c0:ae:0a:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:00.594133 containerd[1550]: 2025-01-13 20:35:00.591 [INFO][4554] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a" Namespace="calico-system" Pod="calico-kube-controllers-974644f4b-zs7h8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--974644f4b--zs7h8-eth0" Jan 13 20:35:00.605130 containerd[1550]: time="2025-01-13T20:35:00.605050892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:00.605130 containerd[1550]: time="2025-01-13T20:35:00.605089048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:00.605130 containerd[1550]: time="2025-01-13T20:35:00.605097452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:00.605330 containerd[1550]: time="2025-01-13T20:35:00.605152299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:00.617857 systemd[1]: Started cri-containerd-790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a.scope - libcontainer container 790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a. 
Jan 13 20:35:00.624979 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:35:00.647912 containerd[1550]: time="2025-01-13T20:35:00.647886737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-974644f4b-zs7h8,Uid:e8e472de-0de2-4914-879e-86eb6e7cf184,Namespace:calico-system,Attempt:5,} returns sandbox id \"790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a\"" Jan 13 20:35:00.681063 systemd-networkd[1451]: calidb7affc71bb: Link UP Jan 13 20:35:00.681894 systemd-networkd[1451]: calidb7affc71bb: Gained carrier Jan 13 20:35:00.694136 containerd[1550]: time="2025-01-13T20:35:00.694026078Z" level=info msg="StopPodSandbox for \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\"" Jan 13 20:35:00.694136 containerd[1550]: time="2025-01-13T20:35:00.694088665Z" level=info msg="TearDown network for sandbox \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\" successfully" Jan 13 20:35:00.694136 containerd[1550]: time="2025-01-13T20:35:00.694095733Z" level=info msg="StopPodSandbox for \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\" returns successfully" Jan 13 20:35:00.703629 containerd[1550]: time="2025-01-13T20:35:00.695348798Z" level=info msg="StopPodSandbox for \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\"" Jan 13 20:35:00.703629 containerd[1550]: time="2025-01-13T20:35:00.695395243Z" level=info msg="TearDown network for sandbox \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\" successfully" Jan 13 20:35:00.703629 containerd[1550]: time="2025-01-13T20:35:00.695402623Z" level=info msg="StopPodSandbox for \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\" returns successfully" Jan 13 20:35:00.703629 containerd[1550]: time="2025-01-13T20:35:00.697852934Z" level=info msg="StopPodSandbox for 
\"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\"" Jan 13 20:35:00.703629 containerd[1550]: time="2025-01-13T20:35:00.697901729Z" level=info msg="TearDown network for sandbox \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\" successfully" Jan 13 20:35:00.703629 containerd[1550]: time="2025-01-13T20:35:00.697908432Z" level=info msg="StopPodSandbox for \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\" returns successfully" Jan 13 20:35:00.703629 containerd[1550]: time="2025-01-13T20:35:00.698071825Z" level=info msg="StopPodSandbox for \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\"" Jan 13 20:35:00.703629 containerd[1550]: time="2025-01-13T20:35:00.698111327Z" level=info msg="TearDown network for sandbox \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\" successfully" Jan 13 20:35:00.703629 containerd[1550]: time="2025-01-13T20:35:00.698117544Z" level=info msg="StopPodSandbox for \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\" returns successfully" Jan 13 20:35:00.703629 containerd[1550]: time="2025-01-13T20:35:00.698231869Z" level=info msg="StopPodSandbox for \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\"" Jan 13 20:35:00.703629 containerd[1550]: time="2025-01-13T20:35:00.698308319Z" level=info msg="TearDown network for sandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" successfully" Jan 13 20:35:00.703629 containerd[1550]: time="2025-01-13T20:35:00.698318777Z" level=info msg="StopPodSandbox for \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" returns successfully" Jan 13 20:35:00.703629 containerd[1550]: time="2025-01-13T20:35:00.698551025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzl8j,Uid:3c427557-4954-4485-9e6f-15dda2108a10,Namespace:kube-system,Attempt:5,}" Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:34:59.844 [INFO][4540] 
cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:34:59.855 [INFO][4540] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b6bf69758--whhr8-eth0 calico-apiserver-6b6bf69758- calico-apiserver bf5a440e-479f-4a51-bb48-dec4cff63ae8 659 0 2025-01-13 20:34:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b6bf69758 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b6bf69758-whhr8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidb7affc71bb [] []}} ContainerID="069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-whhr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--whhr8-" Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:34:59.856 [INFO][4540] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-whhr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--whhr8-eth0" Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.146 [INFO][4606] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" HandleID="k8s-pod-network.069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" Workload="localhost-k8s-calico--apiserver--6b6bf69758--whhr8-eth0" Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.166 [INFO][4606] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" HandleID="k8s-pod-network.069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" Workload="localhost-k8s-calico--apiserver--6b6bf69758--whhr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ec9e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b6bf69758-whhr8", "timestamp":"2025-01-13 20:35:00.146944467 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.166 [INFO][4606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.579 [INFO][4606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.579 [INFO][4606] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.590 [INFO][4606] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" host="localhost" Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.663 [INFO][4606] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.669 [INFO][4606] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.670 [INFO][4606] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.671 [INFO][4606] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.671 [INFO][4606] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" host="localhost" Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.672 [INFO][4606] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7 Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.674 [INFO][4606] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" host="localhost" Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.677 [INFO][4606] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" host="localhost" Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.677 [INFO][4606] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" host="localhost" Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.677 [INFO][4606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:35:00.713862 containerd[1550]: 2025-01-13 20:35:00.677 [INFO][4606] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" HandleID="k8s-pod-network.069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" Workload="localhost-k8s-calico--apiserver--6b6bf69758--whhr8-eth0" Jan 13 20:35:00.714521 containerd[1550]: 2025-01-13 20:35:00.679 [INFO][4540] cni-plugin/k8s.go 386: Populated endpoint ContainerID="069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-whhr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--whhr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b6bf69758--whhr8-eth0", GenerateName:"calico-apiserver-6b6bf69758-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf5a440e-479f-4a51-bb48-dec4cff63ae8", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b6bf69758", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b6bf69758-whhr8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb7affc71bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:00.714521 containerd[1550]: 2025-01-13 20:35:00.679 [INFO][4540] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-whhr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--whhr8-eth0" Jan 13 20:35:00.714521 containerd[1550]: 2025-01-13 20:35:00.679 [INFO][4540] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb7affc71bb ContainerID="069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-whhr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--whhr8-eth0" Jan 13 20:35:00.714521 containerd[1550]: 2025-01-13 20:35:00.681 [INFO][4540] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-whhr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--whhr8-eth0" Jan 13 20:35:00.714521 containerd[1550]: 2025-01-13 20:35:00.682 [INFO][4540] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-whhr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--whhr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b6bf69758--whhr8-eth0", GenerateName:"calico-apiserver-6b6bf69758-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"bf5a440e-479f-4a51-bb48-dec4cff63ae8", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b6bf69758", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7", Pod:"calico-apiserver-6b6bf69758-whhr8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb7affc71bb", MAC:"1e:b5:1c:d1:e7:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:00.714521 containerd[1550]: 2025-01-13 20:35:00.710 [INFO][4540] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7" Namespace="calico-apiserver" Pod="calico-apiserver-6b6bf69758-whhr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b6bf69758--whhr8-eth0" Jan 13 20:35:00.746508 containerd[1550]: time="2025-01-13T20:35:00.743156042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:00.746508 containerd[1550]: time="2025-01-13T20:35:00.746347570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:00.746508 containerd[1550]: time="2025-01-13T20:35:00.746361983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:00.746508 containerd[1550]: time="2025-01-13T20:35:00.746435376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:00.779889 systemd[1]: Started cri-containerd-069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7.scope - libcontainer container 069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7. Jan 13 20:35:00.823877 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:35:00.857629 containerd[1550]: time="2025-01-13T20:35:00.857588585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6bf69758-whhr8,Uid:bf5a440e-479f-4a51-bb48-dec4cff63ae8,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7\"" Jan 13 20:35:00.962851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1847427244.mount: Deactivated successfully. 
Jan 13 20:35:00.965023 systemd-networkd[1451]: calic48ebaf5871: Link UP Jan 13 20:35:00.966071 systemd-networkd[1451]: calic48ebaf5871: Gained carrier Jan 13 20:35:00.987466 kubelet[2793]: I0113 20:35:00.987419 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9m4k5" podStartSLOduration=22.98739456 podStartE2EDuration="22.98739456s" podCreationTimestamp="2025-01-13 20:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:35:00.746331334 +0000 UTC m=+28.629731184" watchObservedRunningTime="2025-01-13 20:35:00.98739456 +0000 UTC m=+28.870794404" Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.782 [INFO][4939] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.800 [INFO][4939] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0 coredns-6f6b679f8f- kube-system 3c427557-4954-4485-9e6f-15dda2108a10 747 0 2025-01-13 20:34:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-wzl8j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic48ebaf5871 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" Namespace="kube-system" Pod="coredns-6f6b679f8f-wzl8j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wzl8j-" Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.800 [INFO][4939] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-wzl8j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0" Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.832 [INFO][4971] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" HandleID="k8s-pod-network.56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" Workload="localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0" Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.839 [INFO][4971] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" HandleID="k8s-pod-network.56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" Workload="localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290810), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-wzl8j", "timestamp":"2025-01-13 20:35:00.832754182 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.839 [INFO][4971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.839 [INFO][4971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.839 [INFO][4971] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.840 [INFO][4971] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" host="localhost" Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.938 [INFO][4971] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.941 [INFO][4971] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.942 [INFO][4971] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.944 [INFO][4971] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.944 [INFO][4971] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" host="localhost" Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.945 [INFO][4971] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334 Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.948 [INFO][4971] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" host="localhost" Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.961 [INFO][4971] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" host="localhost" Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.961 [INFO][4971] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" host="localhost" Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.961 [INFO][4971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:35:00.988864 containerd[1550]: 2025-01-13 20:35:00.961 [INFO][4971] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" HandleID="k8s-pod-network.56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" Workload="localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0" Jan 13 20:35:00.989362 containerd[1550]: 2025-01-13 20:35:00.963 [INFO][4939] cni-plugin/k8s.go 386: Populated endpoint ContainerID="56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" Namespace="kube-system" Pod="coredns-6f6b679f8f-wzl8j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3c427557-4954-4485-9e6f-15dda2108a10", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-wzl8j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic48ebaf5871", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:00.989362 containerd[1550]: 2025-01-13 20:35:00.963 [INFO][4939] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" Namespace="kube-system" Pod="coredns-6f6b679f8f-wzl8j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0" Jan 13 20:35:00.989362 containerd[1550]: 2025-01-13 20:35:00.963 [INFO][4939] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic48ebaf5871 ContainerID="56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" Namespace="kube-system" Pod="coredns-6f6b679f8f-wzl8j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0" Jan 13 20:35:00.989362 containerd[1550]: 2025-01-13 20:35:00.966 [INFO][4939] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" Namespace="kube-system" Pod="coredns-6f6b679f8f-wzl8j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0" Jan 13 
20:35:00.989362 containerd[1550]: 2025-01-13 20:35:00.966 [INFO][4939] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" Namespace="kube-system" Pod="coredns-6f6b679f8f-wzl8j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3c427557-4954-4485-9e6f-15dda2108a10", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334", Pod:"coredns-6f6b679f8f-wzl8j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic48ebaf5871", MAC:"76:2d:94:90:1e:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:00.989362 containerd[1550]: 2025-01-13 20:35:00.987 [INFO][4939] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334" Namespace="kube-system" Pod="coredns-6f6b679f8f-wzl8j" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wzl8j-eth0" Jan 13 20:35:01.029929 containerd[1550]: time="2025-01-13T20:35:01.029868049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:01.029929 containerd[1550]: time="2025-01-13T20:35:01.029905767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:01.029929 containerd[1550]: time="2025-01-13T20:35:01.029913547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:01.030195 containerd[1550]: time="2025-01-13T20:35:01.029956519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:01.044862 systemd[1]: Started cri-containerd-56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334.scope - libcontainer container 56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334. 
Jan 13 20:35:01.053581 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:35:01.074358 containerd[1550]: time="2025-01-13T20:35:01.074211656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzl8j,Uid:3c427557-4954-4485-9e6f-15dda2108a10,Namespace:kube-system,Attempt:5,} returns sandbox id \"56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334\"" Jan 13 20:35:01.075853 containerd[1550]: time="2025-01-13T20:35:01.075814304Z" level=info msg="CreateContainer within sandbox \"56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:35:01.115026 containerd[1550]: time="2025-01-13T20:35:01.114997828Z" level=info msg="CreateContainer within sandbox \"56a7a95a8447c14053d4d0262023016001e04467dfe27098b49b733ffefe2334\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"06a703b70651a1b9f1b5592d3506b702fa0ae2f1355084dc2ee68eb3ebdb4c14\"" Jan 13 20:35:01.115491 containerd[1550]: time="2025-01-13T20:35:01.115474732Z" level=info msg="StartContainer for \"06a703b70651a1b9f1b5592d3506b702fa0ae2f1355084dc2ee68eb3ebdb4c14\"" Jan 13 20:35:01.143900 systemd[1]: Started cri-containerd-06a703b70651a1b9f1b5592d3506b702fa0ae2f1355084dc2ee68eb3ebdb4c14.scope - libcontainer container 06a703b70651a1b9f1b5592d3506b702fa0ae2f1355084dc2ee68eb3ebdb4c14. 
Jan 13 20:35:01.169128 containerd[1550]: time="2025-01-13T20:35:01.169107090Z" level=info msg="StartContainer for \"06a703b70651a1b9f1b5592d3506b702fa0ae2f1355084dc2ee68eb3ebdb4c14\" returns successfully" Jan 13 20:35:01.356824 kernel: bpftool[5188]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:35:01.531022 systemd-networkd[1451]: vxlan.calico: Link UP Jan 13 20:35:01.531028 systemd-networkd[1451]: vxlan.calico: Gained carrier Jan 13 20:35:01.723234 kubelet[2793]: I0113 20:35:01.723101 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wzl8j" podStartSLOduration=23.722214069 podStartE2EDuration="23.722214069s" podCreationTimestamp="2025-01-13 20:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:35:01.708358624 +0000 UTC m=+29.591758474" watchObservedRunningTime="2025-01-13 20:35:01.722214069 +0000 UTC m=+29.605613914" Jan 13 20:35:01.818941 systemd-networkd[1451]: cali3c0a3570220: Gained IPv6LL Jan 13 20:35:01.882894 systemd-networkd[1451]: calidb7affc71bb: Gained IPv6LL Jan 13 20:35:02.074900 systemd-networkd[1451]: cali23228f064c3: Gained IPv6LL Jan 13 20:35:02.138844 systemd-networkd[1451]: calicfaf9eba73e: Gained IPv6LL Jan 13 20:35:02.268298 systemd-networkd[1451]: cali35674fe6735: Gained IPv6LL Jan 13 20:35:02.394975 systemd-networkd[1451]: calic48ebaf5871: Gained IPv6LL Jan 13 20:35:02.485755 containerd[1550]: time="2025-01-13T20:35:02.485386803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:02.486255 containerd[1550]: time="2025-01-13T20:35:02.486217805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 20:35:02.486499 containerd[1550]: time="2025-01-13T20:35:02.486488449Z" level=info 
msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:02.487997 containerd[1550]: time="2025-01-13T20:35:02.487969087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:02.488445 containerd[1550]: time="2025-01-13T20:35:02.488433550Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.040121595s" Jan 13 20:35:02.488665 containerd[1550]: time="2025-01-13T20:35:02.488638956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 20:35:02.489283 containerd[1550]: time="2025-01-13T20:35:02.489156044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:35:02.490340 containerd[1550]: time="2025-01-13T20:35:02.490277310Z" level=info msg="CreateContainer within sandbox \"e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:35:02.497865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount164980426.mount: Deactivated successfully. 
Jan 13 20:35:02.510693 containerd[1550]: time="2025-01-13T20:35:02.510675664Z" level=info msg="CreateContainer within sandbox \"e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2776445d322c7c1ff49ef8e8019d0477214846a1486569352ee0b92f2076adb2\"" Jan 13 20:35:02.511122 containerd[1550]: time="2025-01-13T20:35:02.511106703Z" level=info msg="StartContainer for \"2776445d322c7c1ff49ef8e8019d0477214846a1486569352ee0b92f2076adb2\"" Jan 13 20:35:02.535841 systemd[1]: Started cri-containerd-2776445d322c7c1ff49ef8e8019d0477214846a1486569352ee0b92f2076adb2.scope - libcontainer container 2776445d322c7c1ff49ef8e8019d0477214846a1486569352ee0b92f2076adb2. Jan 13 20:35:02.558254 containerd[1550]: time="2025-01-13T20:35:02.558203039Z" level=info msg="StartContainer for \"2776445d322c7c1ff49ef8e8019d0477214846a1486569352ee0b92f2076adb2\" returns successfully" Jan 13 20:35:03.034904 systemd-networkd[1451]: vxlan.calico: Gained IPv6LL Jan 13 20:35:05.697386 containerd[1550]: time="2025-01-13T20:35:05.697348600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:05.698191 containerd[1550]: time="2025-01-13T20:35:05.698160121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 20:35:05.698853 containerd[1550]: time="2025-01-13T20:35:05.698830835Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:05.704770 containerd[1550]: time="2025-01-13T20:35:05.703719195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 
20:35:05.704770 containerd[1550]: time="2025-01-13T20:35:05.704071926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.214899886s" Jan 13 20:35:05.704770 containerd[1550]: time="2025-01-13T20:35:05.704093702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 20:35:05.706735 containerd[1550]: time="2025-01-13T20:35:05.706719674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 20:35:05.707694 containerd[1550]: time="2025-01-13T20:35:05.707673128Z" level=info msg="CreateContainer within sandbox \"ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:35:05.725204 containerd[1550]: time="2025-01-13T20:35:05.725176369Z" level=info msg="CreateContainer within sandbox \"ce5421f6d361418c27caa131319d2f921b5f16312e32e7c13cb86be5a8703b3f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0130a4e1da952e3e2c658635481300bee02441158b29eace845e790b95b98b40\"" Jan 13 20:35:05.725524 containerd[1550]: time="2025-01-13T20:35:05.725499252Z" level=info msg="StartContainer for \"0130a4e1da952e3e2c658635481300bee02441158b29eace845e790b95b98b40\"" Jan 13 20:35:05.763843 systemd[1]: Started cri-containerd-0130a4e1da952e3e2c658635481300bee02441158b29eace845e790b95b98b40.scope - libcontainer container 0130a4e1da952e3e2c658635481300bee02441158b29eace845e790b95b98b40. 
Jan 13 20:35:05.790728 containerd[1550]: time="2025-01-13T20:35:05.790664025Z" level=info msg="StartContainer for \"0130a4e1da952e3e2c658635481300bee02441158b29eace845e790b95b98b40\" returns successfully" Jan 13 20:35:07.736709 kubelet[2793]: I0113 20:35:07.736691 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:35:07.848832 containerd[1550]: time="2025-01-13T20:35:07.848796590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:07.855812 containerd[1550]: time="2025-01-13T20:35:07.855779971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 20:35:07.860834 containerd[1550]: time="2025-01-13T20:35:07.860749064Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:07.866850 containerd[1550]: time="2025-01-13T20:35:07.866808491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:07.867449 containerd[1550]: time="2025-01-13T20:35:07.867292212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.159933066s" Jan 13 20:35:07.867449 containerd[1550]: time="2025-01-13T20:35:07.867314561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference 
\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 20:35:07.868448 containerd[1550]: time="2025-01-13T20:35:07.868429702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:35:07.898806 containerd[1550]: time="2025-01-13T20:35:07.898661374Z" level=info msg="CreateContainer within sandbox \"790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 20:35:07.908222 containerd[1550]: time="2025-01-13T20:35:07.908159999Z" level=info msg="CreateContainer within sandbox \"790d1b689f5bcecdc88f3d2bd0a915b4f2858e88320f0f1aeec5bfe58ed2316a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"120789ec78842d8088d7882f1b78463b228687793c6338936629f20f50f2017b\"" Jan 13 20:35:07.908804 containerd[1550]: time="2025-01-13T20:35:07.908488277Z" level=info msg="StartContainer for \"120789ec78842d8088d7882f1b78463b228687793c6338936629f20f50f2017b\"" Jan 13 20:35:07.910292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560523628.mount: Deactivated successfully. Jan 13 20:35:07.937846 systemd[1]: Started cri-containerd-120789ec78842d8088d7882f1b78463b228687793c6338936629f20f50f2017b.scope - libcontainer container 120789ec78842d8088d7882f1b78463b228687793c6338936629f20f50f2017b. 
Jan 13 20:35:07.974455 containerd[1550]: time="2025-01-13T20:35:07.974339358Z" level=info msg="StartContainer for \"120789ec78842d8088d7882f1b78463b228687793c6338936629f20f50f2017b\" returns successfully" Jan 13 20:35:08.261975 containerd[1550]: time="2025-01-13T20:35:08.261929483Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:08.262568 containerd[1550]: time="2025-01-13T20:35:08.262388798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 20:35:08.263662 containerd[1550]: time="2025-01-13T20:35:08.263645730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 395.195267ms" Jan 13 20:35:08.263705 containerd[1550]: time="2025-01-13T20:35:08.263664469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 20:35:08.264856 containerd[1550]: time="2025-01-13T20:35:08.264444786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 20:35:08.266289 containerd[1550]: time="2025-01-13T20:35:08.266186480Z" level=info msg="CreateContainer within sandbox \"069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:35:08.279345 containerd[1550]: time="2025-01-13T20:35:08.279316734Z" level=info msg="CreateContainer within sandbox \"069f116b616bb5a0bcf068811c00ff0f02621cdbcbeefb7f416ec07d67f1c2b7\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0e54cb5b218355c39281f694503c5455300f6be4f4e213d189fa458ea8f14dcd\"" Jan 13 20:35:08.279860 containerd[1550]: time="2025-01-13T20:35:08.279730398Z" level=info msg="StartContainer for \"0e54cb5b218355c39281f694503c5455300f6be4f4e213d189fa458ea8f14dcd\"" Jan 13 20:35:08.296935 systemd[1]: Started cri-containerd-0e54cb5b218355c39281f694503c5455300f6be4f4e213d189fa458ea8f14dcd.scope - libcontainer container 0e54cb5b218355c39281f694503c5455300f6be4f4e213d189fa458ea8f14dcd. Jan 13 20:35:08.329335 containerd[1550]: time="2025-01-13T20:35:08.329182396Z" level=info msg="StartContainer for \"0e54cb5b218355c39281f694503c5455300f6be4f4e213d189fa458ea8f14dcd\" returns successfully" Jan 13 20:35:09.027073 kubelet[2793]: I0113 20:35:09.026486 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b6bf69758-q2k5c" podStartSLOduration=19.886958145 podStartE2EDuration="25.026471489s" podCreationTimestamp="2025-01-13 20:34:44 +0000 UTC" firstStartedPulling="2025-01-13 20:35:00.567129076 +0000 UTC m=+28.450528917" lastFinishedPulling="2025-01-13 20:35:05.706642415 +0000 UTC m=+33.590042261" observedRunningTime="2025-01-13 20:35:06.738577775 +0000 UTC m=+34.621977625" watchObservedRunningTime="2025-01-13 20:35:09.026471489 +0000 UTC m=+36.909871343" Jan 13 20:35:09.029399 kubelet[2793]: I0113 20:35:09.029183 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:35:09.053622 kubelet[2793]: I0113 20:35:09.053248 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b6bf69758-whhr8" podStartSLOduration=17.647215160000002 podStartE2EDuration="25.053236877s" podCreationTimestamp="2025-01-13 20:34:44 +0000 UTC" firstStartedPulling="2025-01-13 20:35:00.858279095 +0000 UTC m=+28.741678936" lastFinishedPulling="2025-01-13 20:35:08.264300812 +0000 UTC m=+36.147700653" 
observedRunningTime="2025-01-13 20:35:09.026800737 +0000 UTC m=+36.910200584" watchObservedRunningTime="2025-01-13 20:35:09.053236877 +0000 UTC m=+36.936636722" Jan 13 20:35:09.065929 kubelet[2793]: I0113 20:35:09.065776 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-974644f4b-zs7h8" podStartSLOduration=17.846262107 podStartE2EDuration="25.065745971s" podCreationTimestamp="2025-01-13 20:34:44 +0000 UTC" firstStartedPulling="2025-01-13 20:35:00.648693686 +0000 UTC m=+28.532093527" lastFinishedPulling="2025-01-13 20:35:07.868177544 +0000 UTC m=+35.751577391" observedRunningTime="2025-01-13 20:35:09.052990243 +0000 UTC m=+36.936390092" watchObservedRunningTime="2025-01-13 20:35:09.065745971 +0000 UTC m=+36.949145815" Jan 13 20:35:09.925851 containerd[1550]: time="2025-01-13T20:35:09.925817473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:09.926862 containerd[1550]: time="2025-01-13T20:35:09.926839129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 20:35:09.927278 containerd[1550]: time="2025-01-13T20:35:09.927255333Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:09.928413 containerd[1550]: time="2025-01-13T20:35:09.928389742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:09.928916 containerd[1550]: time="2025-01-13T20:35:09.928839701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id 
\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.664377122s" Jan 13 20:35:09.928916 containerd[1550]: time="2025-01-13T20:35:09.928860904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 20:35:09.931427 containerd[1550]: time="2025-01-13T20:35:09.930128813Z" level=info msg="CreateContainer within sandbox \"e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:35:09.970277 containerd[1550]: time="2025-01-13T20:35:09.970224510Z" level=info msg="CreateContainer within sandbox \"e1c1f2e0578157fc22343deb9ff83a5d264da57f1cd92f707f688a7f22cb701f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b7d3515f8becccc89a7a171712fd0072bf4ff0d31898fcbc6aad129a890d64f0\"" Jan 13 20:35:09.971549 containerd[1550]: time="2025-01-13T20:35:09.970611693Z" level=info msg="StartContainer for \"b7d3515f8becccc89a7a171712fd0072bf4ff0d31898fcbc6aad129a890d64f0\"" Jan 13 20:35:09.998776 systemd[1]: run-containerd-runc-k8s.io-b7d3515f8becccc89a7a171712fd0072bf4ff0d31898fcbc6aad129a890d64f0-runc.55fIeN.mount: Deactivated successfully. Jan 13 20:35:10.003881 systemd[1]: Started cri-containerd-b7d3515f8becccc89a7a171712fd0072bf4ff0d31898fcbc6aad129a890d64f0.scope - libcontainer container b7d3515f8becccc89a7a171712fd0072bf4ff0d31898fcbc6aad129a890d64f0. 
Jan 13 20:35:10.046912 containerd[1550]: time="2025-01-13T20:35:10.046887386Z" level=info msg="StartContainer for \"b7d3515f8becccc89a7a171712fd0072bf4ff0d31898fcbc6aad129a890d64f0\" returns successfully" Jan 13 20:35:10.049715 kubelet[2793]: I0113 20:35:10.049649 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:35:10.049968 kubelet[2793]: I0113 20:35:10.049806 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:35:10.485673 kubelet[2793]: I0113 20:35:10.485651 2793 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 20:35:10.491681 kubelet[2793]: I0113 20:35:10.491669 2793 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:35:11.059550 kubelet[2793]: I0113 20:35:11.059226 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9s7kx" podStartSLOduration=17.577497333 podStartE2EDuration="27.059210323s" podCreationTimestamp="2025-01-13 20:34:44 +0000 UTC" firstStartedPulling="2025-01-13 20:35:00.447715511 +0000 UTC m=+28.331115353" lastFinishedPulling="2025-01-13 20:35:09.929428502 +0000 UTC m=+37.812828343" observedRunningTime="2025-01-13 20:35:11.058558604 +0000 UTC m=+38.941958456" watchObservedRunningTime="2025-01-13 20:35:11.059210323 +0000 UTC m=+38.942610175" Jan 13 20:35:11.539007 kubelet[2793]: I0113 20:35:11.538985 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:35:11.604704 systemd[1]: run-containerd-runc-k8s.io-120789ec78842d8088d7882f1b78463b228687793c6338936629f20f50f2017b-runc.af85EJ.mount: Deactivated successfully. 
Jan 13 20:35:26.536300 kubelet[2793]: I0113 20:35:26.536132 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:35:32.278639 containerd[1550]: time="2025-01-13T20:35:32.278559067Z" level=info msg="StopPodSandbox for \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\"" Jan 13 20:35:32.288225 containerd[1550]: time="2025-01-13T20:35:32.278635401Z" level=info msg="TearDown network for sandbox \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\" successfully" Jan 13 20:35:32.288225 containerd[1550]: time="2025-01-13T20:35:32.288194270Z" level=info msg="StopPodSandbox for \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\" returns successfully" Jan 13 20:35:32.321015 containerd[1550]: time="2025-01-13T20:35:32.320837544Z" level=info msg="RemovePodSandbox for \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\"" Jan 13 20:35:32.330488 containerd[1550]: time="2025-01-13T20:35:32.330469529Z" level=info msg="Forcibly stopping sandbox \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\"" Jan 13 20:35:32.331582 containerd[1550]: time="2025-01-13T20:35:32.330529005Z" level=info msg="TearDown network for sandbox \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\" successfully" Jan 13 20:35:32.334636 containerd[1550]: time="2025-01-13T20:35:32.334617435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.338996 containerd[1550]: time="2025-01-13T20:35:32.338981176Z" level=info msg="RemovePodSandbox \"bbc0b7b01d8aef129898adf1b7d42106d4add3b2ae138b90e3008446b6b949b3\" returns successfully" Jan 13 20:35:32.343580 containerd[1550]: time="2025-01-13T20:35:32.343562206Z" level=info msg="StopPodSandbox for \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\"" Jan 13 20:35:32.343772 containerd[1550]: time="2025-01-13T20:35:32.343678137Z" level=info msg="TearDown network for sandbox \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\" successfully" Jan 13 20:35:32.343772 containerd[1550]: time="2025-01-13T20:35:32.343726257Z" level=info msg="StopPodSandbox for \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\" returns successfully" Jan 13 20:35:32.344020 containerd[1550]: time="2025-01-13T20:35:32.343998304Z" level=info msg="RemovePodSandbox for \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\"" Jan 13 20:35:32.344060 containerd[1550]: time="2025-01-13T20:35:32.344019226Z" level=info msg="Forcibly stopping sandbox \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\"" Jan 13 20:35:32.344085 containerd[1550]: time="2025-01-13T20:35:32.344064967Z" level=info msg="TearDown network for sandbox \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\" successfully" Jan 13 20:35:32.345403 containerd[1550]: time="2025-01-13T20:35:32.345381581Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.345504 containerd[1550]: time="2025-01-13T20:35:32.345416259Z" level=info msg="RemovePodSandbox \"6a46530229a08ba6552a1e6944e21651e830f47907342f965b32f9b1bb3e4f96\" returns successfully" Jan 13 20:35:32.345969 containerd[1550]: time="2025-01-13T20:35:32.345741525Z" level=info msg="StopPodSandbox for \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\"" Jan 13 20:35:32.345969 containerd[1550]: time="2025-01-13T20:35:32.345801622Z" level=info msg="TearDown network for sandbox \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\" successfully" Jan 13 20:35:32.345969 containerd[1550]: time="2025-01-13T20:35:32.345810639Z" level=info msg="StopPodSandbox for \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\" returns successfully" Jan 13 20:35:32.346162 containerd[1550]: time="2025-01-13T20:35:32.346073661Z" level=info msg="RemovePodSandbox for \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\"" Jan 13 20:35:32.346162 containerd[1550]: time="2025-01-13T20:35:32.346086062Z" level=info msg="Forcibly stopping sandbox \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\"" Jan 13 20:35:32.346400 containerd[1550]: time="2025-01-13T20:35:32.346248974Z" level=info msg="TearDown network for sandbox \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\" successfully" Jan 13 20:35:32.347715 containerd[1550]: time="2025-01-13T20:35:32.347649570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.347715 containerd[1550]: time="2025-01-13T20:35:32.347680393Z" level=info msg="RemovePodSandbox \"4874c588ac9eaa3a7b3fe68756fa4c93a42ac68893b6e6eeaa2cd52f6809149d\" returns successfully" Jan 13 20:35:32.347891 containerd[1550]: time="2025-01-13T20:35:32.347841964Z" level=info msg="StopPodSandbox for \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\"" Jan 13 20:35:32.347918 containerd[1550]: time="2025-01-13T20:35:32.347891353Z" level=info msg="TearDown network for sandbox \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\" successfully" Jan 13 20:35:32.347918 containerd[1550]: time="2025-01-13T20:35:32.347898585Z" level=info msg="StopPodSandbox for \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\" returns successfully" Jan 13 20:35:32.356877 containerd[1550]: time="2025-01-13T20:35:32.356788182Z" level=info msg="RemovePodSandbox for \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\"" Jan 13 20:35:32.356877 containerd[1550]: time="2025-01-13T20:35:32.356802537Z" level=info msg="Forcibly stopping sandbox \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\"" Jan 13 20:35:32.356877 containerd[1550]: time="2025-01-13T20:35:32.356833138Z" level=info msg="TearDown network for sandbox \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\" successfully" Jan 13 20:35:32.357867 containerd[1550]: time="2025-01-13T20:35:32.357853260Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.357966 containerd[1550]: time="2025-01-13T20:35:32.357874781Z" level=info msg="RemovePodSandbox \"b2af76c62ba8d1d9f1fd642b47080894767bd73e667947e7ec583becd060a082\" returns successfully" Jan 13 20:35:32.358105 containerd[1550]: time="2025-01-13T20:35:32.358032948Z" level=info msg="StopPodSandbox for \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\"" Jan 13 20:35:32.358105 containerd[1550]: time="2025-01-13T20:35:32.358071686Z" level=info msg="TearDown network for sandbox \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\" successfully" Jan 13 20:35:32.358105 containerd[1550]: time="2025-01-13T20:35:32.358077762Z" level=info msg="StopPodSandbox for \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\" returns successfully" Jan 13 20:35:32.358334 containerd[1550]: time="2025-01-13T20:35:32.358284505Z" level=info msg="RemovePodSandbox for \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\"" Jan 13 20:35:32.358334 containerd[1550]: time="2025-01-13T20:35:32.358296276Z" level=info msg="Forcibly stopping sandbox \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\"" Jan 13 20:35:32.358810 containerd[1550]: time="2025-01-13T20:35:32.358449391Z" level=info msg="TearDown network for sandbox \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\" successfully" Jan 13 20:35:32.359644 containerd[1550]: time="2025-01-13T20:35:32.359582512Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.359644 containerd[1550]: time="2025-01-13T20:35:32.359602321Z" level=info msg="RemovePodSandbox \"e7d118a82ca05ea6e0723ef3e54c0228b5e4a8d84693d9ba4f72bddee9be2e32\" returns successfully" Jan 13 20:35:32.359977 containerd[1550]: time="2025-01-13T20:35:32.359732407Z" level=info msg="StopPodSandbox for \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\"" Jan 13 20:35:32.359977 containerd[1550]: time="2025-01-13T20:35:32.359824313Z" level=info msg="TearDown network for sandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" successfully" Jan 13 20:35:32.359977 containerd[1550]: time="2025-01-13T20:35:32.359831550Z" level=info msg="StopPodSandbox for \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" returns successfully" Jan 13 20:35:32.360464 containerd[1550]: time="2025-01-13T20:35:32.359978228Z" level=info msg="RemovePodSandbox for \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\"" Jan 13 20:35:32.360464 containerd[1550]: time="2025-01-13T20:35:32.359988116Z" level=info msg="Forcibly stopping sandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\"" Jan 13 20:35:32.360464 containerd[1550]: time="2025-01-13T20:35:32.360062013Z" level=info msg="TearDown network for sandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" successfully" Jan 13 20:35:32.362059 containerd[1550]: time="2025-01-13T20:35:32.361077658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.362059 containerd[1550]: time="2025-01-13T20:35:32.361095345Z" level=info msg="RemovePodSandbox \"4048fd99abd464c27dea86f086507987ce938de4d7f5b7e2066e8bc2d4981465\" returns successfully" Jan 13 20:35:32.362059 containerd[1550]: time="2025-01-13T20:35:32.361227509Z" level=info msg="StopPodSandbox for \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\"" Jan 13 20:35:32.362059 containerd[1550]: time="2025-01-13T20:35:32.361266409Z" level=info msg="TearDown network for sandbox \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\" successfully" Jan 13 20:35:32.362059 containerd[1550]: time="2025-01-13T20:35:32.361272267Z" level=info msg="StopPodSandbox for \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\" returns successfully" Jan 13 20:35:32.362059 containerd[1550]: time="2025-01-13T20:35:32.361435021Z" level=info msg="RemovePodSandbox for \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\"" Jan 13 20:35:32.362059 containerd[1550]: time="2025-01-13T20:35:32.361445292Z" level=info msg="Forcibly stopping sandbox \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\"" Jan 13 20:35:32.362059 containerd[1550]: time="2025-01-13T20:35:32.361476171Z" level=info msg="TearDown network for sandbox \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\" successfully" Jan 13 20:35:32.363476 containerd[1550]: time="2025-01-13T20:35:32.362590774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.363476 containerd[1550]: time="2025-01-13T20:35:32.362607521Z" level=info msg="RemovePodSandbox \"91beb2ee78f6c1e27193df8c09fd7b15428dd2c85e1e348ba5f45a71948c652b\" returns successfully" Jan 13 20:35:32.363476 containerd[1550]: time="2025-01-13T20:35:32.362737009Z" level=info msg="StopPodSandbox for \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\"" Jan 13 20:35:32.363476 containerd[1550]: time="2025-01-13T20:35:32.362827617Z" level=info msg="TearDown network for sandbox \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\" successfully" Jan 13 20:35:32.363476 containerd[1550]: time="2025-01-13T20:35:32.362834898Z" level=info msg="StopPodSandbox for \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\" returns successfully" Jan 13 20:35:32.363476 containerd[1550]: time="2025-01-13T20:35:32.362989649Z" level=info msg="RemovePodSandbox for \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\"" Jan 13 20:35:32.363476 containerd[1550]: time="2025-01-13T20:35:32.363010626Z" level=info msg="Forcibly stopping sandbox \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\"" Jan 13 20:35:32.363476 containerd[1550]: time="2025-01-13T20:35:32.363037337Z" level=info msg="TearDown network for sandbox \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\" successfully" Jan 13 20:35:32.364088 containerd[1550]: time="2025-01-13T20:35:32.364072281Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.364116 containerd[1550]: time="2025-01-13T20:35:32.364092546Z" level=info msg="RemovePodSandbox \"477010bce494b137ec8b1938994a06e8ae91daee18852c494e3d1c3b23a8ac34\" returns successfully" Jan 13 20:35:32.364332 containerd[1550]: time="2025-01-13T20:35:32.364250448Z" level=info msg="StopPodSandbox for \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\"" Jan 13 20:35:32.364332 containerd[1550]: time="2025-01-13T20:35:32.364288575Z" level=info msg="TearDown network for sandbox \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\" successfully" Jan 13 20:35:32.364332 containerd[1550]: time="2025-01-13T20:35:32.364294296Z" level=info msg="StopPodSandbox for \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\" returns successfully" Jan 13 20:35:32.364418 containerd[1550]: time="2025-01-13T20:35:32.364402975Z" level=info msg="RemovePodSandbox for \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\"" Jan 13 20:35:32.364672 containerd[1550]: time="2025-01-13T20:35:32.364418123Z" level=info msg="Forcibly stopping sandbox \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\"" Jan 13 20:35:32.364672 containerd[1550]: time="2025-01-13T20:35:32.364448546Z" level=info msg="TearDown network for sandbox \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\" successfully" Jan 13 20:35:32.365650 containerd[1550]: time="2025-01-13T20:35:32.365634165Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.365676 containerd[1550]: time="2025-01-13T20:35:32.365658489Z" level=info msg="RemovePodSandbox \"6289c03ad392aa7c63d91c9ff8c69b077eed8f7623c884ee954fa10f497c1f68\" returns successfully" Jan 13 20:35:32.365813 containerd[1550]: time="2025-01-13T20:35:32.365799607Z" level=info msg="StopPodSandbox for \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\"" Jan 13 20:35:32.365897 containerd[1550]: time="2025-01-13T20:35:32.365853254Z" level=info msg="TearDown network for sandbox \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\" successfully" Jan 13 20:35:32.365924 containerd[1550]: time="2025-01-13T20:35:32.365895434Z" level=info msg="StopPodSandbox for \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\" returns successfully" Jan 13 20:35:32.368088 containerd[1550]: time="2025-01-13T20:35:32.366023607Z" level=info msg="RemovePodSandbox for \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\"" Jan 13 20:35:32.368088 containerd[1550]: time="2025-01-13T20:35:32.366034423Z" level=info msg="Forcibly stopping sandbox \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\"" Jan 13 20:35:32.368088 containerd[1550]: time="2025-01-13T20:35:32.366087950Z" level=info msg="TearDown network for sandbox \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\" successfully" Jan 13 20:35:32.368088 containerd[1550]: time="2025-01-13T20:35:32.367264658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.368088 containerd[1550]: time="2025-01-13T20:35:32.367285699Z" level=info msg="RemovePodSandbox \"d7116cf24083e9f1fed17d506f9d885cea623b4670076c9ee007c8ee4624b41d\" returns successfully" Jan 13 20:35:32.368088 containerd[1550]: time="2025-01-13T20:35:32.367412215Z" level=info msg="StopPodSandbox for \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\"" Jan 13 20:35:32.368088 containerd[1550]: time="2025-01-13T20:35:32.367452592Z" level=info msg="TearDown network for sandbox \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\" successfully" Jan 13 20:35:32.368088 containerd[1550]: time="2025-01-13T20:35:32.367458313Z" level=info msg="StopPodSandbox for \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\" returns successfully" Jan 13 20:35:32.368088 containerd[1550]: time="2025-01-13T20:35:32.367567924Z" level=info msg="RemovePodSandbox for \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\"" Jan 13 20:35:32.368088 containerd[1550]: time="2025-01-13T20:35:32.367578055Z" level=info msg="Forcibly stopping sandbox \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\"" Jan 13 20:35:32.368088 containerd[1550]: time="2025-01-13T20:35:32.367607892Z" level=info msg="TearDown network for sandbox \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\" successfully" Jan 13 20:35:32.368802 containerd[1550]: time="2025-01-13T20:35:32.368787106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.368854 containerd[1550]: time="2025-01-13T20:35:32.368807980Z" level=info msg="RemovePodSandbox \"0ddb42fec9ae7506afe70e3965b5bae40899e80790fa769571601222a2d0ad90\" returns successfully" Jan 13 20:35:32.369054 containerd[1550]: time="2025-01-13T20:35:32.368990818Z" level=info msg="StopPodSandbox for \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\"" Jan 13 20:35:32.369168 containerd[1550]: time="2025-01-13T20:35:32.369110893Z" level=info msg="TearDown network for sandbox \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\" successfully" Jan 13 20:35:32.369168 containerd[1550]: time="2025-01-13T20:35:32.369125927Z" level=info msg="StopPodSandbox for \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\" returns successfully" Jan 13 20:35:32.369775 containerd[1550]: time="2025-01-13T20:35:32.369308865Z" level=info msg="RemovePodSandbox for \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\"" Jan 13 20:35:32.369775 containerd[1550]: time="2025-01-13T20:35:32.369320603Z" level=info msg="Forcibly stopping sandbox \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\"" Jan 13 20:35:32.369775 containerd[1550]: time="2025-01-13T20:35:32.369353232Z" level=info msg="TearDown network for sandbox \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\" successfully" Jan 13 20:35:32.370388 containerd[1550]: time="2025-01-13T20:35:32.370374806Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.370421 containerd[1550]: time="2025-01-13T20:35:32.370396012Z" level=info msg="RemovePodSandbox \"413d8a55ffc44734c311ec70a7923993c7f9d8cf3c9c8b2d2f190d099f27b0f4\" returns successfully" Jan 13 20:35:32.370740 containerd[1550]: time="2025-01-13T20:35:32.370578357Z" level=info msg="StopPodSandbox for \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\"" Jan 13 20:35:32.370740 containerd[1550]: time="2025-01-13T20:35:32.370641179Z" level=info msg="TearDown network for sandbox \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\" successfully" Jan 13 20:35:32.370740 containerd[1550]: time="2025-01-13T20:35:32.370647855Z" level=info msg="StopPodSandbox for \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\" returns successfully" Jan 13 20:35:32.371645 containerd[1550]: time="2025-01-13T20:35:32.370888760Z" level=info msg="RemovePodSandbox for \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\"" Jan 13 20:35:32.371645 containerd[1550]: time="2025-01-13T20:35:32.370918931Z" level=info msg="Forcibly stopping sandbox \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\"" Jan 13 20:35:32.371645 containerd[1550]: time="2025-01-13T20:35:32.370952598Z" level=info msg="TearDown network for sandbox \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\" successfully" Jan 13 20:35:32.372472 containerd[1550]: time="2025-01-13T20:35:32.372053272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.372472 containerd[1550]: time="2025-01-13T20:35:32.372071131Z" level=info msg="RemovePodSandbox \"64b8979b55d67e60b3fa42a01097522f8095607f0fa960ba64a404817341c7f3\" returns successfully" Jan 13 20:35:32.372472 containerd[1550]: time="2025-01-13T20:35:32.372184585Z" level=info msg="StopPodSandbox for \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\"" Jan 13 20:35:32.372472 containerd[1550]: time="2025-01-13T20:35:32.372221900Z" level=info msg="TearDown network for sandbox \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\" successfully" Jan 13 20:35:32.372472 containerd[1550]: time="2025-01-13T20:35:32.372227824Z" level=info msg="StopPodSandbox for \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\" returns successfully" Jan 13 20:35:32.372472 containerd[1550]: time="2025-01-13T20:35:32.372355421Z" level=info msg="RemovePodSandbox for \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\"" Jan 13 20:35:32.372472 containerd[1550]: time="2025-01-13T20:35:32.372366693Z" level=info msg="Forcibly stopping sandbox \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\"" Jan 13 20:35:32.372472 containerd[1550]: time="2025-01-13T20:35:32.372436477Z" level=info msg="TearDown network for sandbox \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\" successfully" Jan 13 20:35:32.373629 containerd[1550]: time="2025-01-13T20:35:32.373609898Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.373666 containerd[1550]: time="2025-01-13T20:35:32.373640120Z" level=info msg="RemovePodSandbox \"a1ee2322002098c5229463f9b32b44e59604c56396e2049048ed364c75cf4a46\" returns successfully" Jan 13 20:35:32.373804 containerd[1550]: time="2025-01-13T20:35:32.373795103Z" level=info msg="StopPodSandbox for \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\"" Jan 13 20:35:32.373995 containerd[1550]: time="2025-01-13T20:35:32.373908057Z" level=info msg="TearDown network for sandbox \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\" successfully" Jan 13 20:35:32.373995 containerd[1550]: time="2025-01-13T20:35:32.373915869Z" level=info msg="StopPodSandbox for \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\" returns successfully" Jan 13 20:35:32.374076 containerd[1550]: time="2025-01-13T20:35:32.374037158Z" level=info msg="RemovePodSandbox for \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\"" Jan 13 20:35:32.374101 containerd[1550]: time="2025-01-13T20:35:32.374077740Z" level=info msg="Forcibly stopping sandbox \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\"" Jan 13 20:35:32.374145 containerd[1550]: time="2025-01-13T20:35:32.374123086Z" level=info msg="TearDown network for sandbox \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\" successfully" Jan 13 20:35:32.375101 containerd[1550]: time="2025-01-13T20:35:32.375088062Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.375133 containerd[1550]: time="2025-01-13T20:35:32.375108746Z" level=info msg="RemovePodSandbox \"aa032e803c43624cecf32df0567785cac034e24357d8b965fd945c8591d00609\" returns successfully" Jan 13 20:35:32.375301 containerd[1550]: time="2025-01-13T20:35:32.375236506Z" level=info msg="StopPodSandbox for \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\"" Jan 13 20:35:32.375301 containerd[1550]: time="2025-01-13T20:35:32.375274937Z" level=info msg="TearDown network for sandbox \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\" successfully" Jan 13 20:35:32.375301 containerd[1550]: time="2025-01-13T20:35:32.375280690Z" level=info msg="StopPodSandbox for \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\" returns successfully" Jan 13 20:35:32.375508 containerd[1550]: time="2025-01-13T20:35:32.375493546Z" level=info msg="RemovePodSandbox for \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\"" Jan 13 20:35:32.375508 containerd[1550]: time="2025-01-13T20:35:32.375507187Z" level=info msg="Forcibly stopping sandbox \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\"" Jan 13 20:35:32.375556 containerd[1550]: time="2025-01-13T20:35:32.375536829Z" level=info msg="TearDown network for sandbox \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\" successfully" Jan 13 20:35:32.376517 containerd[1550]: time="2025-01-13T20:35:32.376500690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.376611 containerd[1550]: time="2025-01-13T20:35:32.376523488Z" level=info msg="RemovePodSandbox \"3bae6964dec0f6a598e7c1f8bdd0857d677ce0bb9f82bc0115d364f876b4f813\" returns successfully" Jan 13 20:35:32.376770 containerd[1550]: time="2025-01-13T20:35:32.376685512Z" level=info msg="StopPodSandbox for \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\"" Jan 13 20:35:32.376770 containerd[1550]: time="2025-01-13T20:35:32.376722130Z" level=info msg="TearDown network for sandbox \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\" successfully" Jan 13 20:35:32.376770 containerd[1550]: time="2025-01-13T20:35:32.376740328Z" level=info msg="StopPodSandbox for \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\" returns successfully" Jan 13 20:35:32.376992 containerd[1550]: time="2025-01-13T20:35:32.376929511Z" level=info msg="RemovePodSandbox for \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\"" Jan 13 20:35:32.376992 containerd[1550]: time="2025-01-13T20:35:32.376940988Z" level=info msg="Forcibly stopping sandbox \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\"" Jan 13 20:35:32.377106 containerd[1550]: time="2025-01-13T20:35:32.377060967Z" level=info msg="TearDown network for sandbox \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\" successfully" Jan 13 20:35:32.378112 containerd[1550]: time="2025-01-13T20:35:32.378100912Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.378241 containerd[1550]: time="2025-01-13T20:35:32.378169476Z" level=info msg="RemovePodSandbox \"aa2f7ef657349b475bb8966433f1821ad07c1ab26d9239f27e83ff79372b28f2\" returns successfully" Jan 13 20:35:32.378350 containerd[1550]: time="2025-01-13T20:35:32.378337243Z" level=info msg="StopPodSandbox for \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\"" Jan 13 20:35:32.378430 containerd[1550]: time="2025-01-13T20:35:32.378422256Z" level=info msg="TearDown network for sandbox \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\" successfully" Jan 13 20:35:32.378430 containerd[1550]: time="2025-01-13T20:35:32.378429777Z" level=info msg="StopPodSandbox for \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\" returns successfully" Jan 13 20:35:32.378543 containerd[1550]: time="2025-01-13T20:35:32.378533064Z" level=info msg="RemovePodSandbox for \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\"" Jan 13 20:35:32.378563 containerd[1550]: time="2025-01-13T20:35:32.378553835Z" level=info msg="Forcibly stopping sandbox \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\"" Jan 13 20:35:32.378631 containerd[1550]: time="2025-01-13T20:35:32.378584961Z" level=info msg="TearDown network for sandbox \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\" successfully" Jan 13 20:35:32.379705 containerd[1550]: time="2025-01-13T20:35:32.379689505Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.380527 containerd[1550]: time="2025-01-13T20:35:32.379710587Z" level=info msg="RemovePodSandbox \"74eb241c8b7978ebb56d638356c52eddb87ee77a7c6c67faa88292a074d91760\" returns successfully" Jan 13 20:35:32.380527 containerd[1550]: time="2025-01-13T20:35:32.379972993Z" level=info msg="StopPodSandbox for \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\"" Jan 13 20:35:32.380527 containerd[1550]: time="2025-01-13T20:35:32.380010746Z" level=info msg="TearDown network for sandbox \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\" successfully" Jan 13 20:35:32.380527 containerd[1550]: time="2025-01-13T20:35:32.380016773Z" level=info msg="StopPodSandbox for \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\" returns successfully" Jan 13 20:35:32.380527 containerd[1550]: time="2025-01-13T20:35:32.380131982Z" level=info msg="RemovePodSandbox for \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\"" Jan 13 20:35:32.380527 containerd[1550]: time="2025-01-13T20:35:32.380145349Z" level=info msg="Forcibly stopping sandbox \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\"" Jan 13 20:35:32.380527 containerd[1550]: time="2025-01-13T20:35:32.380184302Z" level=info msg="TearDown network for sandbox \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\" successfully" Jan 13 20:35:32.381276 containerd[1550]: time="2025-01-13T20:35:32.381262020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.382477 containerd[1550]: time="2025-01-13T20:35:32.381282032Z" level=info msg="RemovePodSandbox \"33eef0b0a6b6610d41a6196624f2e6d087db7259b8356193bc5cb2c53a85c6a3\" returns successfully" Jan 13 20:35:32.382477 containerd[1550]: time="2025-01-13T20:35:32.381475667Z" level=info msg="StopPodSandbox for \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\"" Jan 13 20:35:32.382477 containerd[1550]: time="2025-01-13T20:35:32.381513186Z" level=info msg="TearDown network for sandbox \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\" successfully" Jan 13 20:35:32.382477 containerd[1550]: time="2025-01-13T20:35:32.381519171Z" level=info msg="StopPodSandbox for \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\" returns successfully" Jan 13 20:35:32.382477 containerd[1550]: time="2025-01-13T20:35:32.381622607Z" level=info msg="RemovePodSandbox for \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\"" Jan 13 20:35:32.382477 containerd[1550]: time="2025-01-13T20:35:32.381631722Z" level=info msg="Forcibly stopping sandbox \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\"" Jan 13 20:35:32.382477 containerd[1550]: time="2025-01-13T20:35:32.381670959Z" level=info msg="TearDown network for sandbox \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\" successfully" Jan 13 20:35:32.383045 containerd[1550]: time="2025-01-13T20:35:32.383032716Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.383386 containerd[1550]: time="2025-01-13T20:35:32.383227138Z" level=info msg="RemovePodSandbox \"5851e6112a2f8be0cc5361301e166015a4200a57780e8f7dc299b0cf7cae905d\" returns successfully" Jan 13 20:35:32.383511 containerd[1550]: time="2025-01-13T20:35:32.383495010Z" level=info msg="StopPodSandbox for \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\"" Jan 13 20:35:32.383545 containerd[1550]: time="2025-01-13T20:35:32.383536215Z" level=info msg="TearDown network for sandbox \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\" successfully" Jan 13 20:35:32.383545 containerd[1550]: time="2025-01-13T20:35:32.383542150Z" level=info msg="StopPodSandbox for \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\" returns successfully" Jan 13 20:35:32.383682 containerd[1550]: time="2025-01-13T20:35:32.383655991Z" level=info msg="RemovePodSandbox for \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\"" Jan 13 20:35:32.383682 containerd[1550]: time="2025-01-13T20:35:32.383668215Z" level=info msg="Forcibly stopping sandbox \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\"" Jan 13 20:35:32.383756 containerd[1550]: time="2025-01-13T20:35:32.383703531Z" level=info msg="TearDown network for sandbox \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\" successfully" Jan 13 20:35:32.384818 containerd[1550]: time="2025-01-13T20:35:32.384803661Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.384848 containerd[1550]: time="2025-01-13T20:35:32.384830194Z" level=info msg="RemovePodSandbox \"676bb088f70d285286a19b35a78cc204a03260db19190d6de1cd038be0a837b3\" returns successfully" Jan 13 20:35:32.385094 containerd[1550]: time="2025-01-13T20:35:32.384963976Z" level=info msg="StopPodSandbox for \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\"" Jan 13 20:35:32.385094 containerd[1550]: time="2025-01-13T20:35:32.385003243Z" level=info msg="TearDown network for sandbox \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\" successfully" Jan 13 20:35:32.385094 containerd[1550]: time="2025-01-13T20:35:32.385009208Z" level=info msg="StopPodSandbox for \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\" returns successfully" Jan 13 20:35:32.385164 containerd[1550]: time="2025-01-13T20:35:32.385129689Z" level=info msg="RemovePodSandbox for \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\"" Jan 13 20:35:32.385164 containerd[1550]: time="2025-01-13T20:35:32.385139737Z" level=info msg="Forcibly stopping sandbox \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\"" Jan 13 20:35:32.385199 containerd[1550]: time="2025-01-13T20:35:32.385168889Z" level=info msg="TearDown network for sandbox \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\" successfully" Jan 13 20:35:32.386212 containerd[1550]: time="2025-01-13T20:35:32.386197346Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.386238 containerd[1550]: time="2025-01-13T20:35:32.386219249Z" level=info msg="RemovePodSandbox \"fbdc3f433e0b6dd6987c77a5971e6d7eab07f2f6940654c7dfab40a3247c270c\" returns successfully" Jan 13 20:35:32.388703 containerd[1550]: time="2025-01-13T20:35:32.388607137Z" level=info msg="StopPodSandbox for \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\"" Jan 13 20:35:32.388703 containerd[1550]: time="2025-01-13T20:35:32.388651009Z" level=info msg="TearDown network for sandbox \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\" successfully" Jan 13 20:35:32.388703 containerd[1550]: time="2025-01-13T20:35:32.388657282Z" level=info msg="StopPodSandbox for \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\" returns successfully" Jan 13 20:35:32.388889 containerd[1550]: time="2025-01-13T20:35:32.388809140Z" level=info msg="RemovePodSandbox for \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\"" Jan 13 20:35:32.388889 containerd[1550]: time="2025-01-13T20:35:32.388820593Z" level=info msg="Forcibly stopping sandbox \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\"" Jan 13 20:35:32.389268 containerd[1550]: time="2025-01-13T20:35:32.388986511Z" level=info msg="TearDown network for sandbox \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\" successfully" Jan 13 20:35:32.390056 containerd[1550]: time="2025-01-13T20:35:32.390044830Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.390113 containerd[1550]: time="2025-01-13T20:35:32.390104405Z" level=info msg="RemovePodSandbox \"aee0c8b257209e6316138d6302b91fb438a377ebb82f707483622b2f4a8fd4e3\" returns successfully" Jan 13 20:35:32.390283 containerd[1550]: time="2025-01-13T20:35:32.390273827Z" level=info msg="StopPodSandbox for \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\"" Jan 13 20:35:32.390393 containerd[1550]: time="2025-01-13T20:35:32.390358227Z" level=info msg="TearDown network for sandbox \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\" successfully" Jan 13 20:35:32.390393 containerd[1550]: time="2025-01-13T20:35:32.390366402Z" level=info msg="StopPodSandbox for \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\" returns successfully" Jan 13 20:35:32.392897 containerd[1550]: time="2025-01-13T20:35:32.392843872Z" level=info msg="RemovePodSandbox for \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\"" Jan 13 20:35:32.392897 containerd[1550]: time="2025-01-13T20:35:32.392856727Z" level=info msg="Forcibly stopping sandbox \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\"" Jan 13 20:35:32.392965 containerd[1550]: time="2025-01-13T20:35:32.392906722Z" level=info msg="TearDown network for sandbox \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\" successfully" Jan 13 20:35:32.393951 containerd[1550]: time="2025-01-13T20:35:32.393934879Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.394007 containerd[1550]: time="2025-01-13T20:35:32.393956589Z" level=info msg="RemovePodSandbox \"7f5fe6b5c0e6fbf23d7511db6d72cf492f0960126ad0ab7d9f70f747d5c523cd\" returns successfully" Jan 13 20:35:32.394257 containerd[1550]: time="2025-01-13T20:35:32.394132801Z" level=info msg="StopPodSandbox for \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\"" Jan 13 20:35:32.394257 containerd[1550]: time="2025-01-13T20:35:32.394180235Z" level=info msg="TearDown network for sandbox \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\" successfully" Jan 13 20:35:32.394257 containerd[1550]: time="2025-01-13T20:35:32.394186412Z" level=info msg="StopPodSandbox for \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\" returns successfully" Jan 13 20:35:32.394464 containerd[1550]: time="2025-01-13T20:35:32.394396464Z" level=info msg="RemovePodSandbox for \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\"" Jan 13 20:35:32.394464 containerd[1550]: time="2025-01-13T20:35:32.394408716Z" level=info msg="Forcibly stopping sandbox \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\"" Jan 13 20:35:32.394616 containerd[1550]: time="2025-01-13T20:35:32.394532268Z" level=info msg="TearDown network for sandbox \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\" successfully" Jan 13 20:35:32.395806 containerd[1550]: time="2025-01-13T20:35:32.395729251Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.395806 containerd[1550]: time="2025-01-13T20:35:32.395748279Z" level=info msg="RemovePodSandbox \"d7128d8e7d67fd9bdc24394ae74f0f78b1f21eb241e6535f4643a15656dc0886\" returns successfully" Jan 13 20:35:32.395904 containerd[1550]: time="2025-01-13T20:35:32.395887940Z" level=info msg="StopPodSandbox for \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\"" Jan 13 20:35:32.395929 containerd[1550]: time="2025-01-13T20:35:32.395925046Z" level=info msg="TearDown network for sandbox \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\" successfully" Jan 13 20:35:32.395957 containerd[1550]: time="2025-01-13T20:35:32.395931065Z" level=info msg="StopPodSandbox for \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\" returns successfully" Jan 13 20:35:32.396121 containerd[1550]: time="2025-01-13T20:35:32.396108566Z" level=info msg="RemovePodSandbox for \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\"" Jan 13 20:35:32.396121 containerd[1550]: time="2025-01-13T20:35:32.396121058Z" level=info msg="Forcibly stopping sandbox \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\"" Jan 13 20:35:32.396173 containerd[1550]: time="2025-01-13T20:35:32.396150784Z" level=info msg="TearDown network for sandbox \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\" successfully" Jan 13 20:35:32.397140 containerd[1550]: time="2025-01-13T20:35:32.397126344Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:32.397168 containerd[1550]: time="2025-01-13T20:35:32.397149853Z" level=info msg="RemovePodSandbox \"ed6c3e614de222d84118343b30435d7438cdcf91c327412dce7e52b66de3873f\" returns successfully"
Jan 13 20:35:32.397370 containerd[1550]: time="2025-01-13T20:35:32.397301118Z" level=info msg="StopPodSandbox for \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\""
Jan 13 20:35:32.397370 containerd[1550]: time="2025-01-13T20:35:32.397340268Z" level=info msg="TearDown network for sandbox \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\" successfully"
Jan 13 20:35:32.397370 containerd[1550]: time="2025-01-13T20:35:32.397347593Z" level=info msg="StopPodSandbox for \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\" returns successfully"
Jan 13 20:35:32.397634 containerd[1550]: time="2025-01-13T20:35:32.397544759Z" level=info msg="RemovePodSandbox for \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\""
Jan 13 20:35:32.397634 containerd[1550]: time="2025-01-13T20:35:32.397573484Z" level=info msg="Forcibly stopping sandbox \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\""
Jan 13 20:35:32.397634 containerd[1550]: time="2025-01-13T20:35:32.397603242Z" level=info msg="TearDown network for sandbox \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\" successfully"
Jan 13 20:35:32.405102 containerd[1550]: time="2025-01-13T20:35:32.404954455Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:35:32.405102 containerd[1550]: time="2025-01-13T20:35:32.404982192Z" level=info msg="RemovePodSandbox \"b8552890157c729b5d29e6166a59bb0f8d36226f04b18b3dd5f7192bf3219e8b\" returns successfully"
Jan 13 20:35:32.405796 containerd[1550]: time="2025-01-13T20:35:32.405494297Z" level=info msg="StopPodSandbox for \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\""
Jan 13 20:35:32.405796 containerd[1550]: time="2025-01-13T20:35:32.405538799Z" level=info msg="TearDown network for sandbox \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\" successfully"
Jan 13 20:35:32.405796 containerd[1550]: time="2025-01-13T20:35:32.405560012Z" level=info msg="StopPodSandbox for \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\" returns successfully"
Jan 13 20:35:32.406249 containerd[1550]: time="2025-01-13T20:35:32.406212706Z" level=info msg="RemovePodSandbox for \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\""
Jan 13 20:35:32.406249 containerd[1550]: time="2025-01-13T20:35:32.406224847Z" level=info msg="Forcibly stopping sandbox \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\""
Jan 13 20:35:32.406430 containerd[1550]: time="2025-01-13T20:35:32.406384634Z" level=info msg="TearDown network for sandbox \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\" successfully"
Jan 13 20:35:32.407616 containerd[1550]: time="2025-01-13T20:35:32.407496188Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:35:32.407616 containerd[1550]: time="2025-01-13T20:35:32.407515385Z" level=info msg="RemovePodSandbox \"a6a3933d826a1ed33bd1ba0ffaf585deb0cd5abb0e3cdcb401169007ea900810\" returns successfully"
Jan 13 20:35:32.407788 containerd[1550]: time="2025-01-13T20:35:32.407704659Z" level=info msg="StopPodSandbox for \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\""
Jan 13 20:35:32.407788 containerd[1550]: time="2025-01-13T20:35:32.407742579Z" level=info msg="TearDown network for sandbox \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\" successfully"
Jan 13 20:35:32.407788 containerd[1550]: time="2025-01-13T20:35:32.407748263Z" level=info msg="StopPodSandbox for \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\" returns successfully"
Jan 13 20:35:32.407982 containerd[1550]: time="2025-01-13T20:35:32.407973387Z" level=info msg="RemovePodSandbox for \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\""
Jan 13 20:35:32.408115 containerd[1550]: time="2025-01-13T20:35:32.408047456Z" level=info msg="Forcibly stopping sandbox \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\""
Jan 13 20:35:32.408172 containerd[1550]: time="2025-01-13T20:35:32.408078878Z" level=info msg="TearDown network for sandbox \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\" successfully"
Jan 13 20:35:32.409220 containerd[1550]: time="2025-01-13T20:35:32.409190356Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:35:32.409333 containerd[1550]: time="2025-01-13T20:35:32.409256558Z" level=info msg="RemovePodSandbox \"7160af99eb24f0c55883e3ab5e4d64824ed95bb409a005a2351cdc4eff712a24\" returns successfully"
Jan 13 20:35:32.409414 containerd[1550]: time="2025-01-13T20:35:32.409385087Z" level=info msg="StopPodSandbox for \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\""
Jan 13 20:35:32.409442 containerd[1550]: time="2025-01-13T20:35:32.409430674Z" level=info msg="TearDown network for sandbox \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\" successfully"
Jan 13 20:35:32.409442 containerd[1550]: time="2025-01-13T20:35:32.409436702Z" level=info msg="StopPodSandbox for \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\" returns successfully"
Jan 13 20:35:32.409792 containerd[1550]: time="2025-01-13T20:35:32.409590629Z" level=info msg="RemovePodSandbox for \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\""
Jan 13 20:35:32.409792 containerd[1550]: time="2025-01-13T20:35:32.409602731Z" level=info msg="Forcibly stopping sandbox \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\""
Jan 13 20:35:32.409792 containerd[1550]: time="2025-01-13T20:35:32.409632588Z" level=info msg="TearDown network for sandbox \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\" successfully"
Jan 13 20:35:32.410736 containerd[1550]: time="2025-01-13T20:35:32.410724812Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:35:32.410949 containerd[1550]: time="2025-01-13T20:35:32.410808799Z" level=info msg="RemovePodSandbox \"3d37e861359de5eae910215214dbf6fb4e0f7fa08e4196a9639fac115181ddd9\" returns successfully"
Jan 13 20:35:44.571950 systemd[1]: Started sshd@7-139.178.70.110:22-147.75.109.163:46330.service - OpenSSH per-connection server daemon (147.75.109.163:46330).
Jan 13 20:35:44.708903 sshd[5682]: Accepted publickey for core from 147.75.109.163 port 46330 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:35:44.711318 sshd-session[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:35:44.714717 systemd-logind[1524]: New session 10 of user core.
Jan 13 20:35:44.718837 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 20:35:45.224130 sshd[5684]: Connection closed by 147.75.109.163 port 46330
Jan 13 20:35:45.224647 sshd-session[5682]: pam_unix(sshd:session): session closed for user core
Jan 13 20:35:45.226952 systemd-logind[1524]: Session 10 logged out. Waiting for processes to exit.
Jan 13 20:35:45.227547 systemd[1]: sshd@7-139.178.70.110:22-147.75.109.163:46330.service: Deactivated successfully.
Jan 13 20:35:45.230917 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 20:35:45.232172 systemd-logind[1524]: Removed session 10.
Jan 13 20:35:50.233753 systemd[1]: Started sshd@8-139.178.70.110:22-147.75.109.163:45202.service - OpenSSH per-connection server daemon (147.75.109.163:45202).
Jan 13 20:35:50.294259 sshd[5718]: Accepted publickey for core from 147.75.109.163 port 45202 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:35:50.295388 sshd-session[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:35:50.299152 systemd-logind[1524]: New session 11 of user core.
Jan 13 20:35:50.302086 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 20:35:50.440597 sshd[5720]: Connection closed by 147.75.109.163 port 45202
Jan 13 20:35:50.440986 sshd-session[5718]: pam_unix(sshd:session): session closed for user core
Jan 13 20:35:50.442961 systemd[1]: sshd@8-139.178.70.110:22-147.75.109.163:45202.service: Deactivated successfully.
Jan 13 20:35:50.444019 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 20:35:50.444453 systemd-logind[1524]: Session 11 logged out. Waiting for processes to exit.
Jan 13 20:35:50.445206 systemd-logind[1524]: Removed session 11.
Jan 13 20:35:55.450302 systemd[1]: Started sshd@9-139.178.70.110:22-147.75.109.163:45216.service - OpenSSH per-connection server daemon (147.75.109.163:45216).
Jan 13 20:35:55.492992 sshd[5731]: Accepted publickey for core from 147.75.109.163 port 45216 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:35:55.493964 sshd-session[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:35:55.497563 systemd-logind[1524]: New session 12 of user core.
Jan 13 20:35:55.507957 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 20:35:55.602933 sshd[5733]: Connection closed by 147.75.109.163 port 45216
Jan 13 20:35:55.603417 sshd-session[5731]: pam_unix(sshd:session): session closed for user core
Jan 13 20:35:55.609529 systemd[1]: sshd@9-139.178.70.110:22-147.75.109.163:45216.service: Deactivated successfully.
Jan 13 20:35:55.611168 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 20:35:55.611668 systemd-logind[1524]: Session 12 logged out. Waiting for processes to exit.
Jan 13 20:35:55.615994 systemd[1]: Started sshd@10-139.178.70.110:22-147.75.109.163:45220.service - OpenSSH per-connection server daemon (147.75.109.163:45220).
Jan 13 20:35:55.617036 systemd-logind[1524]: Removed session 12.
Jan 13 20:35:55.644324 sshd[5745]: Accepted publickey for core from 147.75.109.163 port 45220 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:35:55.645208 sshd-session[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:35:55.648461 systemd-logind[1524]: New session 13 of user core.
Jan 13 20:35:55.653861 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 20:35:55.799257 sshd[5747]: Connection closed by 147.75.109.163 port 45220
Jan 13 20:35:55.800353 sshd-session[5745]: pam_unix(sshd:session): session closed for user core
Jan 13 20:35:55.805822 systemd[1]: sshd@10-139.178.70.110:22-147.75.109.163:45220.service: Deactivated successfully.
Jan 13 20:35:55.807883 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 20:35:55.812462 systemd-logind[1524]: Session 13 logged out. Waiting for processes to exit.
Jan 13 20:35:55.819595 systemd[1]: Started sshd@11-139.178.70.110:22-147.75.109.163:45236.service - OpenSSH per-connection server daemon (147.75.109.163:45236).
Jan 13 20:35:55.825312 systemd-logind[1524]: Removed session 13.
Jan 13 20:35:55.895288 sshd[5756]: Accepted publickey for core from 147.75.109.163 port 45236 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:35:55.896335 sshd-session[5756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:35:55.899974 systemd-logind[1524]: New session 14 of user core.
Jan 13 20:35:55.905871 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 20:35:56.007474 sshd[5758]: Connection closed by 147.75.109.163 port 45236
Jan 13 20:35:56.007917 sshd-session[5756]: pam_unix(sshd:session): session closed for user core
Jan 13 20:35:56.010280 systemd[1]: sshd@11-139.178.70.110:22-147.75.109.163:45236.service: Deactivated successfully.
Jan 13 20:35:56.011526 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 20:35:56.012087 systemd-logind[1524]: Session 14 logged out. Waiting for processes to exit.
Jan 13 20:35:56.012717 systemd-logind[1524]: Removed session 14.
Jan 13 20:36:01.015969 systemd[1]: Started sshd@12-139.178.70.110:22-147.75.109.163:50968.service - OpenSSH per-connection server daemon (147.75.109.163:50968).
Jan 13 20:36:01.049156 sshd[5773]: Accepted publickey for core from 147.75.109.163 port 50968 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:36:01.050128 sshd-session[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:36:01.053260 systemd-logind[1524]: New session 15 of user core.
Jan 13 20:36:01.062938 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 20:36:01.154076 sshd[5776]: Connection closed by 147.75.109.163 port 50968
Jan 13 20:36:01.154320 sshd-session[5773]: pam_unix(sshd:session): session closed for user core
Jan 13 20:36:01.157225 systemd[1]: sshd@12-139.178.70.110:22-147.75.109.163:50968.service: Deactivated successfully.
Jan 13 20:36:01.158554 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 20:36:01.159136 systemd-logind[1524]: Session 15 logged out. Waiting for processes to exit.
Jan 13 20:36:01.159716 systemd-logind[1524]: Removed session 15.
Jan 13 20:36:06.163663 systemd[1]: Started sshd@13-139.178.70.110:22-147.75.109.163:50982.service - OpenSSH per-connection server daemon (147.75.109.163:50982).
Jan 13 20:36:06.197682 sshd[5789]: Accepted publickey for core from 147.75.109.163 port 50982 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:36:06.198602 sshd-session[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:36:06.201604 systemd-logind[1524]: New session 16 of user core.
Jan 13 20:36:06.204886 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 20:36:06.346662 sshd[5791]: Connection closed by 147.75.109.163 port 50982
Jan 13 20:36:06.349593 sshd-session[5789]: pam_unix(sshd:session): session closed for user core
Jan 13 20:36:06.354602 systemd[1]: sshd@13-139.178.70.110:22-147.75.109.163:50982.service: Deactivated successfully.
Jan 13 20:36:06.357480 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 20:36:06.358506 systemd-logind[1524]: Session 16 logged out. Waiting for processes to exit.
Jan 13 20:36:06.363241 systemd[1]: Started sshd@14-139.178.70.110:22-147.75.109.163:50994.service - OpenSSH per-connection server daemon (147.75.109.163:50994).
Jan 13 20:36:06.364571 systemd-logind[1524]: Removed session 16.
Jan 13 20:36:06.563555 sshd[5802]: Accepted publickey for core from 147.75.109.163 port 50994 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:36:06.574033 sshd-session[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:36:06.577856 systemd-logind[1524]: New session 17 of user core.
Jan 13 20:36:06.581842 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 20:36:07.712370 sshd[5804]: Connection closed by 147.75.109.163 port 50994
Jan 13 20:36:07.722201 systemd[1]: Started sshd@15-139.178.70.110:22-147.75.109.163:53060.service - OpenSSH per-connection server daemon (147.75.109.163:53060).
Jan 13 20:36:07.734644 sshd-session[5802]: pam_unix(sshd:session): session closed for user core
Jan 13 20:36:07.740045 systemd[1]: sshd@14-139.178.70.110:22-147.75.109.163:50994.service: Deactivated successfully.
Jan 13 20:36:07.741789 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 20:36:07.742974 systemd-logind[1524]: Session 17 logged out. Waiting for processes to exit.
Jan 13 20:36:07.744216 systemd-logind[1524]: Removed session 17.
Jan 13 20:36:07.920014 sshd[5811]: Accepted publickey for core from 147.75.109.163 port 53060 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:36:07.925856 sshd-session[5811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:36:07.933793 systemd-logind[1524]: New session 18 of user core.
Jan 13 20:36:07.952891 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 20:36:09.983047 sshd[5815]: Connection closed by 147.75.109.163 port 53060
Jan 13 20:36:09.981991 sshd-session[5811]: pam_unix(sshd:session): session closed for user core
Jan 13 20:36:09.992324 systemd[1]: Started sshd@16-139.178.70.110:22-147.75.109.163:53070.service - OpenSSH per-connection server daemon (147.75.109.163:53070).
Jan 13 20:36:10.065724 systemd[1]: sshd@15-139.178.70.110:22-147.75.109.163:53060.service: Deactivated successfully.
Jan 13 20:36:10.067657 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 20:36:10.073290 systemd-logind[1524]: Session 18 logged out. Waiting for processes to exit.
Jan 13 20:36:10.074158 systemd-logind[1524]: Removed session 18.
Jan 13 20:36:10.196590 sshd[5829]: Accepted publickey for core from 147.75.109.163 port 53070 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:36:10.206575 sshd-session[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:36:10.210805 systemd-logind[1524]: New session 19 of user core.
Jan 13 20:36:10.218865 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 20:36:10.769278 sshd[5836]: Connection closed by 147.75.109.163 port 53070
Jan 13 20:36:10.770171 sshd-session[5829]: pam_unix(sshd:session): session closed for user core
Jan 13 20:36:10.777154 systemd[1]: sshd@16-139.178.70.110:22-147.75.109.163:53070.service: Deactivated successfully.
Jan 13 20:36:10.778753 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 20:36:10.780018 systemd-logind[1524]: Session 19 logged out. Waiting for processes to exit.
Jan 13 20:36:10.784046 systemd[1]: Started sshd@17-139.178.70.110:22-147.75.109.163:53074.service - OpenSSH per-connection server daemon (147.75.109.163:53074).
Jan 13 20:36:10.786428 systemd-logind[1524]: Removed session 19.
Jan 13 20:36:10.818783 sshd[5845]: Accepted publickey for core from 147.75.109.163 port 53074 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:36:10.819551 sshd-session[5845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:36:10.821917 systemd-logind[1524]: New session 20 of user core.
Jan 13 20:36:10.826836 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 20:36:10.946795 sshd[5847]: Connection closed by 147.75.109.163 port 53074
Jan 13 20:36:10.946661 sshd-session[5845]: pam_unix(sshd:session): session closed for user core
Jan 13 20:36:10.948983 systemd-logind[1524]: Session 20 logged out. Waiting for processes to exit.
Jan 13 20:36:10.949469 systemd[1]: sshd@17-139.178.70.110:22-147.75.109.163:53074.service: Deactivated successfully.
Jan 13 20:36:10.951644 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 20:36:10.953297 systemd-logind[1524]: Removed session 20.
Jan 13 20:36:15.956878 systemd[1]: Started sshd@18-139.178.70.110:22-147.75.109.163:53076.service - OpenSSH per-connection server daemon (147.75.109.163:53076).
Jan 13 20:36:16.114487 sshd[5878]: Accepted publickey for core from 147.75.109.163 port 53076 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:36:16.116167 sshd-session[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:36:16.120161 systemd-logind[1524]: New session 21 of user core.
Jan 13 20:36:16.124954 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 20:36:16.492020 sshd[5880]: Connection closed by 147.75.109.163 port 53076
Jan 13 20:36:16.491912 sshd-session[5878]: pam_unix(sshd:session): session closed for user core
Jan 13 20:36:16.493753 systemd-logind[1524]: Session 21 logged out. Waiting for processes to exit.
Jan 13 20:36:16.493877 systemd[1]: sshd@18-139.178.70.110:22-147.75.109.163:53076.service: Deactivated successfully.
Jan 13 20:36:16.494984 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 20:36:16.495891 systemd-logind[1524]: Removed session 21.
Jan 13 20:36:21.503409 systemd[1]: Started sshd@19-139.178.70.110:22-147.75.109.163:45198.service - OpenSSH per-connection server daemon (147.75.109.163:45198).
Jan 13 20:36:21.579390 sshd[5917]: Accepted publickey for core from 147.75.109.163 port 45198 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:36:21.583641 sshd-session[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:36:21.586322 systemd-logind[1524]: New session 22 of user core.
Jan 13 20:36:21.597933 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 20:36:21.759831 sshd[5919]: Connection closed by 147.75.109.163 port 45198
Jan 13 20:36:21.760516 sshd-session[5917]: pam_unix(sshd:session): session closed for user core
Jan 13 20:36:21.762528 systemd[1]: sshd@19-139.178.70.110:22-147.75.109.163:45198.service: Deactivated successfully.
Jan 13 20:36:21.763616 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 20:36:21.764048 systemd-logind[1524]: Session 22 logged out. Waiting for processes to exit.
Jan 13 20:36:21.764618 systemd-logind[1524]: Removed session 22.
Jan 13 20:36:26.777039 systemd[1]: Started sshd@20-139.178.70.110:22-147.75.109.163:45212.service - OpenSSH per-connection server daemon (147.75.109.163:45212).
Jan 13 20:36:26.854391 sshd[5938]: Accepted publickey for core from 147.75.109.163 port 45212 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:36:26.855460 sshd-session[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:36:26.859259 systemd-logind[1524]: New session 23 of user core.
Jan 13 20:36:26.865021 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 20:36:27.060708 sshd[5940]: Connection closed by 147.75.109.163 port 45212
Jan 13 20:36:27.061186 sshd-session[5938]: pam_unix(sshd:session): session closed for user core
Jan 13 20:36:27.063521 systemd-logind[1524]: Session 23 logged out. Waiting for processes to exit.
Jan 13 20:36:27.063744 systemd[1]: sshd@20-139.178.70.110:22-147.75.109.163:45212.service: Deactivated successfully.
Jan 13 20:36:27.065157 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 20:36:27.066432 systemd-logind[1524]: Removed session 23.
Jan 13 20:36:32.071108 systemd[1]: Started sshd@21-139.178.70.110:22-147.75.109.163:33068.service - OpenSSH per-connection server daemon (147.75.109.163:33068).
Jan 13 20:36:32.168176 sshd[5973]: Accepted publickey for core from 147.75.109.163 port 33068 ssh2: RSA SHA256:EKWTIpqKlhRTY8gr0WBQ1SLflvK+W4bCLRDtmnb42Eo
Jan 13 20:36:32.173754 sshd-session[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:36:32.177241 systemd-logind[1524]: New session 24 of user core.
Jan 13 20:36:32.182908 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 20:36:32.555231 sshd[5975]: Connection closed by 147.75.109.163 port 33068
Jan 13 20:36:32.555974 sshd-session[5973]: pam_unix(sshd:session): session closed for user core
Jan 13 20:36:32.558669 systemd-logind[1524]: Session 24 logged out. Waiting for processes to exit.
Jan 13 20:36:32.558803 systemd[1]: sshd@21-139.178.70.110:22-147.75.109.163:33068.service: Deactivated successfully.
Jan 13 20:36:32.560122 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 20:36:32.561013 systemd-logind[1524]: Removed session 24.