Jan 13 21:16:48.745557 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:16:48.745577 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:16:48.745587 kernel: Disabled fast string operations
Jan 13 21:16:48.745593 kernel: BIOS-provided physical RAM map:
Jan 13 21:16:48.745597 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jan 13 21:16:48.745601 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jan 13 21:16:48.745608 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jan 13 21:16:48.745612 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jan 13 21:16:48.745616 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jan 13 21:16:48.745620 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jan 13 21:16:48.745624 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jan 13 21:16:48.745629 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jan 13 21:16:48.745633 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jan 13 21:16:48.745637 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 13 21:16:48.745643 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jan 13 21:16:48.745648 kernel: NX (Execute Disable) protection: active
Jan 13 21:16:48.745653 kernel: APIC: Static calls initialized
Jan 13 21:16:48.745658 kernel: SMBIOS 2.7 present.
Jan 13 21:16:48.745663 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Jan 13 21:16:48.745667 kernel: vmware: hypercall mode: 0x00
Jan 13 21:16:48.745673 kernel: Hypervisor detected: VMware
Jan 13 21:16:48.745681 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Jan 13 21:16:48.745688 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Jan 13 21:16:48.745695 kernel: vmware: using clock offset of 4504381093 ns
Jan 13 21:16:48.745701 kernel: tsc: Detected 3408.000 MHz processor
Jan 13 21:16:48.745706 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:16:48.745712 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:16:48.745717 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Jan 13 21:16:48.745721 kernel: total RAM covered: 3072M
Jan 13 21:16:48.745726 kernel: Found optimal setting for mtrr clean up
Jan 13 21:16:48.745733 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Jan 13 21:16:48.745741 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Jan 13 21:16:48.745749 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:16:48.745757 kernel: Using GB pages for direct mapping
Jan 13 21:16:48.745765 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:16:48.745770 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Jan 13 21:16:48.745775 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Jan 13 21:16:48.745780 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Jan 13 21:16:48.745785 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Jan 13 21:16:48.745793 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jan 13 21:16:48.745806 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jan 13 21:16:48.745813 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Jan 13 21:16:48.745818 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Jan 13 21:16:48.745827 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Jan 13 21:16:48.745833 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Jan 13 21:16:48.745840 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Jan 13 21:16:48.745849 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Jan 13 21:16:48.745855 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Jan 13 21:16:48.745863 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Jan 13 21:16:48.745871 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jan 13 21:16:48.745876 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jan 13 21:16:48.745881 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Jan 13 21:16:48.745886 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Jan 13 21:16:48.745892 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Jan 13 21:16:48.745897 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Jan 13 21:16:48.745905 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Jan 13 21:16:48.745911 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Jan 13 21:16:48.745916 kernel: system APIC only can use physical flat
Jan 13 21:16:48.745922 kernel: APIC: Switched APIC routing to: physical flat
Jan 13 21:16:48.745927 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 21:16:48.745932 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 13 21:16:48.745937 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 13 21:16:48.745942 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 13 21:16:48.745951 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 13 21:16:48.745958 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 13 21:16:48.745963 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 13 21:16:48.745968 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 13 21:16:48.745973 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Jan 13 21:16:48.745981 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Jan 13 21:16:48.745988 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Jan 13 21:16:48.745995 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Jan 13 21:16:48.746004 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Jan 13 21:16:48.746012 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Jan 13 21:16:48.746020 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Jan 13 21:16:48.746027 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Jan 13 21:16:48.746032 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Jan 13 21:16:48.746037 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Jan 13 21:16:48.746042 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Jan 13 21:16:48.746047 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Jan 13 21:16:48.746052 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Jan 13 21:16:48.746057 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Jan 13 21:16:48.746062 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Jan 13 21:16:48.746068 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Jan 13 21:16:48.746074 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Jan 13 21:16:48.746081 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Jan 13 21:16:48.746086 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Jan 13 21:16:48.746091 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Jan 13 21:16:48.746096 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Jan 13 21:16:48.746101 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Jan 13 21:16:48.746109 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Jan 13 21:16:48.746118 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Jan 13 21:16:48.746127 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Jan 13 21:16:48.746132 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Jan 13 21:16:48.746138 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Jan 13 21:16:48.746148 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Jan 13 21:16:48.746153 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Jan 13 21:16:48.746160 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Jan 13 21:16:48.746166 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Jan 13 21:16:48.746172 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Jan 13 21:16:48.746177 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Jan 13 21:16:48.746182 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Jan 13 21:16:48.746187 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Jan 13 21:16:48.746192 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Jan 13 21:16:48.746197 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Jan 13 21:16:48.746203 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Jan 13 21:16:48.746208 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Jan 13 21:16:48.746213 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Jan 13 21:16:48.746218 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Jan 13 21:16:48.746223 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Jan 13 21:16:48.746228 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Jan 13 21:16:48.746233 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Jan 13 21:16:48.746238 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Jan 13 21:16:48.746243 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Jan 13 21:16:48.746248 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Jan 13 21:16:48.746254 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Jan 13 21:16:48.746259 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Jan 13 21:16:48.746264 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Jan 13 21:16:48.746270 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Jan 13 21:16:48.746279 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Jan 13 21:16:48.746288 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Jan 13 21:16:48.746293 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Jan 13 21:16:48.746300 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Jan 13 21:16:48.746308 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Jan 13 21:16:48.746319 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Jan 13 21:16:48.746329 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Jan 13 21:16:48.746336 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Jan 13 21:16:48.746344 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Jan 13 21:16:48.746349 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Jan 13 21:16:48.746355 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Jan 13 21:16:48.746360 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Jan 13 21:16:48.746368 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Jan 13 21:16:48.746377 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Jan 13 21:16:48.746387 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Jan 13 21:16:48.746392 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Jan 13 21:16:48.746398 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Jan 13 21:16:48.746403 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Jan 13 21:16:48.746409 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Jan 13 21:16:48.746414 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Jan 13 21:16:48.746419 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Jan 13 21:16:48.746428 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Jan 13 21:16:48.746435 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Jan 13 21:16:48.746440 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Jan 13 21:16:48.746447 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Jan 13 21:16:48.746454 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Jan 13 21:16:48.746463 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Jan 13 21:16:48.746494 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Jan 13 21:16:48.746505 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Jan 13 21:16:48.746511 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Jan 13 21:16:48.746517 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Jan 13 21:16:48.746522 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Jan 13 21:16:48.746527 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Jan 13 21:16:48.746533 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Jan 13 21:16:48.746542 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Jan 13 21:16:48.746548 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Jan 13 21:16:48.746554 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Jan 13 21:16:48.746559 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Jan 13 21:16:48.746565 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Jan 13 21:16:48.746570 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Jan 13 21:16:48.746575 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Jan 13 21:16:48.746582 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Jan 13 21:16:48.746590 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Jan 13 21:16:48.746595 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Jan 13 21:16:48.746602 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Jan 13 21:16:48.746608 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Jan 13 21:16:48.746614 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Jan 13 21:16:48.746622 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Jan 13 21:16:48.746631 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Jan 13 21:16:48.746641 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Jan 13 21:16:48.746651 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Jan 13 21:16:48.746657 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Jan 13 21:16:48.746663 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Jan 13 21:16:48.746670 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Jan 13 21:16:48.746682 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Jan 13 21:16:48.746687 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Jan 13 21:16:48.746693 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Jan 13 21:16:48.746698 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Jan 13 21:16:48.746704 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Jan 13 21:16:48.746709 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Jan 13 21:16:48.746714 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Jan 13 21:16:48.746720 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Jan 13 21:16:48.746725 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Jan 13 21:16:48.746730 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Jan 13 21:16:48.746737 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Jan 13 21:16:48.746743 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Jan 13 21:16:48.746748 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Jan 13 21:16:48.746753 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Jan 13 21:16:48.746759 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Jan 13 21:16:48.746768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 13 21:16:48.746774 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 13 21:16:48.746782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jan 13 21:16:48.746789 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Jan 13 21:16:48.746796 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Jan 13 21:16:48.746802 kernel: Zone ranges:
Jan 13 21:16:48.746807 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:16:48.746813 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Jan 13 21:16:48.746818 kernel: Normal empty
Jan 13 21:16:48.746824 kernel: Movable zone start for each node
Jan 13 21:16:48.746831 kernel: Early memory node ranges
Jan 13 21:16:48.746837 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Jan 13 21:16:48.746846 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Jan 13 21:16:48.746855 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Jan 13 21:16:48.746866 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jan 13 21:16:48.746872 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:16:48.746880 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jan 13 21:16:48.746886 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jan 13 21:16:48.746891 kernel: ACPI: PM-Timer IO Port: 0x1008
Jan 13 21:16:48.746897 kernel: system APIC only can use physical flat
Jan 13 21:16:48.746902 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jan 13 21:16:48.746908 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 13 21:16:48.746913 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 13 21:16:48.746920 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 13 21:16:48.746926 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 13 21:16:48.746935 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 13 21:16:48.746942 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 13 21:16:48.746951 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 13 21:16:48.746959 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 13 21:16:48.746967 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 13 21:16:48.746974 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 13 21:16:48.746979 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 13 21:16:48.746985 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 13 21:16:48.746992 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 13 21:16:48.746997 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 13 21:16:48.747005 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 13 21:16:48.747011 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 13 21:16:48.747016 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Jan 13 21:16:48.747021 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Jan 13 21:16:48.747027 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Jan 13 21:16:48.747033 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Jan 13 21:16:48.747038 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Jan 13 21:16:48.747048 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Jan 13 21:16:48.747054 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Jan 13 21:16:48.747059 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Jan 13 21:16:48.747065 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Jan 13 21:16:48.747070 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Jan 13 21:16:48.747078 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Jan 13 21:16:48.747085 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Jan 13 21:16:48.747094 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Jan 13 21:16:48.747103 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Jan 13 21:16:48.747112 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Jan 13 21:16:48.747119 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Jan 13 21:16:48.747125 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Jan 13 21:16:48.747130 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Jan 13 21:16:48.747136 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Jan 13 21:16:48.747141 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Jan 13 21:16:48.747148 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Jan 13 21:16:48.747155 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Jan 13 21:16:48.747160 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Jan 13 21:16:48.747166 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Jan 13 21:16:48.747171 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Jan 13 21:16:48.747178 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Jan 13 21:16:48.747184 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Jan 13 21:16:48.747194 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Jan 13 21:16:48.747203 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Jan 13 21:16:48.747212 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Jan 13 21:16:48.747219 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Jan 13 21:16:48.747226 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Jan 13 21:16:48.747232 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Jan 13 21:16:48.747241 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Jan 13 21:16:48.747248 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Jan 13 21:16:48.747254 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Jan 13 21:16:48.747259 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Jan 13 21:16:48.747265 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Jan 13 21:16:48.747270 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Jan 13 21:16:48.747275 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Jan 13 21:16:48.747281 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Jan 13 21:16:48.747286 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Jan 13 21:16:48.747292 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Jan 13 21:16:48.747297 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Jan 13 21:16:48.747304 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Jan 13 21:16:48.747309 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Jan 13 21:16:48.747314 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Jan 13 21:16:48.747320 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Jan 13 21:16:48.747325 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Jan 13 21:16:48.747330 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Jan 13 21:16:48.747336 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Jan 13 21:16:48.747341 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Jan 13 21:16:48.747346 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Jan 13 21:16:48.747351 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Jan 13 21:16:48.747360 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Jan 13 21:16:48.747366 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Jan 13 21:16:48.747372 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Jan 13 21:16:48.747380 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Jan 13 21:16:48.747388 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Jan 13 21:16:48.747398 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Jan 13 21:16:48.747407 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Jan 13 21:16:48.747417 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Jan 13 21:16:48.747428 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Jan 13 21:16:48.747436 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Jan 13 21:16:48.747442 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Jan 13 21:16:48.747447 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Jan 13 21:16:48.747453 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Jan 13 21:16:48.747459 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Jan 13 21:16:48.747472 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Jan 13 21:16:48.747479 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Jan 13 21:16:48.747486 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Jan 13 21:16:48.747494 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Jan 13 21:16:48.747499 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Jan 13 21:16:48.747506 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Jan 13 21:16:48.747512 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Jan 13 21:16:48.747517 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Jan 13 21:16:48.747523 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Jan 13 21:16:48.747530 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Jan 13 21:16:48.747538 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Jan 13 21:16:48.747547 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Jan 13 21:16:48.747552 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Jan 13 21:16:48.747558 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Jan 13 21:16:48.747563 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Jan 13 21:16:48.747570 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Jan 13 21:16:48.747575 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Jan 13 21:16:48.747582 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Jan 13 21:16:48.747589 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Jan 13 21:16:48.747594 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Jan 13 21:16:48.747600 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Jan 13 21:16:48.747605 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Jan 13 21:16:48.747610 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Jan 13 21:16:48.747616 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Jan 13 21:16:48.747625 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Jan 13 21:16:48.747632 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Jan 13 21:16:48.747637 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Jan 13 21:16:48.747643 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Jan 13 21:16:48.747648 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Jan 13 21:16:48.747657 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Jan 13 21:16:48.747665 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Jan 13 21:16:48.747672 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Jan 13 21:16:48.747682 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Jan 13 21:16:48.747691 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Jan 13 21:16:48.747698 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Jan 13 21:16:48.747704 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Jan 13 21:16:48.747712 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Jan 13 21:16:48.747721 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Jan 13 21:16:48.747727 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Jan 13 21:16:48.747732 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Jan 13 21:16:48.747738 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Jan 13 21:16:48.747744 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Jan 13 21:16:48.747749 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Jan 13 21:16:48.747754 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:16:48.747761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jan 13 21:16:48.747767 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:16:48.747772 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Jan 13 21:16:48.747778 kernel: TSC deadline timer available
Jan 13 21:16:48.747783 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Jan 13 21:16:48.747789 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Jan 13 21:16:48.747809 kernel: Booting paravirtualized kernel on VMware hypervisor
Jan 13 21:16:48.747819 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:16:48.747825 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Jan 13 21:16:48.747833 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 13 21:16:48.747838 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 13 21:16:48.747844 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Jan 13 21:16:48.747849 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Jan 13 21:16:48.747854 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Jan 13 21:16:48.747860 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Jan 13 21:16:48.747866 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Jan 13 21:16:48.747884 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Jan 13 21:16:48.747896 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Jan 13 21:16:48.747906 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Jan 13 21:16:48.747915 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Jan 13 21:16:48.747922 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Jan 13 21:16:48.747927 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Jan 13 21:16:48.747933 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Jan 13 21:16:48.747938 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Jan 13 21:16:48.747944 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Jan 13 21:16:48.747950 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Jan 13 21:16:48.747957 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Jan 13 21:16:48.747967 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:16:48.747976 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:16:48.747984 kernel: random: crng init done
Jan 13 21:16:48.747991 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jan 13 21:16:48.748001 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Jan 13 21:16:48.748009 kernel: printk: log_buf_len min size: 262144 bytes
Jan 13 21:16:48.748015 kernel: printk: log_buf_len: 1048576 bytes
Jan 13 21:16:48.748023 kernel: printk: early log buf free: 239648(91%)
Jan 13 21:16:48.748029 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:16:48.748035 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 21:16:48.748041 kernel: Fallback order for Node 0: 0
Jan 13 21:16:48.748049 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Jan 13 21:16:48.748055 kernel: Policy zone: DMA32
Jan 13 21:16:48.748061 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:16:48.748067 kernel: Memory: 1936376K/2096628K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 159992K reserved, 0K cma-reserved)
Jan 13 21:16:48.748075 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Jan 13 21:16:48.748081 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:16:48.748090 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:16:48.748098 kernel: Dynamic Preempt: voluntary
Jan 13 21:16:48.748104 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:16:48.748110 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:16:48.748116 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Jan 13 21:16:48.748127 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:16:48.748136 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:16:48.748146 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:16:48.748156 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:16:48.748162 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jan 13 21:16:48.748167 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jan 13 21:16:48.748173 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Jan 13 21:16:48.748179 kernel: Console: colour VGA+ 80x25
Jan 13 21:16:48.748185 kernel: printk: console [tty0] enabled
Jan 13 21:16:48.748191 kernel: printk: console [ttyS0] enabled
Jan 13 21:16:48.748201 kernel: ACPI: Core revision 20230628
Jan 13 21:16:48.748207 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jan 13 21:16:48.748213 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:16:48.748219 kernel: x2apic enabled
Jan 13 21:16:48.748225 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:16:48.748234 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:16:48.748243 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jan 13 21:16:48.748253 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jan 13 21:16:48.748262 kernel: Disabled fast string operations
Jan 13 21:16:48.748273 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 21:16:48.748279 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 21:16:48.748288 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:16:48.748294 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 13 21:16:48.748300 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 13 21:16:48.748307 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 13 21:16:48.748312 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:16:48.748318 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 13 21:16:48.748324 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 13 21:16:48.748332 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:16:48.748338 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:16:48.748343 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 21:16:48.748349 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 13 21:16:48.748355 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 13 21:16:48.748361 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:16:48.748367 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:16:48.748373 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:16:48.748379 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:16:48.748386 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 21:16:48.748392 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:16:48.748398 kernel: pid_max: default: 131072 minimum: 1024
Jan 13 21:16:48.748406 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:16:48.748412 kernel: landlock: Up and running.
Jan 13 21:16:48.748422 kernel: SELinux: Initializing.
Jan 13 21:16:48.748428 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 21:16:48.748438 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 21:16:48.748450 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jan 13 21:16:48.748462 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jan 13 21:16:48.748483 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jan 13 21:16:48.748495 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jan 13 21:16:48.748501 kernel: Performance Events: Skylake events, core PMU driver.
Jan 13 21:16:48.748508 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Jan 13 21:16:48.748515 kernel: core: CPUID marked event: 'instructions' unavailable
Jan 13 21:16:48.748521 kernel: core: CPUID marked event: 'bus cycles' unavailable
Jan 13 21:16:48.748526 kernel: core: CPUID marked event: 'cache references' unavailable
Jan 13 21:16:48.748535 kernel: core: CPUID marked event: 'cache misses' unavailable
Jan 13 21:16:48.748540 kernel: core: CPUID marked event: 'branch instructions' unavailable
Jan 13 21:16:48.748550 kernel: core: CPUID marked event: 'branch misses' unavailable
Jan 13 21:16:48.748556 kernel: ... version: 1
Jan 13 21:16:48.748562 kernel: ... bit width: 48
Jan 13 21:16:48.748567 kernel: ... generic registers: 4
Jan 13 21:16:48.748573 kernel: ... value mask: 0000ffffffffffff
Jan 13 21:16:48.748581 kernel: ...
max period: 000000007fffffff Jan 13 21:16:48.748591 kernel: ... fixed-purpose events: 0 Jan 13 21:16:48.748600 kernel: ... event mask: 000000000000000f Jan 13 21:16:48.748609 kernel: signal: max sigframe size: 1776 Jan 13 21:16:48.748619 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:16:48.748626 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:16:48.748632 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 13 21:16:48.748637 kernel: smp: Bringing up secondary CPUs ... Jan 13 21:16:48.748643 kernel: smpboot: x86: Booting SMP configuration: Jan 13 21:16:48.748649 kernel: .... node #0, CPUs: #1 Jan 13 21:16:48.748655 kernel: Disabled fast string operations Jan 13 21:16:48.748665 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jan 13 21:16:48.748671 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 13 21:16:48.748677 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 21:16:48.748683 kernel: smpboot: Max logical packages: 128 Jan 13 21:16:48.748689 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jan 13 21:16:48.748694 kernel: devtmpfs: initialized Jan 13 21:16:48.748704 kernel: x86/mm: Memory block size: 128MB Jan 13 21:16:48.748710 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jan 13 21:16:48.748716 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:16:48.748722 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jan 13 21:16:48.748731 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:16:48.748740 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:16:48.748748 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:16:48.748759 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:16:48.748769 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 21:16:48.748775 kernel: audit: type=2000 
audit(1736803006.070:1): state=initialized audit_enabled=0 res=1 Jan 13 21:16:48.748781 kernel: cpuidle: using governor menu Jan 13 21:16:48.748792 kernel: Simple Boot Flag at 0x36 set to 0x80 Jan 13 21:16:48.748802 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:16:48.748810 kernel: dca service started, version 1.12.1 Jan 13 21:16:48.748815 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jan 13 21:16:48.748822 kernel: PCI: Using configuration type 1 for base access Jan 13 21:16:48.748829 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 13 21:16:48.748834 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 21:16:48.748840 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 21:16:48.748846 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:16:48.748852 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:16:48.748858 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:16:48.748865 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:16:48.748872 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:16:48.748881 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:16:48.748886 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 21:16:48.748895 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 13 21:16:48.748902 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 21:16:48.748908 kernel: ACPI: Interpreter enabled Jan 13 21:16:48.748914 kernel: ACPI: PM: (supports S0 S1 S5) Jan 13 21:16:48.748919 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 21:16:48.748928 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 21:16:48.748933 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 21:16:48.748939 kernel: ACPI: Enabled 
4 GPEs in block 00 to 0F Jan 13 21:16:48.748945 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jan 13 21:16:48.749040 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:16:48.749114 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jan 13 21:16:48.749173 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 13 21:16:48.749185 kernel: PCI host bridge to bus 0000:00 Jan 13 21:16:48.749253 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 21:16:48.749310 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jan 13 21:16:48.749370 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 13 21:16:48.749416 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 21:16:48.749461 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jan 13 21:16:48.749547 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jan 13 21:16:48.749617 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jan 13 21:16:48.749688 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jan 13 21:16:48.749748 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jan 13 21:16:48.749818 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jan 13 21:16:48.749888 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jan 13 21:16:48.749943 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 13 21:16:48.750007 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 13 21:16:48.750076 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 13 21:16:48.750141 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 13 21:16:48.750202 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jan 13 21:16:48.750256 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] 
claimed by PIIX4 ACPI Jan 13 21:16:48.750317 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jan 13 21:16:48.750387 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jan 13 21:16:48.750451 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jan 13 21:16:48.750910 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jan 13 21:16:48.752544 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jan 13 21:16:48.752608 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jan 13 21:16:48.752664 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jan 13 21:16:48.752741 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jan 13 21:16:48.752804 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jan 13 21:16:48.752857 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 21:16:48.752914 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jan 13 21:16:48.752991 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.753054 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.753108 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.753174 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.753247 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.753316 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.753376 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.753439 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.753527 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.753586 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.753650 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.753702 kernel: pci 0000:00:15.5: PME# supported from 
D0 D3hot D3cold Jan 13 21:16:48.753781 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.753838 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.753912 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.753971 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.754042 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.754111 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.754167 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.754227 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.754297 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.754367 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.754447 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.758572 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.758637 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.758693 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.758748 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.758803 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.758857 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.758909 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.758963 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.759014 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.759067 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.759117 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.759173 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.759224 kernel: pci 
0000:00:17.1: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.759277 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.759328 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.759383 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.759434 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.760445 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.760525 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.760585 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.760637 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.760692 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.760743 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.760802 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.760853 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.760907 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.760958 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.761012 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.761062 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.761117 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.761171 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.761225 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.761276 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.761332 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.761383 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.761437 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jan 13 
21:16:48.762555 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.762620 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.762673 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.762726 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jan 13 21:16:48.762777 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jan 13 21:16:48.762830 kernel: pci_bus 0000:01: extended config space not accessible Jan 13 21:16:48.762886 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 21:16:48.762938 kernel: pci_bus 0000:02: extended config space not accessible Jan 13 21:16:48.762947 kernel: acpiphp: Slot [32] registered Jan 13 21:16:48.762953 kernel: acpiphp: Slot [33] registered Jan 13 21:16:48.762959 kernel: acpiphp: Slot [34] registered Jan 13 21:16:48.762965 kernel: acpiphp: Slot [35] registered Jan 13 21:16:48.762971 kernel: acpiphp: Slot [36] registered Jan 13 21:16:48.762977 kernel: acpiphp: Slot [37] registered Jan 13 21:16:48.762985 kernel: acpiphp: Slot [38] registered Jan 13 21:16:48.762991 kernel: acpiphp: Slot [39] registered Jan 13 21:16:48.762997 kernel: acpiphp: Slot [40] registered Jan 13 21:16:48.763003 kernel: acpiphp: Slot [41] registered Jan 13 21:16:48.763009 kernel: acpiphp: Slot [42] registered Jan 13 21:16:48.763014 kernel: acpiphp: Slot [43] registered Jan 13 21:16:48.763020 kernel: acpiphp: Slot [44] registered Jan 13 21:16:48.763026 kernel: acpiphp: Slot [45] registered Jan 13 21:16:48.763032 kernel: acpiphp: Slot [46] registered Jan 13 21:16:48.763038 kernel: acpiphp: Slot [47] registered Jan 13 21:16:48.763045 kernel: acpiphp: Slot [48] registered Jan 13 21:16:48.763051 kernel: acpiphp: Slot [49] registered Jan 13 21:16:48.763056 kernel: acpiphp: Slot [50] registered Jan 13 21:16:48.763062 kernel: acpiphp: Slot [51] registered Jan 13 21:16:48.763068 kernel: acpiphp: Slot [52] registered Jan 13 21:16:48.763074 kernel: acpiphp: Slot [53] registered 
Jan 13 21:16:48.763080 kernel: acpiphp: Slot [54] registered Jan 13 21:16:48.763086 kernel: acpiphp: Slot [55] registered Jan 13 21:16:48.763092 kernel: acpiphp: Slot [56] registered Jan 13 21:16:48.763099 kernel: acpiphp: Slot [57] registered Jan 13 21:16:48.763104 kernel: acpiphp: Slot [58] registered Jan 13 21:16:48.763110 kernel: acpiphp: Slot [59] registered Jan 13 21:16:48.763116 kernel: acpiphp: Slot [60] registered Jan 13 21:16:48.763122 kernel: acpiphp: Slot [61] registered Jan 13 21:16:48.763128 kernel: acpiphp: Slot [62] registered Jan 13 21:16:48.763134 kernel: acpiphp: Slot [63] registered Jan 13 21:16:48.763185 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jan 13 21:16:48.763235 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 13 21:16:48.763287 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 13 21:16:48.763337 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 13 21:16:48.763386 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jan 13 21:16:48.763441 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jan 13 21:16:48.764576 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jan 13 21:16:48.764631 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jan 13 21:16:48.764683 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jan 13 21:16:48.764742 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jan 13 21:16:48.764795 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jan 13 21:16:48.764846 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jan 13 21:16:48.764898 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 13 21:16:48.764949 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 13 
21:16:48.765000 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Jan 13 21:16:48.765052 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 13 21:16:48.765103 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 13 21:16:48.765156 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 13 21:16:48.765207 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 13 21:16:48.765258 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 13 21:16:48.765308 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 13 21:16:48.765358 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 13 21:16:48.765410 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 13 21:16:48.765460 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 13 21:16:48.766536 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 13 21:16:48.766587 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 13 21:16:48.766639 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 13 21:16:48.766689 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 13 21:16:48.766739 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 13 21:16:48.766789 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 13 21:16:48.766840 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 13 21:16:48.766889 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 13 21:16:48.766944 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 13 21:16:48.766995 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 13 21:16:48.767045 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 13 21:16:48.767096 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 13 21:16:48.767150 kernel: pci 0000:00:15.6: bridge window [mem 
0xfbd00000-0xfbdfffff] Jan 13 21:16:48.767200 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 13 21:16:48.767251 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 13 21:16:48.767301 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 13 21:16:48.767351 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 13 21:16:48.767407 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jan 13 21:16:48.768493 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jan 13 21:16:48.768551 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jan 13 21:16:48.768606 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jan 13 21:16:48.768657 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jan 13 21:16:48.768708 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 13 21:16:48.768760 kernel: pci 0000:0b:00.0: supports D1 D2 Jan 13 21:16:48.768811 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 21:16:48.768862 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 13 21:16:48.768913 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 13 21:16:48.768964 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 13 21:16:48.769016 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 13 21:16:48.769067 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 13 21:16:48.769117 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 13 21:16:48.769167 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 13 21:16:48.769217 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 13 21:16:48.769268 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 13 21:16:48.769318 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 13 21:16:48.769371 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 13 21:16:48.769421 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 13 21:16:48.772506 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 13 21:16:48.772578 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 13 21:16:48.772644 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 13 21:16:48.772698 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 13 21:16:48.772749 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 13 21:16:48.772799 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 13 21:16:48.772854 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 13 21:16:48.772904 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 13 21:16:48.772954 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 13 21:16:48.773005 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 13 21:16:48.773055 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 13 21:16:48.773104 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 13 21:16:48.773155 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 13 21:16:48.773204 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 13 21:16:48.773257 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 13 21:16:48.773308 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 13 21:16:48.773358 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 13 21:16:48.773408 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 13 21:16:48.773457 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 13 21:16:48.773522 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 13 21:16:48.773572 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 13 21:16:48.773621 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 13 21:16:48.773674 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 13 21:16:48.773724 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 13 21:16:48.773774 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 13 21:16:48.773824 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 13 21:16:48.773875 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 13 21:16:48.773926 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 13 21:16:48.773976 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 13 21:16:48.774028 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 13 21:16:48.774079 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 13 21:16:48.774128 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 13 21:16:48.774177 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 13 21:16:48.774227 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 13 21:16:48.774277 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 13 21:16:48.774326 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 13 21:16:48.774377 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 13 21:16:48.774434 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 13 21:16:48.774506 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 13 21:16:48.774558 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 13 21:16:48.774607 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 13 21:16:48.774656 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 13 21:16:48.774707 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 13 21:16:48.774756 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 13 21:16:48.774804 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 13 21:16:48.774857 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 13 21:16:48.774908 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 13 21:16:48.774958 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 13 21:16:48.775008 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 13 21:16:48.775058 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 13 21:16:48.775145 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 13 21:16:48.775198 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 13 21:16:48.775248 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 13 21:16:48.775302 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 13 21:16:48.775352 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 13 21:16:48.775402 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 13 21:16:48.775454 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 13 
21:16:48.775557 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Jan 13 21:16:48.775986 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Jan 13 21:16:48.776060 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Jan 13 21:16:48.776125 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Jan 13 21:16:48.776181 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Jan 13 21:16:48.776246 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Jan 13 21:16:48.776309 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Jan 13 21:16:48.776361 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Jan 13 21:16:48.776411 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Jan 13 21:16:48.776461 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Jan 13 21:16:48.776545 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Jan 13 21:16:48.776554 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Jan 13 21:16:48.776563 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Jan 13 21:16:48.776569 kernel: ACPI: PCI: Interrupt link LNKB disabled
Jan 13 21:16:48.776575 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:16:48.776581 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Jan 13 21:16:48.776587 kernel: iommu: Default domain type: Translated
Jan 13 21:16:48.776593 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:16:48.776599 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:16:48.776605 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:16:48.776611 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Jan 13 21:16:48.776618 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Jan 13 21:16:48.776668 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Jan 13 21:16:48.776718 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Jan 13 21:16:48.776778 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:16:48.776787 kernel: vgaarb: loaded
Jan 13 21:16:48.776795 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Jan 13 21:16:48.776803 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Jan 13 21:16:48.776812 kernel: clocksource: Switched to clocksource tsc-early
Jan 13 21:16:48.776821 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:16:48.776833 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:16:48.776840 kernel: pnp: PnP ACPI init
Jan 13 21:16:48.776908 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Jan 13 21:16:48.776964 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Jan 13 21:16:48.777012 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Jan 13 21:16:48.777071 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Jan 13 21:16:48.777141 kernel: pnp 00:06: [dma 2]
Jan 13 21:16:48.777198 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Jan 13 21:16:48.777245 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Jan 13 21:16:48.777291 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Jan 13 21:16:48.777299 kernel: pnp: PnP ACPI: found 8 devices
Jan 13 21:16:48.777306 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:16:48.777312 kernel: NET: Registered PF_INET protocol family
Jan 13 21:16:48.777318 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:16:48.777324 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 21:16:48.777332 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:16:48.777338 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:16:48.777347 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 21:16:48.777355 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 21:16:48.777364 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:16:48.777372 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:16:48.777379 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:16:48.777387 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:16:48.777458 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Jan 13 21:16:48.779545 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 13 21:16:48.779610 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 13 21:16:48.779669 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 13 21:16:48.779727 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 13 21:16:48.779792 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Jan 13 21:16:48.779849 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Jan 13 21:16:48.779905 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Jan 13 21:16:48.779980 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Jan 13 21:16:48.780046 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Jan 13 21:16:48.780112 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Jan 13 21:16:48.780171 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Jan 13 21:16:48.780226 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Jan 13 21:16:48.780289 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Jan 13 21:16:48.780345 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Jan 13 21:16:48.780399 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Jan 13 21:16:48.781205 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Jan 13 21:16:48.781278 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Jan 13 21:16:48.781357 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Jan 13 21:16:48.781423 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Jan 13 21:16:48.781485 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Jan 13 21:16:48.781542 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Jan 13 21:16:48.781613 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Jan 13 21:16:48.781677 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
Jan 13 21:16:48.781739 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
Jan 13 21:16:48.781802 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.781855 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.781922 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.781983 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782044 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782095 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782155 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782226 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782280 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782342 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782395 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782445 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782530 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782596 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782649 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782702 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782765 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782821 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782873 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782933 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782988 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783038 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783100 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783164 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783224 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783275 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783330 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783389 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783446 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783540 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783596 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783649 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783703 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783774 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783839 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783889 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783944 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784004 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784054 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784126 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784180 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784230 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784306 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784383 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784464 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784579 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784632 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784683 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784746 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784798 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784848 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784911 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784981 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785034 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785083 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785137 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785187 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785239 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785292 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785344 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785395 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785447 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785602 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785658 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785718 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785769 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785819 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785879 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785935 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785986 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786039 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.786101 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786169 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.786227 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786278 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.786332 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786386 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.786437 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786507 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.786560 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786614 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.786676 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786745 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.786796 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786859 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 13 21:16:48.786911 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Jan 13 21:16:48.786966 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Jan 13 21:16:48.787015 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Jan 13 21:16:48.787068 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Jan 13 21:16:48.787135 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref]
Jan 13 21:16:48.787196 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Jan 13 21:16:48.787253 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Jan 13 21:16:48.787321 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Jan 13 21:16:48.787379 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Jan 13 21:16:48.787447 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Jan 13 21:16:48.787520 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Jan 13 21:16:48.787573 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Jan 13 21:16:48.787624 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Jan 13 21:16:48.787701 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Jan 13 21:16:48.787756 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Jan 13 21:16:48.787811 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Jan 13 21:16:48.787866 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Jan 13 21:16:48.787939 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Jan 13 21:16:48.788000 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Jan 13 21:16:48.788067 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Jan 13 21:16:48.788120 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Jan 13 21:16:48.788171 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Jan 13 21:16:48.788221 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Jan 13 21:16:48.788278 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Jan 13 21:16:48.788333 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Jan 13 21:16:48.788394 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Jan 13 21:16:48.788449 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Jan 13 21:16:48.788524 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Jan 13 21:16:48.788585 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Jan 13 21:16:48.788638 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Jan 13 21:16:48.788697 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Jan 13 21:16:48.788748 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Jan 13 21:16:48.788802 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
Jan 13 21:16:48.788867 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Jan 13 21:16:48.788921 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Jan 13 21:16:48.788984 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Jan 13 21:16:48.789039 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Jan 13 21:16:48.789105 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Jan 13 21:16:48.789166 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Jan 13 21:16:48.789224 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Jan 13 21:16:48.789282 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Jan 13 21:16:48.789334 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Jan 13 21:16:48.789387 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Jan 13 21:16:48.789448 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Jan 13 21:16:48.790046 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Jan 13 21:16:48.790116 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Jan 13 21:16:48.790173 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Jan 13 21:16:48.790232 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Jan 13 21:16:48.790297 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Jan 13 21:16:48.790366 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Jan 13 21:16:48.790418 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Jan 13 21:16:48.790605 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Jan 13 21:16:48.790667 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Jan 13 21:16:48.790725 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Jan 13 21:16:48.790791 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Jan 13 21:16:48.790855 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Jan 13 21:16:48.790921 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Jan 13 21:16:48.790974 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Jan 13 21:16:48.791023 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Jan 13 21:16:48.791083 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Jan 13 21:16:48.791136 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Jan 13 21:16:48.791189 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Jan 13 21:16:48.791248 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Jan 13 21:16:48.791302 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Jan 13 21:16:48.791369 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Jan 13 21:16:48.791431 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Jan 13 21:16:48.791498 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Jan 13 21:16:48.791549 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Jan 13 21:16:48.791604 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Jan 13 21:16:48.791660 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Jan 13 21:16:48.791710 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Jan 13 21:16:48.791766 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Jan 13 21:16:48.791824 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Jan 13 21:16:48.791884 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Jan 13 21:16:48.791948 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Jan 13 21:16:48.792014 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Jan 13 21:16:48.792070 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Jan 13 21:16:48.792120 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Jan 13 21:16:48.792180 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Jan 13 21:16:48.792233 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Jan 13 21:16:48.792286 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Jan 13 21:16:48.792348 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Jan 13 21:16:48.792404 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Jan 13 21:16:48.792482 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Jan 13 21:16:48.792554 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Jan 13 21:16:48.792614 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Jan 13 21:16:48.792666 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Jan 13 21:16:48.792721 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Jan 13 21:16:48.792778 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Jan 13 21:16:48.792832 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Jan 13 21:16:48.792894 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Jan 13 21:16:48.792952 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Jan 13 21:16:48.793016 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Jan 13 21:16:48.793078 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Jan 13 21:16:48.793145 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Jan 13 21:16:48.793199 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Jan 13 21:16:48.793250 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Jan 13 21:16:48.793309 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Jan 13 21:16:48.793363 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Jan 13 21:16:48.793417 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Jan 13 21:16:48.793484 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Jan 13 21:16:48.793545 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Jan 13 21:16:48.793604 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Jan 13 21:16:48.793661 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Jan 13 21:16:48.793720 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Jan 13 21:16:48.793775 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Jan 13 21:16:48.793826 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Jan 13 21:16:48.793883 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Jan 13 21:16:48.793940 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Jan 13 21:16:48.793995 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Jan 13 21:16:48.794050 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Jan 13 21:16:48.794114 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Jan 13 21:16:48.794169 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Jan 13 21:16:48.794225 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
Jan 13 21:16:48.794276 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window]
Jan 13 21:16:48.794330 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:16:48.794375 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window]
Jan 13 21:16:48.794422 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window]
Jan 13 21:16:48.795095 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff]
Jan 13 21:16:48.795155 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
Jan 13 21:16:48.795211 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
Jan 13 21:16:48.795270 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
Jan 13 21:16:48.795323 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window]
Jan 13 21:16:48.795376 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:16:48.795434 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window]
Jan 13 21:16:48.795845 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window]
Jan 13 21:16:48.795913 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff]
Jan 13 21:16:48.795971 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
Jan 13 21:16:48.796027 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
Jan 13 21:16:48.796083 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff]
Jan 13 21:16:48.796131 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
Jan 13 21:16:48.796178 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
Jan 13 21:16:48.796247 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff]
Jan 13 21:16:48.796296 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
Jan 13 21:16:48.796351 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
Jan 13 21:16:48.796407 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
Jan 13 21:16:48.796462 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
Jan 13 21:16:48.796535 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
Jan 13 21:16:48.796594 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
Jan 13 21:16:48.796650 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
Jan 13 21:16:48.796696 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
Jan 13 21:16:48.796747 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
Jan 13 21:16:48.796794 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
Jan 13 21:16:48.796857 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
Jan 13 21:16:48.796915 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
Jan 13 21:16:48.796973 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]
Jan 13 21:16:48.797039 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
Jan 13 21:16:48.797101 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
Jan 13 21:16:48.797161 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff]
Jan 13 21:16:48.797212 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
Jan 13 21:16:48.797271 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
Jan 13 21:16:48.797326 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff]
Jan 13 21:16:48.797375 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
Jan 13 21:16:48.797442 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
Jan 13 21:16:48.797505 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
Jan 13 21:16:48.797561 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
Jan 13 21:16:48.797620 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
Jan 13 21:16:48.797679 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
Jan 13 21:16:48.797739 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
Jan 13 21:16:48.797792 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
Jan 13 21:16:48.797857 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
Jan 13 21:16:48.797914 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
Jan 13 21:16:48.797966 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
Jan 13 21:16:48.798027 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
Jan 13 21:16:48.798082 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff]
Jan 13 21:16:48.798130 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
Jan 13 21:16:48.798181 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
Jan 13 21:16:48.798236 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff]
Jan 13 21:16:48.798295 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
Jan 13 21:16:48.798350 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
Jan 13 21:16:48.798413 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff]
Jan 13 21:16:48.798463 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
Jan 13 21:16:48.798536 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
Jan 13 21:16:48.798603 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
Jan 13 21:16:48.798653 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
Jan 13 21:16:48.798713 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
Jan 13 21:16:48.798765 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
Jan 13 21:16:48.798820 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
Jan 13 21:16:48.798878 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
Jan 13 21:16:48.798932 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
Jan 13 21:16:48.798989 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
Jan 13 21:16:48.799040 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
Jan 13 21:16:48.799090 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
Jan 13 21:16:48.799152 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
Jan 13 21:16:48.799201 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
Jan 13 21:16:48.799257 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
Jan 13 21:16:48.799313 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff]
Jan 13 21:16:48.799369 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
Jan 13 21:16:48.799423 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
Jan 13 21:16:48.799551 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
Jan 13 21:16:48.799603 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
Jan 13 21:16:48.799656 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
Jan 13 21:16:48.799710 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
Jan 13 21:16:48.799766 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
Jan 13 21:16:48.799824 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
Jan 13 21:16:48.799883 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
Jan 13 21:16:48.799941 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
Jan 13 21:16:48.799994 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
Jan 13 21:16:48.800044 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
Jan 13 21:16:48.800103 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
Jan 13 21:16:48.800151 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
Jan 13 21:16:48.800209 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 21:16:48.800220 kernel: PCI: CLS 32 bytes, default 64
Jan 13 21:16:48.800230 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 21:16:48.800237 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jan 13 21:16:48.800248 kernel: clocksource: Switched to clocksource tsc
Jan 13 21:16:48.800255 kernel: Initialise system trusted keyrings
Jan 13 21:16:48.800262 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 13 21:16:48.800268 kernel: Key type asymmetric registered
Jan 13 21:16:48.800276 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:16:48.800283 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:16:48.800289 kernel: io scheduler mq-deadline registered
Jan 13 21:16:48.800297 kernel: io scheduler kyber registered
Jan 13 21:16:48.800303 kernel: io scheduler bfq registered
Jan 13 21:16:48.800361 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
Jan 13 21:16:48.800423 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.800495 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Jan 13 21:16:48.800554 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.800613 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Jan 13 21:16:48.800676 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.800729 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Jan 13 21:16:48.800784 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.800844 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Jan 13 21:16:48.800905 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.800966 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Jan 13 21:16:48.801025 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.801091 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Jan 13 21:16:48.801144 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.801200 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Jan 13 21:16:48.801262 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.801316 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Jan 13 21:16:48.801371 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.801433 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Jan 13 21:16:48.801498 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.801710 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Jan 13 21:16:48.801780 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.801838 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Jan 13 21:16:48.801893 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.801952 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Jan 13 21:16:48.802015 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.802072 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Jan 13 21:16:48.802137 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.802197 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Jan 13 21:16:48.802252 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.802305 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Jan 13 21:16:48.802357 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.802695 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Jan 13 21:16:48.802764 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.803540 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Jan 13 21:16:48.803602 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.803666 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Jan 13 21:16:48.803729 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.803786 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Jan 13 21:16:48.803849 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.803912 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Jan 13 21:16:48.803974 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.804029 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Jan 13 21:16:48.804081 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.804134 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Jan 13 21:16:48.804192 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.804253 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Jan 13 21:16:48.804315 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.804372 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Jan 13 21:16:48.804438 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.805609 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Jan 13 21:16:48.805675 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.805741 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Jan 13 21:16:48.805797 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.805850 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Jan 13 21:16:48.805907 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.805974 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Jan 13 21:16:48.806033 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.806096 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Jan 13 21:16:48.806165 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.806228 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Jan 13 21:16:48.806286 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.806350 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Jan 13 21:16:48.806402 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jan 13 21:16:48.806411 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Jan 13 21:16:48.806418 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:16:48.806425 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 21:16:48.806431 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jan 13 21:16:48.806438 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 21:16:48.806446 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 21:16:48.806528 kernel: rtc_cmos 00:01: registered as rtc0 Jan 13 21:16:48.806593 kernel: rtc_cmos 00:01: setting system clock to 2025-01-13T21:16:48 UTC (1736803008) Jan 13 21:16:48.806605 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 21:16:48.806654 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jan 13 21:16:48.806664 kernel: intel_pstate: CPU model not supported Jan 13 21:16:48.806671 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:16:48.806677 kernel: Segment Routing with IPv6 Jan 13 21:16:48.806686 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:16:48.806692 kernel: NET: Registered PF_PACKET protocol family Jan 13 21:16:48.806701 kernel: Key type dns_resolver registered Jan 13 21:16:48.806708 kernel: IPI shorthand broadcast: enabled Jan 13 21:16:48.806718 kernel: sched_clock: Marking stable (915003407, 232819235)->(1211295718, -63473076) Jan 13 21:16:48.806726 kernel: registered taskstats version 1 Jan 13 21:16:48.806732 kernel: Loading compiled-in X.509 certificates Jan 13 21:16:48.806739 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 13 21:16:48.806745 kernel: Key type .fscrypt registered Jan 13 21:16:48.806753 kernel: Key type fscrypt-provisioning registered Jan 13 21:16:48.806759 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 21:16:48.806768 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:16:48.806775 kernel: ima: No architecture policies found Jan 13 21:16:48.806781 kernel: clk: Disabling unused clocks Jan 13 21:16:48.806787 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 21:16:48.806793 kernel: Write protecting the kernel read-only data: 36864k Jan 13 21:16:48.806800 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 21:16:48.806807 kernel: Run /init as init process Jan 13 21:16:48.806818 kernel: with arguments: Jan 13 21:16:48.806828 kernel: /init Jan 13 21:16:48.806836 kernel: with environment: Jan 13 21:16:48.806842 kernel: HOME=/ Jan 13 21:16:48.806848 kernel: TERM=linux Jan 13 21:16:48.806854 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:16:48.806864 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:16:48.806874 systemd[1]: Detected virtualization vmware. Jan 13 21:16:48.806885 systemd[1]: Detected architecture x86-64. Jan 13 21:16:48.806892 systemd[1]: Running in initrd. Jan 13 21:16:48.806898 systemd[1]: No hostname configured, using default hostname. Jan 13 21:16:48.806904 systemd[1]: Hostname set to . Jan 13 21:16:48.806911 systemd[1]: Initializing machine ID from random generator. Jan 13 21:16:48.806918 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:16:48.806925 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:16:48.806931 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 13 21:16:48.806940 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:16:48.806946 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:16:48.806953 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:16:48.806959 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:16:48.806967 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:16:48.806974 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:16:48.806981 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:16:48.806988 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:16:48.806994 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:16:48.807002 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:16:48.807009 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:16:48.807015 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:16:48.807024 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:16:48.807032 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:16:48.807042 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:16:48.807051 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:16:48.807058 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:16:48.807064 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:16:48.807071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 13 21:16:48.807077 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:16:48.807083 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:16:48.807090 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:16:48.807096 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:16:48.807103 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:16:48.807110 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:16:48.807117 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:16:48.807123 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:16:48.807132 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:16:48.807154 systemd-journald[215]: Collecting audit messages is disabled. Jan 13 21:16:48.807172 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:16:48.807182 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:16:48.807189 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:16:48.807202 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:16:48.807209 kernel: Bridge firewalling registered Jan 13 21:16:48.807215 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:16:48.807222 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:16:48.807231 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:16:48.807237 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:16:48.807244 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 13 21:16:48.807253 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:16:48.807262 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:16:48.807269 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:16:48.807276 systemd-journald[215]: Journal started Jan 13 21:16:48.807292 systemd-journald[215]: Runtime Journal (/run/log/journal/f447d01e665249af8706910bbb270349) is 4.8M, max 38.6M, 33.8M free. Jan 13 21:16:48.756158 systemd-modules-load[216]: Inserted module 'overlay' Jan 13 21:16:48.772607 systemd-modules-load[216]: Inserted module 'br_netfilter' Jan 13 21:16:48.817224 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:16:48.817252 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:16:48.816953 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:16:48.820134 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:16:48.824637 dracut-cmdline[238]: dracut-dracut-053 Jan 13 21:16:48.828382 dracut-cmdline[238]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:16:48.829313 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:16:48.835821 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:16:48.851696 systemd-resolved[264]: Positive Trust Anchors: Jan 13 21:16:48.851705 systemd-resolved[264]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:16:48.851728 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:16:48.853690 systemd-resolved[264]: Defaulting to hostname 'linux'. Jan 13 21:16:48.854223 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:16:48.854356 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:16:48.877484 kernel: SCSI subsystem initialized Jan 13 21:16:48.883481 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:16:48.891486 kernel: iscsi: registered transport (tcp) Jan 13 21:16:48.906655 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:16:48.906697 kernel: QLogic iSCSI HBA Driver Jan 13 21:16:48.929583 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:16:48.933693 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:16:48.948640 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 21:16:48.948673 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:16:48.949949 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:16:48.983487 kernel: raid6: avx2x4 gen() 40709 MB/s Jan 13 21:16:49.000493 kernel: raid6: avx2x2 gen() 40075 MB/s Jan 13 21:16:49.017663 kernel: raid6: avx2x1 gen() 39785 MB/s Jan 13 21:16:49.017685 kernel: raid6: using algorithm avx2x4 gen() 40709 MB/s Jan 13 21:16:49.035679 kernel: raid6: .... xor() 21642 MB/s, rmw enabled Jan 13 21:16:49.035708 kernel: raid6: using avx2x2 recovery algorithm Jan 13 21:16:49.049489 kernel: xor: automatically using best checksumming function avx Jan 13 21:16:49.152498 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:16:49.158008 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:16:49.162562 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:16:49.170172 systemd-udevd[433]: Using default interface naming scheme 'v255'. Jan 13 21:16:49.172977 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:16:49.176652 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:16:49.185278 dracut-pre-trigger[442]: rd.md=0: removing MD RAID activation Jan 13 21:16:49.199865 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:16:49.201604 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:16:49.275080 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:16:49.278563 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:16:49.294508 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:16:49.295058 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 13 21:16:49.295972 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:16:49.296255 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:16:49.302644 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:16:49.310788 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:16:49.350486 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jan 13 21:16:49.358488 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jan 13 21:16:49.359668 kernel: vmw_pvscsi: using 64bit dma Jan 13 21:16:49.359687 kernel: vmw_pvscsi: max_id: 16 Jan 13 21:16:49.359695 kernel: vmw_pvscsi: setting ring_pages to 8 Jan 13 21:16:49.366873 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jan 13 21:16:49.382330 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 21:16:49.382343 kernel: vmw_pvscsi: enabling reqCallThreshold Jan 13 21:16:49.382355 kernel: vmw_pvscsi: driver-based request coalescing enabled Jan 13 21:16:49.382364 kernel: vmw_pvscsi: using MSI-X Jan 13 21:16:49.382373 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jan 13 21:16:49.382490 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jan 13 21:16:49.382512 kernel: libata version 3.00 loaded. Jan 13 21:16:49.381140 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:16:49.381237 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:16:49.381456 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:16:49.381577 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:16:49.381655 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:16:49.381784 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 21:16:49.386223 kernel: ata_piix 0000:00:07.1: version 2.13 Jan 13 21:16:49.395655 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jan 13 21:16:49.395754 kernel: scsi host1: ata_piix Jan 13 21:16:49.395826 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jan 13 21:16:49.395904 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jan 13 21:16:49.395973 kernel: scsi host2: ata_piix Jan 13 21:16:49.396034 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jan 13 21:16:49.396047 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jan 13 21:16:49.396055 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 21:16:49.396062 kernel: AES CTR mode by8 optimization enabled Jan 13 21:16:49.386863 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:16:49.406417 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:16:49.411594 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:16:49.424984 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 21:16:49.549490 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jan 13 21:16:49.564543 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jan 13 21:16:49.580510 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jan 13 21:16:49.586640 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 13 21:16:49.586717 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jan 13 21:16:49.586780 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jan 13 21:16:49.586841 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jan 13 21:16:49.586900 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:16:49.586909 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 13 21:16:49.588806 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jan 13 21:16:49.601328 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 21:16:49.601345 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 21:16:49.654487 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (492) Jan 13 21:16:49.656735 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jan 13 21:16:49.659520 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jan 13 21:16:49.661475 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (489) Jan 13 21:16:49.664406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jan 13 21:16:49.667799 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jan 13 21:16:49.668063 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jan 13 21:16:49.674113 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 13 21:16:49.724495 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:16:49.728475 kernel: GPT:disk_guids don't match. Jan 13 21:16:49.728502 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:16:49.728510 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:16:50.738510 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:16:50.738776 disk-uuid[589]: The operation has completed successfully. Jan 13 21:16:50.787237 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:16:50.787305 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:16:50.792564 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:16:50.794599 sh[608]: Success Jan 13 21:16:50.802486 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 13 21:16:50.897157 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:16:50.903333 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:16:50.903686 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 21:16:50.924488 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 21:16:50.924526 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:16:50.924535 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:16:50.924542 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:16:50.924715 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:16:50.943487 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 21:16:50.957421 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:16:50.967606 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... 
Jan 13 21:16:50.968926 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:16:50.985619 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:16:50.985662 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:16:50.985671 kernel: BTRFS info (device sda6): using free space tree Jan 13 21:16:51.012480 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 21:16:51.017667 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:16:51.019481 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:16:51.021242 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:16:51.029633 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:16:51.063297 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 13 21:16:51.068552 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 13 21:16:51.136391 ignition[668]: Ignition 2.19.0 Jan 13 21:16:51.136398 ignition[668]: Stage: fetch-offline Jan 13 21:16:51.136416 ignition[668]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:16:51.136421 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 13 21:16:51.136496 ignition[668]: parsed url from cmdline: "" Jan 13 21:16:51.136498 ignition[668]: no config URL provided Jan 13 21:16:51.136502 ignition[668]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:16:51.136507 ignition[668]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:16:51.136868 ignition[668]: config successfully fetched Jan 13 21:16:51.136884 ignition[668]: parsing config with SHA512: bd47f444a28c99b0c5641d0e125045736d2be2f4007d9c01d2edfe57f86013ccb59b242afb0c907ae4258660c8e4d986a984ed0284c78c49fbc670d1fe240b79 Jan 13 21:16:51.138967 unknown[668]: fetched base config from "system" Jan 13 21:16:51.139205 ignition[668]: fetch-offline: fetch-offline passed Jan 13 21:16:51.138973 unknown[668]: fetched user config from "vmware" Jan 13 21:16:51.139241 ignition[668]: Ignition finished successfully Jan 13 21:16:51.139909 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:16:51.146537 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:16:51.152633 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:16:51.164101 systemd-networkd[802]: lo: Link UP Jan 13 21:16:51.164108 systemd-networkd[802]: lo: Gained carrier Jan 13 21:16:51.164820 systemd-networkd[802]: Enumeration completed Jan 13 21:16:51.164997 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:16:51.165074 systemd-networkd[802]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jan 13 21:16:51.165149 systemd[1]: Reached target network.target - Network. 
Jan 13 21:16:51.165236 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 21:16:51.168580 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jan 13 21:16:51.168690 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jan 13 21:16:51.168308 systemd-networkd[802]: ens192: Link UP Jan 13 21:16:51.168310 systemd-networkd[802]: ens192: Gained carrier Jan 13 21:16:51.172603 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:16:51.180617 ignition[805]: Ignition 2.19.0 Jan 13 21:16:51.180625 ignition[805]: Stage: kargs Jan 13 21:16:51.180725 ignition[805]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:16:51.180731 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 13 21:16:51.181257 ignition[805]: kargs: kargs passed Jan 13 21:16:51.181283 ignition[805]: Ignition finished successfully Jan 13 21:16:51.182363 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:16:51.186581 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:16:51.193909 ignition[812]: Ignition 2.19.0 Jan 13 21:16:51.194256 ignition[812]: Stage: disks Jan 13 21:16:51.194530 ignition[812]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:16:51.194536 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 13 21:16:51.195450 ignition[812]: disks: disks passed Jan 13 21:16:51.195509 ignition[812]: Ignition finished successfully Jan 13 21:16:51.196463 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:16:51.196809 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:16:51.196920 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:16:51.197031 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 13 21:16:51.197126 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:16:51.197222 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:16:51.201569 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:16:51.212387 systemd-fsck[821]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 21:16:51.213747 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:16:51.218546 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:16:51.313517 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 13 21:16:51.313754 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:16:51.314242 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:16:51.319531 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:16:51.320521 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:16:51.321600 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:16:51.321632 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:16:51.321649 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:16:51.325303 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:16:51.326652 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 13 21:16:51.328388 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (829) Jan 13 21:16:51.328410 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:16:51.329739 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:16:51.329760 kernel: BTRFS info (device sda6): using free space tree Jan 13 21:16:51.333863 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 21:16:51.334138 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:16:51.360873 initrd-setup-root[853]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:16:51.363497 initrd-setup-root[860]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:16:51.365935 initrd-setup-root[867]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:16:51.368184 initrd-setup-root[874]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:16:51.428517 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:16:51.432596 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:16:51.435070 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:16:51.438550 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:16:51.452537 ignition[942]: INFO : Ignition 2.19.0 Jan 13 21:16:51.452537 ignition[942]: INFO : Stage: mount Jan 13 21:16:51.452537 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:16:51.452537 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 13 21:16:51.453828 ignition[942]: INFO : mount: mount passed Jan 13 21:16:51.453828 ignition[942]: INFO : Ignition finished successfully Jan 13 21:16:51.453151 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:16:51.454016 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jan 13 21:16:51.457558 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:16:51.920956 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:16:51.926598 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:16:51.936482 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (954)
Jan 13 21:16:51.938664 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:16:51.938681 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:16:51.938689 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:16:51.942479 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:16:51.943121 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:16:51.955043 ignition[971]: INFO : Ignition 2.19.0
Jan 13 21:16:51.955313 ignition[971]: INFO : Stage: files
Jan 13 21:16:51.955417 ignition[971]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:16:51.955417 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 13 21:16:51.955952 ignition[971]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:16:51.958311 ignition[971]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:16:51.958311 ignition[971]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:16:51.962763 ignition[971]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:16:51.962998 ignition[971]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:16:51.963350 unknown[971]: wrote ssh authorized keys file for user: core
Jan 13 21:16:51.963764 ignition[971]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:16:51.964752 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:16:51.965264 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 21:16:52.006747 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:16:52.178800 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:16:52.178800 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:16:52.179284 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:16:52.179284 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:16:52.179284 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:16:52.179284 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:16:52.179284 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:16:52.179284 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:16:52.179284 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:16:52.180634 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:16:52.180634 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:16:52.180634 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:16:52.180634 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:16:52.180634 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:16:52.180634 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 13 21:16:52.668316 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 21:16:52.943171 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:16:52.943171 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jan 13 21:16:52.943697 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jan 13 21:16:52.943697 ignition[971]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 21:16:52.943697 ignition[971]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:16:52.944178 ignition[971]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:16:52.944178 ignition[971]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 21:16:52.944178 ignition[971]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 13 21:16:52.944178 ignition[971]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:16:52.944178 ignition[971]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:16:52.944178 ignition[971]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 13 21:16:52.944178 ignition[971]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:16:52.983193 ignition[971]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:16:52.985492 ignition[971]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:16:52.985492 ignition[971]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:16:52.985492 ignition[971]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:16:52.985492 ignition[971]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:16:52.985492 ignition[971]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:16:52.985492 ignition[971]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:16:52.985492 ignition[971]: INFO : files: files passed
Jan 13 21:16:52.985492 ignition[971]: INFO : Ignition finished successfully
Jan 13 21:16:52.987020 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:16:52.990563 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:16:52.991670 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:16:52.992909 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:16:52.993081 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:16:52.997581 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:16:52.997581 initrd-setup-root-after-ignition[1001]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:16:52.998653 initrd-setup-root-after-ignition[1005]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:16:52.999392 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:16:52.999827 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:16:53.002575 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:16:53.013737 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:16:53.013792 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:16:53.014178 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:16:53.014311 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:16:53.014523 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:16:53.014934 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:16:53.023798 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:16:53.028550 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:16:53.033399 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:16:53.033566 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:16:53.033798 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:16:53.033986 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:16:53.034052 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:16:53.034400 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:16:53.034571 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:16:53.034747 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:16:53.034931 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:16:53.035127 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:16:53.035498 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:16:53.035663 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:16:53.035867 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:16:53.036069 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:16:53.036253 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:16:53.036409 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:16:53.036525 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:16:53.036784 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:16:53.037015 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:16:53.037193 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:16:53.037232 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:16:53.037407 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:16:53.037465 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:16:53.037822 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:16:53.037886 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:16:53.038111 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:16:53.038240 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:16:53.043486 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:16:53.043665 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:16:53.043857 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:16:53.044047 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:16:53.044115 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:16:53.044299 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:16:53.044344 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:16:53.044583 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:16:53.044642 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:16:53.044883 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:16:53.044938 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:16:53.053559 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:16:53.053663 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:16:53.053727 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:16:53.055294 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:16:53.055408 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:16:53.055504 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:16:53.055784 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:16:53.055862 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:16:53.059240 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:16:53.059571 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:16:53.062474 ignition[1025]: INFO : Ignition 2.19.0
Jan 13 21:16:53.062474 ignition[1025]: INFO : Stage: umount
Jan 13 21:16:53.062474 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:16:53.062474 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 13 21:16:53.065072 ignition[1025]: INFO : umount: umount passed
Jan 13 21:16:53.065072 ignition[1025]: INFO : Ignition finished successfully
Jan 13 21:16:53.064023 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:16:53.064082 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:16:53.064257 systemd[1]: Stopped target network.target - Network.
Jan 13 21:16:53.064350 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:16:53.064375 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:16:53.064500 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:16:53.064522 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:16:53.064666 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:16:53.064687 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:16:53.064820 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:16:53.064840 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:16:53.065050 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:16:53.065203 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:16:53.070046 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:16:53.070102 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:16:53.070463 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:16:53.070504 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:16:53.075221 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:16:53.075361 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:16:53.075390 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:16:53.075641 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jan 13 21:16:53.075665 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jan 13 21:16:53.075853 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:16:53.077416 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:16:53.077760 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:16:53.077810 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:16:53.079784 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:16:53.079833 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:16:53.080299 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:16:53.080322 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:16:53.080717 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:16:53.080895 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:16:53.083809 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:16:53.083873 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:16:53.090797 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:16:53.090877 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:16:53.091149 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:16:53.091173 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:16:53.091384 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:16:53.091400 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:16:53.091565 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:16:53.091587 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:16:53.091856 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:16:53.091877 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:16:53.092316 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:16:53.092337 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:16:53.096547 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:16:53.096653 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:16:53.096679 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:16:53.096805 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:16:53.096828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:16:53.099352 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:16:53.099404 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:16:53.157986 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:16:53.158043 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:16:53.158402 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:16:53.158530 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:16:53.158558 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:16:53.161558 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:16:53.179563 systemd[1]: Switching root.
Jan 13 21:16:53.230573 systemd-journald[215]: Journal stopped
Jan 13 21:16:48.745557 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:16:48.745577 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:16:48.745587 kernel: Disabled fast string operations
Jan 13 21:16:48.745593 kernel: BIOS-provided physical RAM map:
Jan 13 21:16:48.745597 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jan 13 21:16:48.745601 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jan 13 21:16:48.745608 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jan 13 21:16:48.745612 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jan 13 21:16:48.745616 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jan 13 21:16:48.745620 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jan 13 21:16:48.745624 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jan 13 21:16:48.745629 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jan 13 21:16:48.745633 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jan 13 21:16:48.745637 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 13 21:16:48.745643 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jan 13 21:16:48.745648 kernel: NX (Execute Disable) protection: active
Jan 13 21:16:48.745653 kernel: APIC: Static calls initialized
Jan 13 21:16:48.745658 kernel: SMBIOS 2.7 present.
Jan 13 21:16:48.745663 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Jan 13 21:16:48.745667 kernel: vmware: hypercall mode: 0x00
Jan 13 21:16:48.745673 kernel: Hypervisor detected: VMware
Jan 13 21:16:48.745681 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Jan 13 21:16:48.745688 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Jan 13 21:16:48.745695 kernel: vmware: using clock offset of 4504381093 ns
Jan 13 21:16:48.745701 kernel: tsc: Detected 3408.000 MHz processor
Jan 13 21:16:48.745706 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:16:48.745712 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:16:48.745717 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Jan 13 21:16:48.745721 kernel: total RAM covered: 3072M
Jan 13 21:16:48.745726 kernel: Found optimal setting for mtrr clean up
Jan 13 21:16:48.745733 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Jan 13 21:16:48.745741 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Jan 13 21:16:48.745749 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:16:48.745757 kernel: Using GB pages for direct mapping
Jan 13 21:16:48.745765 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:16:48.745770 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Jan 13 21:16:48.745775 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Jan 13 21:16:48.745780 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Jan 13 21:16:48.745785 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Jan 13 21:16:48.745793 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jan 13 21:16:48.745806 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jan 13 21:16:48.745813 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Jan 13 21:16:48.745818 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Jan 13 21:16:48.745827 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Jan 13 21:16:48.745833 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Jan 13 21:16:48.745840 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Jan 13 21:16:48.745849 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Jan 13 21:16:48.745855 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Jan 13 21:16:48.745863 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Jan 13 21:16:48.745871 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jan 13 21:16:48.745876 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jan 13 21:16:48.745881 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Jan 13 21:16:48.745886 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Jan 13 21:16:48.745892 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Jan 13 21:16:48.745897 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Jan 13 21:16:48.745905 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Jan 13 21:16:48.745911 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Jan 13 21:16:48.745916 kernel: system APIC only can use physical flat
Jan 13 21:16:48.745922 kernel: APIC: Switched APIC routing to: physical flat
Jan 13 21:16:48.745927 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 21:16:48.745932 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 13 21:16:48.745937 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 13 21:16:48.745942 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 13 21:16:48.745951 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 13 21:16:48.745958 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 13 21:16:48.745963 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 13 21:16:48.745968 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 13 21:16:48.745973 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Jan 13 21:16:48.745981 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Jan 13 21:16:48.745988 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Jan 13 21:16:48.745995 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Jan 13 21:16:48.746004 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Jan 13 21:16:48.746012 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Jan 13 21:16:48.746020 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Jan 13 21:16:48.746027 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Jan 13 21:16:48.746032 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Jan 13 21:16:48.746037 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Jan 13 21:16:48.746042 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Jan 13 21:16:48.746047 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Jan 13 21:16:48.746052 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Jan 13 21:16:48.746057 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Jan 13 21:16:48.746062 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Jan 13 21:16:48.746068 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Jan 13 21:16:48.746074 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Jan 13 21:16:48.746081 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Jan 13 21:16:48.746086 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Jan 13 21:16:48.746091 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Jan 13 21:16:48.746096 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Jan 13 21:16:48.746101 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Jan 13 21:16:48.746109 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Jan 13 21:16:48.746118 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Jan 13 21:16:48.746127 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Jan 13 21:16:48.746132 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Jan 13 21:16:48.746138 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Jan 13 21:16:48.746148 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Jan 13 21:16:48.746153 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Jan 13 21:16:48.746160 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Jan 13 21:16:48.746166 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Jan 13 21:16:48.746172 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Jan 13 21:16:48.746177 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Jan 13 21:16:48.746182 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Jan 13 21:16:48.746187 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Jan 13 21:16:48.746192 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Jan 13 21:16:48.746197 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Jan 13 21:16:48.746203 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Jan 13 21:16:48.746208 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Jan 13 21:16:48.746213 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Jan 13 21:16:48.746218 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Jan 13 21:16:48.746223 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Jan 13 21:16:48.746228 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Jan 13 21:16:48.746233 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Jan 13 21:16:48.746238 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Jan 13 21:16:48.746243 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Jan 13 21:16:48.746248 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Jan 13 21:16:48.746254 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Jan 13 21:16:48.746259 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Jan 13 21:16:48.746264 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Jan 13 21:16:48.746270 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Jan 13 21:16:48.746279 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Jan 13 21:16:48.746288 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Jan 13 21:16:48.746293 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Jan 13 21:16:48.746300 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Jan 13 21:16:48.746308 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Jan 13 21:16:48.746319 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Jan 13 21:16:48.746329 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Jan 13 21:16:48.746336 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Jan 13 21:16:48.746344 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Jan 13 21:16:48.746349 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Jan 13 21:16:48.746355 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Jan 13 21:16:48.746360 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Jan 13 21:16:48.746368 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Jan 13 21:16:48.746377 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Jan 13 21:16:48.746387 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Jan 13 21:16:48.746392 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Jan 13 21:16:48.746398 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Jan 13 21:16:48.746403 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Jan 13 21:16:48.746409 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Jan 13 21:16:48.746414 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Jan 13 21:16:48.746419 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Jan 13 21:16:48.746428 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Jan 13 21:16:48.746435 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Jan 13 21:16:48.746440 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Jan 13 21:16:48.746447 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Jan 13 21:16:48.746454 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Jan 13 21:16:48.746463 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Jan 13 21:16:48.746494 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Jan 13 21:16:48.746505 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Jan 13 21:16:48.746511 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Jan 13 21:16:48.746517 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Jan 13 21:16:48.746522 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Jan 13 21:16:48.746527 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Jan 13 21:16:48.746533 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Jan 13 21:16:48.746542 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Jan 13 21:16:48.746548 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Jan 13 21:16:48.746554 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Jan 13 21:16:48.746559 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Jan 13 21:16:48.746565 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Jan 13 21:16:48.746570 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Jan 13 21:16:48.746575 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Jan 13 21:16:48.746582 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Jan 13 21:16:48.746590 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Jan 13 21:16:48.746595 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Jan 13 21:16:48.746602 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Jan 13 21:16:48.746608 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Jan 13 21:16:48.746614 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Jan 13 21:16:48.746622 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Jan 13 21:16:48.746631 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Jan 13 21:16:48.746641 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Jan 13 21:16:48.746651 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Jan 13 21:16:48.746657 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Jan 13 21:16:48.746663 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Jan 13 21:16:48.746670 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Jan 13 21:16:48.746682 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Jan 13 21:16:48.746687 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Jan 13 21:16:48.746693 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Jan 13 21:16:48.746698 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Jan 13 21:16:48.746704 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Jan 13 21:16:48.746709 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Jan 13 21:16:48.746714 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Jan 13 21:16:48.746720 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Jan 13 21:16:48.746725 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Jan 13 21:16:48.746730 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Jan 13 21:16:48.746737 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Jan 13 21:16:48.746743 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Jan 13 21:16:48.746748 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Jan 13 21:16:48.746753 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Jan 13 21:16:48.746759 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Jan 13 21:16:48.746768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 13 21:16:48.746774 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 13 21:16:48.746782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jan 13 21:16:48.746789 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Jan 13 21:16:48.746796 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Jan 13 21:16:48.746802 kernel: Zone ranges:
Jan 13 21:16:48.746807 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:16:48.746813 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Jan 13 21:16:48.746818 kernel: Normal empty
Jan 13 21:16:48.746824 kernel: Movable zone start for each node
Jan 13 21:16:48.746831 kernel: Early memory node ranges
Jan 13 21:16:48.746837 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Jan 13 21:16:48.746846 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Jan 13 21:16:48.746855 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Jan 13 21:16:48.746866 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jan 13 21:16:48.746872 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:16:48.746880 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jan 13 21:16:48.746886 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jan 13 21:16:48.746891 kernel: ACPI: PM-Timer IO Port: 0x1008
Jan 13 21:16:48.746897 kernel: system APIC only can use physical flat
Jan 13 21:16:48.746902 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jan 13 21:16:48.746908 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 13 21:16:48.746913 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 13 21:16:48.746920 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 13 21:16:48.746926 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 13 21:16:48.746935 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 13 21:16:48.746942 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 13 21:16:48.746951 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 13 21:16:48.746959 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 13 21:16:48.746967 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 13 21:16:48.746974 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 13 21:16:48.746979 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 13 21:16:48.746985 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 13 21:16:48.746992 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 13 21:16:48.746997 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 13 21:16:48.747005 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 13 21:16:48.747011 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge
lint[0x1]) Jan 13 21:16:48.747016 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jan 13 21:16:48.747021 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jan 13 21:16:48.747027 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jan 13 21:16:48.747033 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jan 13 21:16:48.747038 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jan 13 21:16:48.747048 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jan 13 21:16:48.747054 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jan 13 21:16:48.747059 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jan 13 21:16:48.747065 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jan 13 21:16:48.747070 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jan 13 21:16:48.747078 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jan 13 21:16:48.747085 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jan 13 21:16:48.747094 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jan 13 21:16:48.747103 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jan 13 21:16:48.747112 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jan 13 21:16:48.747119 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jan 13 21:16:48.747125 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jan 13 21:16:48.747130 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jan 13 21:16:48.747136 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jan 13 21:16:48.747141 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jan 13 21:16:48.747148 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jan 13 21:16:48.747155 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jan 13 21:16:48.747160 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jan 13 21:16:48.747166 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge 
lint[0x1]) Jan 13 21:16:48.747171 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jan 13 21:16:48.747178 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jan 13 21:16:48.747184 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jan 13 21:16:48.747194 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jan 13 21:16:48.747203 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jan 13 21:16:48.747212 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jan 13 21:16:48.747219 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jan 13 21:16:48.747226 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jan 13 21:16:48.747232 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jan 13 21:16:48.747241 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jan 13 21:16:48.747248 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jan 13 21:16:48.747254 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jan 13 21:16:48.747259 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jan 13 21:16:48.747265 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jan 13 21:16:48.747270 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jan 13 21:16:48.747275 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jan 13 21:16:48.747281 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jan 13 21:16:48.747286 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jan 13 21:16:48.747292 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jan 13 21:16:48.747297 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jan 13 21:16:48.747304 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jan 13 21:16:48.747309 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jan 13 21:16:48.747314 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jan 13 21:16:48.747320 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge 
lint[0x1]) Jan 13 21:16:48.747325 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jan 13 21:16:48.747330 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jan 13 21:16:48.747336 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jan 13 21:16:48.747341 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jan 13 21:16:48.747346 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jan 13 21:16:48.747351 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jan 13 21:16:48.747360 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jan 13 21:16:48.747366 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jan 13 21:16:48.747372 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jan 13 21:16:48.747380 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jan 13 21:16:48.747388 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jan 13 21:16:48.747398 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jan 13 21:16:48.747407 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jan 13 21:16:48.747417 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jan 13 21:16:48.747428 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jan 13 21:16:48.747436 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jan 13 21:16:48.747442 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jan 13 21:16:48.747447 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jan 13 21:16:48.747453 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jan 13 21:16:48.747459 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jan 13 21:16:48.747472 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jan 13 21:16:48.747479 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jan 13 21:16:48.747486 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jan 13 21:16:48.747494 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge 
lint[0x1]) Jan 13 21:16:48.747499 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jan 13 21:16:48.747506 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jan 13 21:16:48.747512 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jan 13 21:16:48.747517 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jan 13 21:16:48.747523 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jan 13 21:16:48.747530 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jan 13 21:16:48.747538 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jan 13 21:16:48.747547 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jan 13 21:16:48.747552 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jan 13 21:16:48.747558 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jan 13 21:16:48.747563 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jan 13 21:16:48.747570 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jan 13 21:16:48.747575 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jan 13 21:16:48.747582 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jan 13 21:16:48.747589 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jan 13 21:16:48.747594 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jan 13 21:16:48.747600 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jan 13 21:16:48.747605 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jan 13 21:16:48.747610 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jan 13 21:16:48.747616 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jan 13 21:16:48.747625 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jan 13 21:16:48.747632 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jan 13 21:16:48.747637 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jan 13 21:16:48.747643 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge 
lint[0x1]) Jan 13 21:16:48.747648 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jan 13 21:16:48.747657 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jan 13 21:16:48.747665 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jan 13 21:16:48.747672 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jan 13 21:16:48.747682 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jan 13 21:16:48.747691 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jan 13 21:16:48.747698 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jan 13 21:16:48.747704 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jan 13 21:16:48.747712 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jan 13 21:16:48.747721 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jan 13 21:16:48.747727 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jan 13 21:16:48.747732 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jan 13 21:16:48.747738 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jan 13 21:16:48.747744 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jan 13 21:16:48.747749 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jan 13 21:16:48.747754 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jan 13 21:16:48.747761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jan 13 21:16:48.747767 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 21:16:48.747772 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jan 13 21:16:48.747778 kernel: TSC deadline timer available Jan 13 21:16:48.747783 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jan 13 21:16:48.747789 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jan 13 21:16:48.747809 kernel: Booting paravirtualized kernel on VMware hypervisor Jan 13 21:16:48.747819 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 
0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 21:16:48.747825 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jan 13 21:16:48.747833 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 13 21:16:48.747838 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 13 21:16:48.747844 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jan 13 21:16:48.747849 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jan 13 21:16:48.747854 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jan 13 21:16:48.747860 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jan 13 21:16:48.747866 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jan 13 21:16:48.747884 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jan 13 21:16:48.747896 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jan 13 21:16:48.747906 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jan 13 21:16:48.747915 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jan 13 21:16:48.747922 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jan 13 21:16:48.747927 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jan 13 21:16:48.747933 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jan 13 21:16:48.747938 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jan 13 21:16:48.747944 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jan 13 21:16:48.747950 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jan 13 21:16:48.747957 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jan 13 21:16:48.747967 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin 
verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:16:48.747976 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 21:16:48.747984 kernel: random: crng init done Jan 13 21:16:48.747991 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jan 13 21:16:48.748001 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jan 13 21:16:48.748009 kernel: printk: log_buf_len min size: 262144 bytes Jan 13 21:16:48.748015 kernel: printk: log_buf_len: 1048576 bytes Jan 13 21:16:48.748023 kernel: printk: early log buf free: 239648(91%) Jan 13 21:16:48.748029 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 21:16:48.748035 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 13 21:16:48.748041 kernel: Fallback order for Node 0: 0 Jan 13 21:16:48.748049 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jan 13 21:16:48.748055 kernel: Policy zone: DMA32 Jan 13 21:16:48.748061 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 21:16:48.748067 kernel: Memory: 1936376K/2096628K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 159992K reserved, 0K cma-reserved) Jan 13 21:16:48.748075 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jan 13 21:16:48.748081 kernel: ftrace: allocating 37918 entries in 149 pages Jan 13 21:16:48.748090 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 21:16:48.748098 kernel: Dynamic Preempt: voluntary Jan 13 21:16:48.748104 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 21:16:48.748110 kernel: rcu: RCU event tracing is enabled. Jan 13 21:16:48.748116 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jan 13 21:16:48.748127 kernel: Trampoline variant of Tasks RCU enabled. 
Jan 13 21:16:48.748136 kernel: Rude variant of Tasks RCU enabled. Jan 13 21:16:48.748146 kernel: Tracing variant of Tasks RCU enabled. Jan 13 21:16:48.748156 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 21:16:48.748162 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jan 13 21:16:48.748167 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jan 13 21:16:48.748173 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jan 13 21:16:48.748179 kernel: Console: colour VGA+ 80x25 Jan 13 21:16:48.748185 kernel: printk: console [tty0] enabled Jan 13 21:16:48.748191 kernel: printk: console [ttyS0] enabled Jan 13 21:16:48.748201 kernel: ACPI: Core revision 20230628 Jan 13 21:16:48.748207 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jan 13 21:16:48.748213 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 21:16:48.748219 kernel: x2apic enabled Jan 13 21:16:48.748225 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 21:16:48.748234 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 13 21:16:48.748243 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 13 21:16:48.748253 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jan 13 21:16:48.748262 kernel: Disabled fast string operations Jan 13 21:16:48.748273 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 13 21:16:48.748279 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 13 21:16:48.748288 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 21:16:48.748294 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 13 21:16:48.748300 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 13 21:16:48.748307 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 13 21:16:48.748312 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 21:16:48.748318 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 13 21:16:48.748324 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 13 21:16:48.748332 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 21:16:48.748338 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 21:16:48.748343 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 13 21:16:48.748349 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 13 21:16:48.748355 kernel: GDS: Unknown: Dependent on hypervisor status Jan 13 21:16:48.748361 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 21:16:48.748367 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 21:16:48.748373 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 21:16:48.748379 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 21:16:48.748386 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jan 13 21:16:48.748392 kernel: Freeing SMP alternatives memory: 32K Jan 13 21:16:48.748398 kernel: pid_max: default: 131072 minimum: 1024 Jan 13 21:16:48.748406 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 21:16:48.748412 kernel: landlock: Up and running. Jan 13 21:16:48.748422 kernel: SELinux: Initializing. Jan 13 21:16:48.748428 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 13 21:16:48.748438 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 13 21:16:48.748450 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 13 21:16:48.748462 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 13 21:16:48.748483 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 13 21:16:48.748495 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 13 21:16:48.748501 kernel: Performance Events: Skylake events, core PMU driver. Jan 13 21:16:48.748508 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jan 13 21:16:48.748515 kernel: core: CPUID marked event: 'instructions' unavailable Jan 13 21:16:48.748521 kernel: core: CPUID marked event: 'bus cycles' unavailable Jan 13 21:16:48.748526 kernel: core: CPUID marked event: 'cache references' unavailable Jan 13 21:16:48.748535 kernel: core: CPUID marked event: 'cache misses' unavailable Jan 13 21:16:48.748540 kernel: core: CPUID marked event: 'branch instructions' unavailable Jan 13 21:16:48.748550 kernel: core: CPUID marked event: 'branch misses' unavailable Jan 13 21:16:48.748556 kernel: ... version: 1 Jan 13 21:16:48.748562 kernel: ... bit width: 48 Jan 13 21:16:48.748567 kernel: ... generic registers: 4 Jan 13 21:16:48.748573 kernel: ... value mask: 0000ffffffffffff Jan 13 21:16:48.748581 kernel: ... 
max period: 000000007fffffff Jan 13 21:16:48.748591 kernel: ... fixed-purpose events: 0 Jan 13 21:16:48.748600 kernel: ... event mask: 000000000000000f Jan 13 21:16:48.748609 kernel: signal: max sigframe size: 1776 Jan 13 21:16:48.748619 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:16:48.748626 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:16:48.748632 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 13 21:16:48.748637 kernel: smp: Bringing up secondary CPUs ... Jan 13 21:16:48.748643 kernel: smpboot: x86: Booting SMP configuration: Jan 13 21:16:48.748649 kernel: .... node #0, CPUs: #1 Jan 13 21:16:48.748655 kernel: Disabled fast string operations Jan 13 21:16:48.748665 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jan 13 21:16:48.748671 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 13 21:16:48.748677 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 21:16:48.748683 kernel: smpboot: Max logical packages: 128 Jan 13 21:16:48.748689 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jan 13 21:16:48.748694 kernel: devtmpfs: initialized Jan 13 21:16:48.748704 kernel: x86/mm: Memory block size: 128MB Jan 13 21:16:48.748710 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jan 13 21:16:48.748716 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:16:48.748722 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jan 13 21:16:48.748731 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:16:48.748740 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:16:48.748748 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:16:48.748759 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:16:48.748769 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 21:16:48.748775 kernel: audit: type=2000 
audit(1736803006.070:1): state=initialized audit_enabled=0 res=1 Jan 13 21:16:48.748781 kernel: cpuidle: using governor menu Jan 13 21:16:48.748792 kernel: Simple Boot Flag at 0x36 set to 0x80 Jan 13 21:16:48.748802 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:16:48.748810 kernel: dca service started, version 1.12.1 Jan 13 21:16:48.748815 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jan 13 21:16:48.748822 kernel: PCI: Using configuration type 1 for base access Jan 13 21:16:48.748829 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 13 21:16:48.748834 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 21:16:48.748840 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 21:16:48.748846 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:16:48.748852 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:16:48.748858 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:16:48.748865 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:16:48.748872 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:16:48.748881 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:16:48.748886 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 21:16:48.748895 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 13 21:16:48.748902 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 21:16:48.748908 kernel: ACPI: Interpreter enabled Jan 13 21:16:48.748914 kernel: ACPI: PM: (supports S0 S1 S5) Jan 13 21:16:48.748919 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 21:16:48.748928 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 21:16:48.748933 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 21:16:48.748939 kernel: ACPI: Enabled 
4 GPEs in block 00 to 0F Jan 13 21:16:48.748945 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jan 13 21:16:48.749040 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:16:48.749114 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jan 13 21:16:48.749173 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 13 21:16:48.749185 kernel: PCI host bridge to bus 0000:00 Jan 13 21:16:48.749253 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 21:16:48.749310 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jan 13 21:16:48.749370 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 13 21:16:48.749416 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 21:16:48.749461 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jan 13 21:16:48.749547 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jan 13 21:16:48.749617 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jan 13 21:16:48.749688 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jan 13 21:16:48.749748 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jan 13 21:16:48.749818 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jan 13 21:16:48.749888 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jan 13 21:16:48.749943 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 13 21:16:48.750007 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 13 21:16:48.750076 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 13 21:16:48.750141 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 13 21:16:48.750202 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jan 13 21:16:48.750256 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] 
claimed by PIIX4 ACPI
Jan 13 21:16:48.750317 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Jan 13 21:16:48.750387 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
Jan 13 21:16:48.750451 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf]
Jan 13 21:16:48.750910 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
Jan 13 21:16:48.752544 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
Jan 13 21:16:48.752608 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f]
Jan 13 21:16:48.752664 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
Jan 13 21:16:48.752741 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
Jan 13 21:16:48.752804 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Jan 13 21:16:48.752857 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:16:48.752914 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Jan 13 21:16:48.752991 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.753054 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.753108 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.753174 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.753247 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.753316 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.753376 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.753439 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.753527 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.753586 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.753650 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.753702 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.753781 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.753838 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.753912 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.753971 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.754042 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.754111 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.754167 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.754227 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.754297 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.754367 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.754447 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.758572 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.758637 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.758693 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.758748 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.758803 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.758857 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.758909 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.758963 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.759014 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.759067 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.759117 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.759173 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.759224 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.759277 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.759328 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.759383 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.759434 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.760445 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.760525 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.760585 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.760637 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.760692 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.760743 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.760802 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.760853 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.760907 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.760958 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.761012 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.761062 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.761117 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.761171 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.761225 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.761276 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.761332 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.761383 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.761437 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.762555 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.762620 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.762673 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.762726 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
Jan 13 21:16:48.762777 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.762830 kernel: pci_bus 0000:01: extended config space not accessible
Jan 13 21:16:48.762886 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 13 21:16:48.762938 kernel: pci_bus 0000:02: extended config space not accessible
Jan 13 21:16:48.762947 kernel: acpiphp: Slot [32] registered
Jan 13 21:16:48.762953 kernel: acpiphp: Slot [33] registered
Jan 13 21:16:48.762959 kernel: acpiphp: Slot [34] registered
Jan 13 21:16:48.762965 kernel: acpiphp: Slot [35] registered
Jan 13 21:16:48.762971 kernel: acpiphp: Slot [36] registered
Jan 13 21:16:48.762977 kernel: acpiphp: Slot [37] registered
Jan 13 21:16:48.762985 kernel: acpiphp: Slot [38] registered
Jan 13 21:16:48.762991 kernel: acpiphp: Slot [39] registered
Jan 13 21:16:48.762997 kernel: acpiphp: Slot [40] registered
Jan 13 21:16:48.763003 kernel: acpiphp: Slot [41] registered
Jan 13 21:16:48.763009 kernel: acpiphp: Slot [42] registered
Jan 13 21:16:48.763014 kernel: acpiphp: Slot [43] registered
Jan 13 21:16:48.763020 kernel: acpiphp: Slot [44] registered
Jan 13 21:16:48.763026 kernel: acpiphp: Slot [45] registered
Jan 13 21:16:48.763032 kernel: acpiphp: Slot [46] registered
Jan 13 21:16:48.763038 kernel: acpiphp: Slot [47] registered
Jan 13 21:16:48.763045 kernel: acpiphp: Slot [48] registered
Jan 13 21:16:48.763051 kernel: acpiphp: Slot [49] registered
Jan 13 21:16:48.763056 kernel: acpiphp: Slot [50] registered
Jan 13 21:16:48.763062 kernel: acpiphp: Slot [51] registered
Jan 13 21:16:48.763068 kernel: acpiphp: Slot [52] registered
Jan 13 21:16:48.763074 kernel: acpiphp: Slot [53] registered
Jan 13 21:16:48.763080 kernel: acpiphp: Slot [54] registered
Jan 13 21:16:48.763086 kernel: acpiphp: Slot [55] registered
Jan 13 21:16:48.763092 kernel: acpiphp: Slot [56] registered
Jan 13 21:16:48.763099 kernel: acpiphp: Slot [57] registered
Jan 13 21:16:48.763104 kernel: acpiphp: Slot [58] registered
Jan 13 21:16:48.763110 kernel: acpiphp: Slot [59] registered
Jan 13 21:16:48.763116 kernel: acpiphp: Slot [60] registered
Jan 13 21:16:48.763122 kernel: acpiphp: Slot [61] registered
Jan 13 21:16:48.763128 kernel: acpiphp: Slot [62] registered
Jan 13 21:16:48.763134 kernel: acpiphp: Slot [63] registered
Jan 13 21:16:48.763185 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Jan 13 21:16:48.763235 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Jan 13 21:16:48.763287 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Jan 13 21:16:48.763337 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Jan 13 21:16:48.763386 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Jan 13 21:16:48.763441 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
Jan 13 21:16:48.764576 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Jan 13 21:16:48.764631 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Jan 13 21:16:48.764683 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
Jan 13 21:16:48.764742 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
Jan 13 21:16:48.764795 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
Jan 13 21:16:48.764846 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
Jan 13 21:16:48.764898 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Jan 13 21:16:48.764949 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Jan 13 21:16:48.765000 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Jan 13 21:16:48.765052 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Jan 13 21:16:48.765103 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Jan 13 21:16:48.765156 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Jan 13 21:16:48.765207 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Jan 13 21:16:48.765258 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Jan 13 21:16:48.765308 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Jan 13 21:16:48.765358 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Jan 13 21:16:48.765410 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Jan 13 21:16:48.765460 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Jan 13 21:16:48.766536 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Jan 13 21:16:48.766587 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Jan 13 21:16:48.766639 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Jan 13 21:16:48.766689 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Jan 13 21:16:48.766739 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Jan 13 21:16:48.766789 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Jan 13 21:16:48.766840 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Jan 13 21:16:48.766889 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Jan 13 21:16:48.766944 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Jan 13 21:16:48.766995 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Jan 13 21:16:48.767045 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Jan 13 21:16:48.767096 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Jan 13 21:16:48.767150 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Jan 13 21:16:48.767200 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Jan 13 21:16:48.767251 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Jan 13 21:16:48.767301 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Jan 13 21:16:48.767351 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Jan 13 21:16:48.767407 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
Jan 13 21:16:48.768493 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
Jan 13 21:16:48.768551 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
Jan 13 21:16:48.768606 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
Jan 13 21:16:48.768657 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
Jan 13 21:16:48.768708 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Jan 13 21:16:48.768760 kernel: pci 0000:0b:00.0: supports D1 D2
Jan 13 21:16:48.768811 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 21:16:48.768862 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Jan 13 21:16:48.768913 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Jan 13 21:16:48.768964 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Jan 13 21:16:48.769016 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Jan 13 21:16:48.769067 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Jan 13 21:16:48.769117 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Jan 13 21:16:48.769167 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Jan 13 21:16:48.769217 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Jan 13 21:16:48.769268 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Jan 13 21:16:48.769318 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Jan 13 21:16:48.769371 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Jan 13 21:16:48.769421 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Jan 13 21:16:48.772506 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Jan 13 21:16:48.772578 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Jan 13 21:16:48.772644 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Jan 13 21:16:48.772698 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Jan 13 21:16:48.772749 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Jan 13 21:16:48.772799 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Jan 13 21:16:48.772854 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Jan 13 21:16:48.772904 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Jan 13 21:16:48.772954 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Jan 13 21:16:48.773005 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Jan 13 21:16:48.773055 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Jan 13 21:16:48.773104 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Jan 13 21:16:48.773155 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Jan 13 21:16:48.773204 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Jan 13 21:16:48.773257 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Jan 13 21:16:48.773308 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Jan 13 21:16:48.773358 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Jan 13 21:16:48.773408 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Jan 13 21:16:48.773457 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Jan 13 21:16:48.773522 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Jan 13 21:16:48.773572 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Jan 13 21:16:48.773621 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Jan 13 21:16:48.773674 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Jan 13 21:16:48.773724 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Jan 13 21:16:48.773774 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Jan 13 21:16:48.773824 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Jan 13 21:16:48.773875 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Jan 13 21:16:48.773926 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Jan 13 21:16:48.773976 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Jan 13 21:16:48.774028 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Jan 13 21:16:48.774079 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Jan 13 21:16:48.774128 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Jan 13 21:16:48.774177 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Jan 13 21:16:48.774227 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Jan 13 21:16:48.774277 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Jan 13 21:16:48.774326 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Jan 13 21:16:48.774377 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Jan 13 21:16:48.774434 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Jan 13 21:16:48.774506 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Jan 13 21:16:48.774558 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Jan 13 21:16:48.774607 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Jan 13 21:16:48.774656 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Jan 13 21:16:48.774707 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Jan 13 21:16:48.774756 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Jan 13 21:16:48.774804 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Jan 13 21:16:48.774857 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Jan 13 21:16:48.774908 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Jan 13 21:16:48.774958 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Jan 13 21:16:48.775008 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Jan 13 21:16:48.775058 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Jan 13 21:16:48.775145 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Jan 13 21:16:48.775198 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Jan 13 21:16:48.775248 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Jan 13 21:16:48.775302 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Jan 13 21:16:48.775352 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Jan 13 21:16:48.775402 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Jan 13 21:16:48.775454 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Jan 13 21:16:48.775557 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Jan 13 21:16:48.775986 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Jan 13 21:16:48.776060 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Jan 13 21:16:48.776125 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Jan 13 21:16:48.776181 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Jan 13 21:16:48.776246 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Jan 13 21:16:48.776309 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Jan 13 21:16:48.776361 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Jan 13 21:16:48.776411 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Jan 13 21:16:48.776461 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Jan 13 21:16:48.776545 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Jan 13 21:16:48.776554 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Jan 13 21:16:48.776563 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Jan 13 21:16:48.776569 kernel: ACPI: PCI: Interrupt link LNKB disabled
Jan 13 21:16:48.776575 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:16:48.776581 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Jan 13 21:16:48.776587 kernel: iommu: Default domain type: Translated
Jan 13 21:16:48.776593 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:16:48.776599 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:16:48.776605 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:16:48.776611 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Jan 13 21:16:48.776618 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Jan 13 21:16:48.776668 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Jan 13 21:16:48.776718 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Jan 13 21:16:48.776778 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:16:48.776787 kernel: vgaarb: loaded
Jan 13 21:16:48.776795 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Jan 13 21:16:48.776803 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Jan 13 21:16:48.776812 kernel: clocksource: Switched to clocksource tsc-early
Jan 13 21:16:48.776821 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:16:48.776833 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:16:48.776840 kernel: pnp: PnP ACPI init
Jan 13 21:16:48.776908 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Jan 13 21:16:48.776964 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Jan 13 21:16:48.777012 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Jan 13 21:16:48.777071 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Jan 13 21:16:48.777141 kernel: pnp 00:06: [dma 2]
Jan 13 21:16:48.777198 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Jan 13 21:16:48.777245 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Jan 13 21:16:48.777291 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Jan 13 21:16:48.777299 kernel: pnp: PnP ACPI: found 8 devices
Jan 13 21:16:48.777306 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:16:48.777312 kernel: NET: Registered PF_INET protocol family
Jan 13 21:16:48.777318 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:16:48.777324 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 21:16:48.777332 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:16:48.777338 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:16:48.777347 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 21:16:48.777355 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 21:16:48.777364 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:16:48.777372 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:16:48.777379 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:16:48.777387 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:16:48.777458 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Jan 13 21:16:48.779545 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 13 21:16:48.779610 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 13 21:16:48.779669 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 13 21:16:48.779727 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 13 21:16:48.779792 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Jan 13 21:16:48.779849 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Jan 13 21:16:48.779905 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Jan 13 21:16:48.779980 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Jan 13 21:16:48.780046 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Jan 13 21:16:48.780112 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Jan 13 21:16:48.780171 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Jan 13 21:16:48.780226 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Jan 13 21:16:48.780289 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Jan 13 21:16:48.780345 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Jan 13 21:16:48.780399 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Jan 13 21:16:48.781205 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Jan 13 21:16:48.781278 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Jan 13 21:16:48.781357 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Jan 13 21:16:48.781423 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Jan 13 21:16:48.781485 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Jan 13 21:16:48.781542 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Jan 13 21:16:48.781613 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Jan 13 21:16:48.781677 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
Jan 13 21:16:48.781739 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
Jan 13 21:16:48.781802 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.781855 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.781922 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.781983 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782044 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782095 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782155 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782226 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782280 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782342 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782395 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782445 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782530 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782596 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782649 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782702 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782765 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782821 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782873 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.782933 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.782988 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783038 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783100 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783164 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783224 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783275 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783330 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783389 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783446 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783540 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783596 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783649 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783703 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783774 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783839 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.783889 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.783944 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784004 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784054 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784126 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784180 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784230 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784306 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784383 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784464 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784579 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784632 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784683 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784746 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784798 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784848 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.784911 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.784981 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785034 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785083 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785137 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785187 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785239 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785292 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785344 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785395 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785447 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785602 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785658 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785718 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785769 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785819 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785879 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.785935 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.785986 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786039 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.786101 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786169 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.786227 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786278 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.786332 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786386 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.786437 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786507 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.786560 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786614 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.786676 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786745 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Jan 13 21:16:48.786796 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Jan 13 21:16:48.786859 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 13 21:16:48.786911 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Jan 13 21:16:48.786966 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Jan 13 21:16:48.787015 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Jan 13 21:16:48.787068 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Jan 13 21:16:48.787135 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref]
Jan 13 21:16:48.787196 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Jan 13 21:16:48.787253 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Jan 13 21:16:48.787321 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Jan 13 21:16:48.787379 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Jan 13 21:16:48.787447 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Jan 13 21:16:48.787520 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Jan 13 21:16:48.787573 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Jan 13 21:16:48.787624 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Jan 13 21:16:48.787701 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Jan 13 21:16:48.787756 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Jan 13 21:16:48.787811 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Jan 13 21:16:48.787866 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Jan 13 21:16:48.787939 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Jan 13 21:16:48.788000 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Jan 13 21:16:48.788067 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Jan 13 21:16:48.788120 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Jan 13 21:16:48.788171 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Jan 13 21:16:48.788221 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Jan 13 21:16:48.788278 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Jan 13 21:16:48.788333 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Jan 13 21:16:48.788394 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Jan 13 21:16:48.788449 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Jan 13 21:16:48.788524 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Jan 13 21:16:48.788585 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Jan 13 21:16:48.788638 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Jan 13 21:16:48.788697 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Jan 13 21:16:48.788748 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Jan 13 21:16:48.788802 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
Jan 13 21:16:48.788867 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Jan 13 21:16:48.788921 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Jan 13 21:16:48.788984 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Jan 13 21:16:48.789039 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Jan 13 21:16:48.789105 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Jan 13 21:16:48.789166 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Jan 13 21:16:48.789224 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Jan 13 21:16:48.789282 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Jan 13 21:16:48.789334 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Jan 13 21:16:48.789387 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Jan 13 21:16:48.789448 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Jan 13 21:16:48.790046 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Jan 13 21:16:48.790116 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Jan 13 21:16:48.790173 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Jan 13 21:16:48.790232 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Jan 13 21:16:48.790297 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Jan 13 21:16:48.790366 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Jan 13 21:16:48.790418 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Jan 13 21:16:48.790605 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Jan 13 21:16:48.790667 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Jan 13 21:16:48.790725 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Jan 13 21:16:48.790791 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Jan 13 21:16:48.790855 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Jan 13 21:16:48.790921 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Jan 13 21:16:48.790974 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Jan 13 21:16:48.791023 kernel: pci 0000:00:16.7:
bridge window [mem 0xfb800000-0xfb8fffff] Jan 13 21:16:48.791083 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 13 21:16:48.791136 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 13 21:16:48.791189 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 13 21:16:48.791248 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 13 21:16:48.791302 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 13 21:16:48.791369 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 13 21:16:48.791431 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 13 21:16:48.791498 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 13 21:16:48.791549 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 13 21:16:48.791604 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 13 21:16:48.791660 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 13 21:16:48.791710 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 13 21:16:48.791766 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 13 21:16:48.791824 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 13 21:16:48.791884 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 13 21:16:48.791948 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 13 21:16:48.792014 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 13 21:16:48.792070 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 13 21:16:48.792120 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 13 21:16:48.792180 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 13 21:16:48.792233 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 13 21:16:48.792286 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 13 
21:16:48.792348 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 13 21:16:48.792404 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 13 21:16:48.792482 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 13 21:16:48.792554 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 13 21:16:48.792614 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 13 21:16:48.792666 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 13 21:16:48.792721 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 13 21:16:48.792778 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 13 21:16:48.792832 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 13 21:16:48.792894 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 13 21:16:48.792952 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 13 21:16:48.793016 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 13 21:16:48.793078 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 13 21:16:48.793145 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 13 21:16:48.793199 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 13 21:16:48.793250 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 13 21:16:48.793309 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 13 21:16:48.793363 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 13 21:16:48.793417 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 13 21:16:48.793484 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 13 21:16:48.793545 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 13 21:16:48.793604 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 13 21:16:48.793661 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Jan 13 21:16:48.793720 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 13 21:16:48.793775 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 13 21:16:48.793826 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 13 21:16:48.793883 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 13 21:16:48.793940 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 13 21:16:48.793995 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 13 21:16:48.794050 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 13 21:16:48.794114 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 13 21:16:48.794169 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 13 21:16:48.794225 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jan 13 21:16:48.794276 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 13 21:16:48.794330 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 13 21:16:48.794375 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jan 13 21:16:48.794422 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jan 13 21:16:48.795095 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jan 13 21:16:48.795155 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jan 13 21:16:48.795211 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 13 21:16:48.795270 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jan 13 21:16:48.795323 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 13 21:16:48.795376 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 13 21:16:48.795434 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jan 13 21:16:48.795845 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jan 13 21:16:48.795913 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jan 13 21:16:48.795971 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jan 13 21:16:48.796027 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jan 13 21:16:48.796083 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jan 13 21:16:48.796131 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jan 13 21:16:48.796178 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jan 13 21:16:48.796247 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jan 13 21:16:48.796296 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jan 13 21:16:48.796351 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jan 13 21:16:48.796407 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jan 13 21:16:48.796462 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jan 13 21:16:48.796535 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jan 13 21:16:48.796594 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 13 21:16:48.796650 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jan 13 21:16:48.796696 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jan 13 21:16:48.796747 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jan 13 21:16:48.796794 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jan 13 21:16:48.796857 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jan 13 21:16:48.796915 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jan 13 21:16:48.796973 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jan 13 21:16:48.797039 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jan 13 21:16:48.797101 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jan 13 21:16:48.797161 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Jan 13 21:16:48.797212 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jan 13 21:16:48.797271 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jan 13 21:16:48.797326 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jan 13 21:16:48.797375 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jan 13 21:16:48.797442 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jan 13 21:16:48.797505 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jan 13 21:16:48.797561 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 13 21:16:48.797620 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jan 13 21:16:48.797679 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 13 21:16:48.797739 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jan 13 21:16:48.797792 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jan 13 21:16:48.797857 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jan 13 21:16:48.797914 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jan 13 21:16:48.797966 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jan 13 21:16:48.798027 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 13 21:16:48.798082 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jan 13 21:16:48.798130 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jan 13 21:16:48.798181 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 13 21:16:48.798236 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jan 13 21:16:48.798295 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jan 13 21:16:48.798350 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jan 13 21:16:48.798413 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Jan 13 21:16:48.798463 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jan 13 21:16:48.798536 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jan 13 21:16:48.798603 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jan 13 21:16:48.798653 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 13 21:16:48.798713 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jan 13 21:16:48.798765 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 13 21:16:48.798820 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jan 13 21:16:48.798878 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jan 13 21:16:48.798932 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jan 13 21:16:48.798989 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jan 13 21:16:48.799040 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jan 13 21:16:48.799090 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 13 21:16:48.799152 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 13 21:16:48.799201 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jan 13 21:16:48.799257 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jan 13 21:16:48.799313 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jan 13 21:16:48.799369 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jan 13 21:16:48.799423 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jan 13 21:16:48.799551 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jan 13 21:16:48.799603 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jan 13 21:16:48.799656 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jan 13 21:16:48.799710 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Jan 13 21:16:48.799766 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jan 13 21:16:48.799824 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jan 13 21:16:48.799883 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jan 13 21:16:48.799941 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jan 13 21:16:48.799994 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jan 13 21:16:48.800044 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jan 13 21:16:48.800103 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jan 13 21:16:48.800151 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 13 21:16:48.800209 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 13 21:16:48.800220 kernel: PCI: CLS 32 bytes, default 64 Jan 13 21:16:48.800230 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 13 21:16:48.800237 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 13 21:16:48.800248 kernel: clocksource: Switched to clocksource tsc Jan 13 21:16:48.800255 kernel: Initialise system trusted keyrings Jan 13 21:16:48.800262 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 13 21:16:48.800268 kernel: Key type asymmetric registered Jan 13 21:16:48.800276 kernel: Asymmetric key parser 'x509' registered Jan 13 21:16:48.800283 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 21:16:48.800289 kernel: io scheduler mq-deadline registered Jan 13 21:16:48.800297 kernel: io scheduler kyber registered Jan 13 21:16:48.800303 kernel: io scheduler bfq registered Jan 13 21:16:48.800361 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jan 13 21:16:48.800423 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.800495 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jan 13 21:16:48.800554 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.800613 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jan 13 21:16:48.800676 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.800729 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jan 13 21:16:48.800784 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.800844 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jan 13 21:16:48.800905 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.800966 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jan 13 21:16:48.801025 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.801091 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jan 13 21:16:48.801144 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.801200 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jan 13 21:16:48.801262 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.801316 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jan 13 21:16:48.801371 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.801433 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jan 13 21:16:48.801498 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.801710 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jan 13 21:16:48.801780 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.801838 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jan 13 21:16:48.801893 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.801952 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jan 13 21:16:48.802015 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.802072 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jan 13 21:16:48.802137 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.802197 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jan 13 21:16:48.802252 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.802305 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jan 13 21:16:48.802357 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.802695 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jan 13 21:16:48.802764 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.803540 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jan 13 21:16:48.803602 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.803666 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jan 13 21:16:48.803729 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.803786 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jan 13 21:16:48.803849 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.803912 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jan 13 21:16:48.803974 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.804029 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jan 13 21:16:48.804081 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.804134 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jan 13 21:16:48.804192 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.804253 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jan 13 21:16:48.804315 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.804372 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jan 13 21:16:48.804438 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.805609 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jan 13 21:16:48.805675 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.805741 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jan 13 21:16:48.805797 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.805850 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jan 13 21:16:48.805907 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.805974 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jan 13 21:16:48.806033 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.806096 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jan 13 21:16:48.806165 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.806228 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jan 13 21:16:48.806286 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.806350 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jan 13 21:16:48.806402 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 13 21:16:48.806411 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Jan 13 21:16:48.806418 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:16:48.806425 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 21:16:48.806431 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jan 13 21:16:48.806438 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 21:16:48.806446 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 21:16:48.806528 kernel: rtc_cmos 00:01: registered as rtc0 Jan 13 21:16:48.806593 kernel: rtc_cmos 00:01: setting system clock to 2025-01-13T21:16:48 UTC (1736803008) Jan 13 21:16:48.806605 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 21:16:48.806654 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jan 13 21:16:48.806664 kernel: intel_pstate: CPU model not supported Jan 13 21:16:48.806671 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:16:48.806677 kernel: Segment Routing with IPv6 Jan 13 21:16:48.806686 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:16:48.806692 kernel: NET: Registered PF_PACKET protocol family Jan 13 21:16:48.806701 kernel: Key type dns_resolver registered Jan 13 21:16:48.806708 kernel: IPI shorthand broadcast: enabled Jan 13 21:16:48.806718 kernel: sched_clock: Marking stable (915003407, 232819235)->(1211295718, -63473076) Jan 13 21:16:48.806726 kernel: registered taskstats version 1 Jan 13 21:16:48.806732 kernel: Loading compiled-in X.509 certificates Jan 13 21:16:48.806739 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 13 21:16:48.806745 kernel: Key type .fscrypt registered Jan 13 21:16:48.806753 kernel: Key type fscrypt-provisioning registered Jan 13 21:16:48.806759 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 21:16:48.806768 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:16:48.806775 kernel: ima: No architecture policies found Jan 13 21:16:48.806781 kernel: clk: Disabling unused clocks Jan 13 21:16:48.806787 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 21:16:48.806793 kernel: Write protecting the kernel read-only data: 36864k Jan 13 21:16:48.806800 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 21:16:48.806807 kernel: Run /init as init process Jan 13 21:16:48.806818 kernel: with arguments: Jan 13 21:16:48.806828 kernel: /init Jan 13 21:16:48.806836 kernel: with environment: Jan 13 21:16:48.806842 kernel: HOME=/ Jan 13 21:16:48.806848 kernel: TERM=linux Jan 13 21:16:48.806854 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:16:48.806864 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:16:48.806874 systemd[1]: Detected virtualization vmware. Jan 13 21:16:48.806885 systemd[1]: Detected architecture x86-64. Jan 13 21:16:48.806892 systemd[1]: Running in initrd. Jan 13 21:16:48.806898 systemd[1]: No hostname configured, using default hostname. Jan 13 21:16:48.806904 systemd[1]: Hostname set to . Jan 13 21:16:48.806911 systemd[1]: Initializing machine ID from random generator. Jan 13 21:16:48.806918 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:16:48.806925 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:16:48.806931 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 13 21:16:48.806940 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:16:48.806946 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:16:48.806953 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:16:48.806959 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:16:48.806967 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:16:48.806974 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:16:48.806981 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:16:48.806988 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:16:48.806994 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:16:48.807002 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:16:48.807009 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:16:48.807015 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:16:48.807024 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:16:48.807032 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:16:48.807042 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:16:48.807051 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:16:48.807058 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:16:48.807064 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:16:48.807071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 13 21:16:48.807077 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:16:48.807083 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:16:48.807090 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:16:48.807096 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:16:48.807103 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:16:48.807110 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:16:48.807117 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:16:48.807123 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:16:48.807132 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:16:48.807154 systemd-journald[215]: Collecting audit messages is disabled. Jan 13 21:16:48.807172 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:16:48.807182 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:16:48.807189 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:16:48.807202 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:16:48.807209 kernel: Bridge firewalling registered Jan 13 21:16:48.807215 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:16:48.807222 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:16:48.807231 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:16:48.807237 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:16:48.807244 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 13 21:16:48.807253 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:16:48.807262 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:16:48.807269 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:16:48.807276 systemd-journald[215]: Journal started Jan 13 21:16:48.807292 systemd-journald[215]: Runtime Journal (/run/log/journal/f447d01e665249af8706910bbb270349) is 4.8M, max 38.6M, 33.8M free. Jan 13 21:16:48.756158 systemd-modules-load[216]: Inserted module 'overlay' Jan 13 21:16:48.772607 systemd-modules-load[216]: Inserted module 'br_netfilter' Jan 13 21:16:48.817224 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:16:48.817252 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:16:48.816953 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:16:48.820134 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:16:48.824637 dracut-cmdline[238]: dracut-dracut-053 Jan 13 21:16:48.828382 dracut-cmdline[238]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:16:48.829313 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:16:48.835821 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:16:48.851696 systemd-resolved[264]: Positive Trust Anchors: Jan 13 21:16:48.851705 systemd-resolved[264]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:16:48.851728 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:16:48.853690 systemd-resolved[264]: Defaulting to hostname 'linux'. Jan 13 21:16:48.854223 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:16:48.854356 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:16:48.877484 kernel: SCSI subsystem initialized Jan 13 21:16:48.883481 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:16:48.891486 kernel: iscsi: registered transport (tcp) Jan 13 21:16:48.906655 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:16:48.906697 kernel: QLogic iSCSI HBA Driver Jan 13 21:16:48.929583 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:16:48.933693 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:16:48.948640 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 21:16:48.948673 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:16:48.949949 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:16:48.983487 kernel: raid6: avx2x4 gen() 40709 MB/s Jan 13 21:16:49.000493 kernel: raid6: avx2x2 gen() 40075 MB/s Jan 13 21:16:49.017663 kernel: raid6: avx2x1 gen() 39785 MB/s Jan 13 21:16:49.017685 kernel: raid6: using algorithm avx2x4 gen() 40709 MB/s Jan 13 21:16:49.035679 kernel: raid6: .... xor() 21642 MB/s, rmw enabled Jan 13 21:16:49.035708 kernel: raid6: using avx2x2 recovery algorithm Jan 13 21:16:49.049489 kernel: xor: automatically using best checksumming function avx Jan 13 21:16:49.152498 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:16:49.158008 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:16:49.162562 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:16:49.170172 systemd-udevd[433]: Using default interface naming scheme 'v255'. Jan 13 21:16:49.172977 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:16:49.176652 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:16:49.185278 dracut-pre-trigger[442]: rd.md=0: removing MD RAID activation Jan 13 21:16:49.199865 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:16:49.201604 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:16:49.275080 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:16:49.278563 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:16:49.294508 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:16:49.295058 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 13 21:16:49.295972 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:16:49.296255 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:16:49.302644 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:16:49.310788 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:16:49.350486 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jan 13 21:16:49.358488 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jan 13 21:16:49.359668 kernel: vmw_pvscsi: using 64bit dma Jan 13 21:16:49.359687 kernel: vmw_pvscsi: max_id: 16 Jan 13 21:16:49.359695 kernel: vmw_pvscsi: setting ring_pages to 8 Jan 13 21:16:49.366873 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jan 13 21:16:49.382330 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 21:16:49.382343 kernel: vmw_pvscsi: enabling reqCallThreshold Jan 13 21:16:49.382355 kernel: vmw_pvscsi: driver-based request coalescing enabled Jan 13 21:16:49.382364 kernel: vmw_pvscsi: using MSI-X Jan 13 21:16:49.382373 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jan 13 21:16:49.382490 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jan 13 21:16:49.382512 kernel: libata version 3.00 loaded. Jan 13 21:16:49.381140 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:16:49.381237 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:16:49.381456 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:16:49.381577 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:16:49.381655 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:16:49.381784 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 21:16:49.386223 kernel: ata_piix 0000:00:07.1: version 2.13 Jan 13 21:16:49.395655 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jan 13 21:16:49.395754 kernel: scsi host1: ata_piix Jan 13 21:16:49.395826 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jan 13 21:16:49.395904 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jan 13 21:16:49.395973 kernel: scsi host2: ata_piix Jan 13 21:16:49.396034 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jan 13 21:16:49.396047 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jan 13 21:16:49.396055 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 21:16:49.396062 kernel: AES CTR mode by8 optimization enabled Jan 13 21:16:49.386863 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:16:49.406417 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:16:49.411594 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:16:49.424984 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 21:16:49.549490 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jan 13 21:16:49.564543 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jan 13 21:16:49.580510 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jan 13 21:16:49.586640 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 13 21:16:49.586717 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jan 13 21:16:49.586780 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jan 13 21:16:49.586841 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jan 13 21:16:49.586900 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:16:49.586909 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 13 21:16:49.588806 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jan 13 21:16:49.601328 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 21:16:49.601345 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 21:16:49.654487 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (492) Jan 13 21:16:49.656735 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jan 13 21:16:49.659520 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jan 13 21:16:49.661475 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (489) Jan 13 21:16:49.664406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jan 13 21:16:49.667799 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jan 13 21:16:49.668063 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jan 13 21:16:49.674113 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 13 21:16:49.724495 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:16:49.728475 kernel: GPT:disk_guids don't match. Jan 13 21:16:49.728502 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:16:49.728510 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:16:50.738510 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:16:50.738776 disk-uuid[589]: The operation has completed successfully. Jan 13 21:16:50.787237 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:16:50.787305 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:16:50.792564 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:16:50.794599 sh[608]: Success Jan 13 21:16:50.802486 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 13 21:16:50.897157 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:16:50.903333 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:16:50.903686 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 21:16:50.924488 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 21:16:50.924526 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:16:50.924535 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:16:50.924542 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:16:50.924715 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:16:50.943487 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 21:16:50.957421 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:16:50.967606 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... 
Jan 13 21:16:50.968926 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:16:50.985619 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:16:50.985662 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:16:50.985671 kernel: BTRFS info (device sda6): using free space tree Jan 13 21:16:51.012480 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 21:16:51.017667 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:16:51.019481 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:16:51.021242 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:16:51.029633 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:16:51.063297 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 13 21:16:51.068552 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 13 21:16:51.136391 ignition[668]: Ignition 2.19.0 Jan 13 21:16:51.136398 ignition[668]: Stage: fetch-offline Jan 13 21:16:51.136416 ignition[668]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:16:51.136421 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 13 21:16:51.136496 ignition[668]: parsed url from cmdline: "" Jan 13 21:16:51.136498 ignition[668]: no config URL provided Jan 13 21:16:51.136502 ignition[668]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:16:51.136507 ignition[668]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:16:51.136868 ignition[668]: config successfully fetched Jan 13 21:16:51.136884 ignition[668]: parsing config with SHA512: bd47f444a28c99b0c5641d0e125045736d2be2f4007d9c01d2edfe57f86013ccb59b242afb0c907ae4258660c8e4d986a984ed0284c78c49fbc670d1fe240b79 Jan 13 21:16:51.138967 unknown[668]: fetched base config from "system" Jan 13 21:16:51.139205 ignition[668]: fetch-offline: fetch-offline passed Jan 13 21:16:51.138973 unknown[668]: fetched user config from "vmware" Jan 13 21:16:51.139241 ignition[668]: Ignition finished successfully Jan 13 21:16:51.139909 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:16:51.146537 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:16:51.152633 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:16:51.164101 systemd-networkd[802]: lo: Link UP Jan 13 21:16:51.164108 systemd-networkd[802]: lo: Gained carrier Jan 13 21:16:51.164820 systemd-networkd[802]: Enumeration completed Jan 13 21:16:51.164997 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:16:51.165074 systemd-networkd[802]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jan 13 21:16:51.165149 systemd[1]: Reached target network.target - Network. 
Jan 13 21:16:51.165236 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 21:16:51.168580 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jan 13 21:16:51.168690 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jan 13 21:16:51.168308 systemd-networkd[802]: ens192: Link UP Jan 13 21:16:51.168310 systemd-networkd[802]: ens192: Gained carrier Jan 13 21:16:51.172603 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:16:51.180617 ignition[805]: Ignition 2.19.0 Jan 13 21:16:51.180625 ignition[805]: Stage: kargs Jan 13 21:16:51.180725 ignition[805]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:16:51.180731 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 13 21:16:51.181257 ignition[805]: kargs: kargs passed Jan 13 21:16:51.181283 ignition[805]: Ignition finished successfully Jan 13 21:16:51.182363 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:16:51.186581 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:16:51.193909 ignition[812]: Ignition 2.19.0 Jan 13 21:16:51.194256 ignition[812]: Stage: disks Jan 13 21:16:51.194530 ignition[812]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:16:51.194536 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 13 21:16:51.195450 ignition[812]: disks: disks passed Jan 13 21:16:51.195509 ignition[812]: Ignition finished successfully Jan 13 21:16:51.196463 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:16:51.196809 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:16:51.196920 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:16:51.197031 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 13 21:16:51.197126 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:16:51.197222 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:16:51.201569 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:16:51.212387 systemd-fsck[821]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 21:16:51.213747 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:16:51.218546 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:16:51.313517 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 13 21:16:51.313754 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:16:51.314242 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:16:51.319531 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:16:51.320521 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:16:51.321600 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:16:51.321632 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:16:51.321649 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:16:51.325303 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:16:51.326652 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 13 21:16:51.328388 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (829) Jan 13 21:16:51.328410 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:16:51.329739 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:16:51.329760 kernel: BTRFS info (device sda6): using free space tree Jan 13 21:16:51.333863 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 21:16:51.334138 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:16:51.360873 initrd-setup-root[853]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:16:51.363497 initrd-setup-root[860]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:16:51.365935 initrd-setup-root[867]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:16:51.368184 initrd-setup-root[874]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:16:51.428517 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:16:51.432596 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:16:51.435070 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:16:51.438550 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:16:51.452537 ignition[942]: INFO : Ignition 2.19.0 Jan 13 21:16:51.452537 ignition[942]: INFO : Stage: mount Jan 13 21:16:51.452537 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:16:51.452537 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 13 21:16:51.453828 ignition[942]: INFO : mount: mount passed Jan 13 21:16:51.453828 ignition[942]: INFO : Ignition finished successfully Jan 13 21:16:51.453151 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:16:51.454016 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jan 13 21:16:51.457558 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:16:51.920956 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:16:51.926598 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:16:51.936482 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (954) Jan 13 21:16:51.938664 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:16:51.938681 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:16:51.938689 kernel: BTRFS info (device sda6): using free space tree Jan 13 21:16:51.942479 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 21:16:51.943121 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:16:51.955043 ignition[971]: INFO : Ignition 2.19.0 Jan 13 21:16:51.955313 ignition[971]: INFO : Stage: files Jan 13 21:16:51.955417 ignition[971]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:16:51.955417 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 13 21:16:51.955952 ignition[971]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:16:51.958311 ignition[971]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:16:51.958311 ignition[971]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:16:51.962763 ignition[971]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:16:51.962998 ignition[971]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:16:51.963350 unknown[971]: wrote ssh authorized keys file for user: core Jan 13 21:16:51.963764 ignition[971]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:16:51.964752 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] 
writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:16:51.965264 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:16:52.006747 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 21:16:52.178800 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:16:52.178800 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:16:52.179284 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:16:52.179284 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:16:52.179284 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:16:52.179284 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:16:52.179284 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:16:52.179284 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:16:52.179284 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:16:52.180634 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:16:52.180634 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Jan 13 21:16:52.180634 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:16:52.180634 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:16:52.180634 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:16:52.180634 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 13 21:16:52.668316 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 21:16:52.943171 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:16:52.943171 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 13 21:16:52.943697 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 13 21:16:52.943697 ignition[971]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 21:16:52.943697 ignition[971]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:16:52.944178 ignition[971]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:16:52.944178 ignition[971]: INFO : 
files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 21:16:52.944178 ignition[971]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 13 21:16:52.944178 ignition[971]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:16:52.944178 ignition[971]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:16:52.944178 ignition[971]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 13 21:16:52.944178 ignition[971]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 21:16:52.983193 ignition[971]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:16:52.985492 ignition[971]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:16:52.985492 ignition[971]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 21:16:52.985492 ignition[971]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:16:52.985492 ignition[971]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:16:52.985492 ignition[971]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:16:52.985492 ignition[971]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:16:52.985492 ignition[971]: INFO : files: files passed Jan 13 21:16:52.985492 ignition[971]: INFO : Ignition finished successfully Jan 13 21:16:52.987020 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 13 21:16:52.990563 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:16:52.991670 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:16:52.992909 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:16:52.993081 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:16:52.997581 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:16:52.997581 initrd-setup-root-after-ignition[1001]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:16:52.998653 initrd-setup-root-after-ignition[1005]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:16:52.999392 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:16:52.999827 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:16:53.002575 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:16:53.013737 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:16:53.013792 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:16:53.014178 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:16:53.014311 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:16:53.014523 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:16:53.014934 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:16:53.023798 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:16:53.028550 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jan 13 21:16:53.033399 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:16:53.033566 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:16:53.033798 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:16:53.033986 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:16:53.034052 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:16:53.034400 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:16:53.034571 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:16:53.034747 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:16:53.034931 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:16:53.035127 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:16:53.035498 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:16:53.035663 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:16:53.035867 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:16:53.036069 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:16:53.036253 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:16:53.036409 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:16:53.036525 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:16:53.036784 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:16:53.037015 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:16:53.037193 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:16:53.037232 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 13 21:16:53.037407 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:16:53.037465 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:16:53.037822 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:16:53.037886 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:16:53.038111 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:16:53.038240 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:16:53.043486 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:16:53.043665 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:16:53.043857 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:16:53.044047 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:16:53.044115 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:16:53.044299 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:16:53.044344 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:16:53.044583 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:16:53.044642 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:16:53.044883 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:16:53.044938 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:16:53.053559 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:16:53.053663 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:16:53.053727 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:16:53.055294 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 13 21:16:53.055408 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:16:53.055504 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:16:53.055784 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:16:53.055862 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:16:53.059240 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:16:53.059571 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:16:53.062474 ignition[1025]: INFO : Ignition 2.19.0
Jan 13 21:16:53.062474 ignition[1025]: INFO : Stage: umount
Jan 13 21:16:53.062474 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:16:53.062474 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 13 21:16:53.065072 ignition[1025]: INFO : umount: umount passed
Jan 13 21:16:53.065072 ignition[1025]: INFO : Ignition finished successfully
Jan 13 21:16:53.064023 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:16:53.064082 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:16:53.064257 systemd[1]: Stopped target network.target - Network.
Jan 13 21:16:53.064350 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:16:53.064375 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:16:53.064500 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:16:53.064522 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:16:53.064666 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:16:53.064687 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:16:53.064820 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:16:53.064840 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:16:53.065050 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:16:53.065203 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:16:53.070046 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:16:53.070102 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:16:53.070463 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:16:53.070504 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:16:53.075221 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:16:53.075361 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:16:53.075390 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:16:53.075641 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jan 13 21:16:53.075665 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jan 13 21:16:53.075853 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:16:53.077416 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:16:53.077760 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:16:53.077810 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:16:53.079784 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:16:53.079833 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:16:53.080299 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:16:53.080322 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:16:53.080717 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:16:53.080895 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:16:53.083809 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:16:53.083873 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:16:53.090797 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:16:53.090877 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:16:53.091149 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:16:53.091173 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:16:53.091384 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:16:53.091400 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:16:53.091565 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:16:53.091587 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:16:53.091856 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:16:53.091877 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:16:53.092316 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:16:53.092337 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:16:53.096547 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:16:53.096653 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:16:53.096679 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:16:53.096805 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:16:53.096828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:16:53.099352 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:16:53.099404 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:16:53.157986 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:16:53.158043 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:16:53.158402 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:16:53.158530 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:16:53.158558 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:16:53.161558 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:16:53.179563 systemd[1]: Switching root.
Jan 13 21:16:53.230573 systemd-journald[215]: Journal stopped
Jan 13 21:16:54.344240 systemd-journald[215]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:16:54.344269 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:16:54.344277 kernel: SELinux: policy capability open_perms=1
Jan 13 21:16:54.344283 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:16:54.344288 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:16:54.344293 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:16:54.344301 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:16:54.344307 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:16:54.344312 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:16:54.344318 systemd[1]: Successfully loaded SELinux policy in 33.060ms.
Jan 13 21:16:54.344325 kernel: audit: type=1403 audit(1736803013.789:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:16:54.344331 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.855ms.
Jan 13 21:16:54.344338 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:16:54.344346 systemd[1]: Detected virtualization vmware.
Jan 13 21:16:54.344353 systemd[1]: Detected architecture x86-64.
Jan 13 21:16:54.344359 systemd[1]: Detected first boot.
Jan 13 21:16:54.344366 systemd[1]: Initializing machine ID from random generator.
Jan 13 21:16:54.344374 zram_generator::config[1068]: No configuration found.
Jan 13 21:16:54.344381 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:16:54.344388 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 13 21:16:54.344396 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Jan 13 21:16:54.344402 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:16:54.344409 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:16:54.344415 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:16:54.344423 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:16:54.344435 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:16:54.344442 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:16:54.344448 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:16:54.344455 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:16:54.344462 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:16:54.344530 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:16:54.344540 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:16:54.344547 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:16:54.344554 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:16:54.344561 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:16:54.344568 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:16:54.344574 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:16:54.344581 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:16:54.344588 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:16:54.344596 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:16:54.344604 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:16:54.344612 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:16:54.344619 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:16:54.344626 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:16:54.344633 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:16:54.344640 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:16:54.344647 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:16:54.344655 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:16:54.344662 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:16:54.344669 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:16:54.344676 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:16:54.344683 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:16:54.344692 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:16:54.344699 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:16:54.344706 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:16:54.344713 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:16:54.344720 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:16:54.344727 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:16:54.344734 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:16:54.344742 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:16:54.344750 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:16:54.344758 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:16:54.344765 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:16:54.344772 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:16:54.344779 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Jan 13 21:16:54.344786 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:16:54.344793 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:16:54.344800 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:16:54.344809 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:16:54.344816 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:16:54.344823 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:16:54.344830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:16:54.344837 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:16:54.344844 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:16:54.344851 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:16:54.344859 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:16:54.344866 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:16:54.344874 kernel: fuse: init (API version 7.39)
Jan 13 21:16:54.344881 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:16:54.344888 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:16:54.344896 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:16:54.344903 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:16:54.344910 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:16:54.344917 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:16:54.344924 systemd[1]: Stopped verity-setup.service.
Jan 13 21:16:54.344932 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:16:54.344939 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:16:54.344946 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:16:54.344954 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:16:54.344961 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:16:54.344968 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:16:54.344975 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:16:54.344982 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:16:54.344989 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:16:54.344997 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:16:54.345019 systemd-journald[1151]: Collecting audit messages is disabled.
Jan 13 21:16:54.345035 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:16:54.345043 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:16:54.345051 systemd-journald[1151]: Journal started
Jan 13 21:16:54.345066 systemd-journald[1151]: Runtime Journal (/run/log/journal/4c6cbb58b2a64f67a395ddb3c36d930b) is 4.8M, max 38.6M, 33.8M free.
Jan 13 21:16:54.165442 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:16:54.179798 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 13 21:16:54.180001 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:16:54.346816 jq[1135]: true
Jan 13 21:16:54.347296 jq[1167]: true
Jan 13 21:16:54.352149 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:16:54.352170 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:16:54.352180 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:16:54.352113 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:16:54.352383 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:16:54.352804 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:16:54.353802 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:16:54.355487 kernel: ACPI: bus type drm_connector registered
Jan 13 21:16:54.354603 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:16:54.359785 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:16:54.359908 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:16:54.360481 kernel: loop: module loaded
Jan 13 21:16:54.360986 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:16:54.361324 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:16:54.369252 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:16:54.373586 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:16:54.375689 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:16:54.375803 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:16:54.375822 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:16:54.378045 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:16:54.386570 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:16:54.391363 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:16:54.391543 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:16:54.393285 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:16:54.396547 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:16:54.396675 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:16:54.399240 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:16:54.399365 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:16:54.401401 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:16:54.404341 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:16:54.405712 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:16:54.405992 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:16:54.406635 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:16:54.406858 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:16:54.415553 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:16:54.434245 systemd-journald[1151]: Time spent on flushing to /var/log/journal/4c6cbb58b2a64f67a395ddb3c36d930b is 80.786ms for 1834 entries.
Jan 13 21:16:54.434245 systemd-journald[1151]: System Journal (/var/log/journal/4c6cbb58b2a64f67a395ddb3c36d930b) is 8.0M, max 584.8M, 576.8M free.
Jan 13 21:16:54.529134 systemd-journald[1151]: Received client request to flush runtime journal.
Jan 13 21:16:54.529161 kernel: loop0: detected capacity change from 0 to 210664
Jan 13 21:16:54.529180 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:16:54.437861 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:16:54.468605 ignition[1175]: Ignition 2.19.0
Jan 13 21:16:54.438030 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:16:54.468930 ignition[1175]: deleting config from guestinfo properties
Jan 13 21:16:54.443667 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:16:54.492493 ignition[1175]: Successfully deleted config
Jan 13 21:16:54.459646 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:16:54.500694 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Jan 13 21:16:54.515660 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:16:54.516302 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:16:54.525116 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:16:54.531716 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:16:54.532387 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:16:54.536477 kernel: loop1: detected capacity change from 0 to 2976
Jan 13 21:16:54.538565 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:16:54.542603 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:16:54.546419 udevadm[1227]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 21:16:54.562639 systemd-tmpfiles[1231]: ACLs are not supported, ignoring.
Jan 13 21:16:54.563319 systemd-tmpfiles[1231]: ACLs are not supported, ignoring.
Jan 13 21:16:54.567156 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:16:54.575484 kernel: loop2: detected capacity change from 0 to 142488
Jan 13 21:16:54.613559 kernel: loop3: detected capacity change from 0 to 140768
Jan 13 21:16:54.678831 kernel: loop4: detected capacity change from 0 to 210664
Jan 13 21:16:54.739493 kernel: loop5: detected capacity change from 0 to 2976
Jan 13 21:16:54.749600 kernel: loop6: detected capacity change from 0 to 142488
Jan 13 21:16:54.772848 kernel: loop7: detected capacity change from 0 to 140768
Jan 13 21:16:54.799863 (sd-merge)[1237]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Jan 13 21:16:54.800122 (sd-merge)[1237]: Merged extensions into '/usr'.
Jan 13 21:16:54.802903 systemd[1]: Reloading requested from client PID 1205 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:16:54.802912 systemd[1]: Reloading...
Jan 13 21:16:54.860371 zram_generator::config[1259]: No configuration found.
Jan 13 21:16:54.967181 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 13 21:16:54.983744 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:16:55.020083 ldconfig[1200]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:16:55.022439 systemd[1]: Reloading finished in 219 ms.
Jan 13 21:16:55.048244 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:16:55.048561 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:16:55.048824 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:16:55.055615 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:16:55.056550 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:16:55.058560 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:16:55.070542 systemd[1]: Reloading requested from client PID 1320 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:16:55.070552 systemd[1]: Reloading...
Jan 13 21:16:55.073645 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:16:55.075704 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:16:55.076206 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:16:55.076371 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Jan 13 21:16:55.076407 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Jan 13 21:16:55.081617 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
Jan 13 21:16:55.083931 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:16:55.083988 systemd-tmpfiles[1321]: Skipping /boot
Jan 13 21:16:55.090978 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:16:55.090985 systemd-tmpfiles[1321]: Skipping /boot
Jan 13 21:16:55.118554 zram_generator::config[1355]: No configuration found.
Jan 13 21:16:55.194479 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 21:16:55.200575 kernel: ACPI: button: Power Button [PWRF]
Jan 13 21:16:55.225193 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 13 21:16:55.244963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:16:55.268543 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1388)
Jan 13 21:16:55.277915 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 21:16:55.278186 systemd[1]: Reloading finished in 207 ms.
Jan 13 21:16:55.289292 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:16:55.290151 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:16:55.299532 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Jan 13 21:16:55.314111 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:16:55.318707 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:16:55.324732 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:16:55.327485 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Jan 13 21:16:55.342806 kernel: Guest personality initialized and is active
Jan 13 21:16:55.329555 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:16:55.330254 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:16:55.331737 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:16:55.333898 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:16:55.337644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:16:55.337832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:16:55.339953 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:16:55.342766 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:16:55.345841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:16:55.356577 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 21:16:55.359493 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jan 13 21:16:55.359521 kernel: Initialized host personality
Jan 13 21:16:55.367737 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:16:55.367916 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:16:55.371514 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Jan 13 21:16:55.379808 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:16:55.380463 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:16:55.381224 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:16:55.381376 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:16:55.384376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:16:55.384481 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:16:55.384723 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:16:55.384797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:16:55.385848 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Jan 13 21:16:55.386768 augenrules[1464]: No rules
Jan 13 21:16:55.387757 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:16:55.389616 (udev-worker)[1386]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Jan 13 21:16:55.392011 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:16:55.395673 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:16:55.398482 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 21:16:55.401992 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:16:55.402759 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:16:55.402853 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:16:55.404751 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:16:55.410648 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:16:55.413604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:16:55.414317 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:16:55.419111 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:16:55.425715 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:16:55.426013 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:16:55.436084 lvm[1485]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:16:55.439557 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:16:55.455515 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:16:55.455759 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:16:55.461548 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:16:55.462298 lvm[1496]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:16:55.474197 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:16:55.474432 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:16:55.486518 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:16:55.502298 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 21:16:55.502530 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:16:55.504110 systemd-networkd[1447]: lo: Link UP
Jan 13 21:16:55.504114 systemd-networkd[1447]: lo: Gained carrier
Jan 13 21:16:55.505086 systemd-networkd[1447]: Enumeration completed
Jan 13 21:16:55.505173 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:16:55.509476 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Jan 13 21:16:55.509622 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Jan 13 21:16:55.506035 systemd-networkd[1447]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Jan 13 21:16:55.506543 systemd-timesyncd[1452]: No network connectivity, watching for changes.
Jan 13 21:16:55.510424 systemd-networkd[1447]: ens192: Link UP
Jan 13 21:16:55.510539 systemd-networkd[1447]: ens192: Gained carrier
Jan 13 21:16:55.510769 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:16:55.514699 systemd-timesyncd[1452]: Network configuration changed, trying to establish connection.
Jan 13 21:16:55.516059 systemd-resolved[1451]: Positive Trust Anchors:
Jan 13 21:16:55.516216 systemd-resolved[1451]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:16:55.516268 systemd-resolved[1451]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:16:55.519529 systemd-resolved[1451]: Defaulting to hostname 'linux'.
Jan 13 21:16:55.520674 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:16:55.522153 systemd[1]: Reached target network.target - Network.
Jan 13 21:16:55.522288 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:16:55.522646 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:16:55.523099 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:16:55.523335 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:16:55.523512 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:16:55.523760 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:16:55.523960 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:16:55.524104 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:16:55.524243 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:16:55.524261 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:16:55.524431 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:16:55.524871 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:16:55.526053 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:16:55.529589 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 21:16:55.530130 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:16:55.530267 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:16:55.530352 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:16:55.530459 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:16:55.530530 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:16:55.531294 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:16:55.534711 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:16:55.535848 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:16:55.538595 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:16:55.539631 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:16:55.540663 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:16:55.541613 jq[1509]: false
Jan 13 21:16:55.547633 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 21:16:55.550598 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:16:55.560572 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:16:55.562007 extend-filesystems[1510]: Found loop4
Jan 13 21:16:55.562278 extend-filesystems[1510]: Found loop5
Jan 13 21:16:55.562406 extend-filesystems[1510]: Found loop6
Jan 13 21:16:55.562538 extend-filesystems[1510]: Found loop7
Jan 13 21:16:55.563133 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:16:55.563356 extend-filesystems[1510]: Found sda
Jan 13 21:16:55.563452 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 21:16:55.563817 extend-filesystems[1510]: Found sda1
Jan 13 21:16:55.563817 extend-filesystems[1510]: Found sda2
Jan 13 21:16:55.563817 extend-filesystems[1510]: Found sda3
Jan 13 21:16:55.563817 extend-filesystems[1510]: Found usr
Jan 13 21:16:55.563817 extend-filesystems[1510]: Found sda4
Jan 13 21:16:55.563817 extend-filesystems[1510]: Found sda6
Jan 13 21:16:55.563817 extend-filesystems[1510]: Found sda7
Jan 13 21:16:55.563817 extend-filesystems[1510]: Found sda9
Jan 13 21:16:55.563817 extend-filesystems[1510]: Checking size of /dev/sda9
Jan 13 21:16:55.569737 dbus-daemon[1508]: [system] SELinux support is enabled
Jan 13 21:16:55.571548 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:16:55.573584 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:16:55.575275 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:16:55.577452 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Jan 13 21:16:55.577876 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:16:55.585244 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:16:55.585354 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:16:55.585523 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:16:55.585611 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:16:55.588115 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:16:55.588645 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:16:55.589542 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Jan 13 21:16:55.595488 extend-filesystems[1510]: Old size kept for /dev/sda9
Jan 13 21:16:55.595488 extend-filesystems[1510]: Found sr0
Jan 13 21:16:55.598195 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 21:16:55.600481 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1386)
Jan 13 21:16:55.600570 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 21:16:55.603720 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 21:16:55.603741 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 21:16:55.603914 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 21:16:55.603928 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 21:16:55.615419 jq[1528]: true
Jan 13 21:16:55.615842 (ntainerd)[1541]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:16:55.620223 update_engine[1525]: I20250113 21:16:55.620071 1525 main.cc:92] Flatcar Update Engine starting
Jan 13 21:16:55.623207 unknown[1535]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Jan 13 21:16:55.626458 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Jan 13 21:16:55.627171 unknown[1535]: Core dump limit set to -1
Jan 13 21:16:55.633103 update_engine[1525]: I20250113 21:16:55.631975 1525 update_check_scheduler.cc:74] Next update check in 9m29s
Jan 13 21:16:55.638518 kernel: NET: Registered PF_VSOCK protocol family
Jan 13 21:16:55.640625 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:16:55.647676 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 21:16:55.652580 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Jan 13 21:16:55.653336 jq[1548]: true
Jan 13 21:16:55.664500 tar[1532]: linux-amd64/helm
Jan 13 21:16:55.666856 systemd-logind[1522]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 21:16:55.667535 systemd-logind[1522]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 21:16:55.668052 systemd-logind[1522]: New seat seat0.
Jan 13 21:16:55.670679 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:16:55.759132 locksmithd[1552]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 21:16:55.809656 bash[1571]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:16:55.810498 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 21:16:55.811033 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 21:16:55.832705 sshd_keygen[1547]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 21:16:55.854937 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 21:16:55.863686 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 21:16:55.872667 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 21:16:55.872793 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 21:16:55.882738 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 21:16:55.894745 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 21:16:55.903026 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 21:16:55.906523 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 21:16:55.906749 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 21:16:55.972675 containerd[1541]: time="2025-01-13T21:16:55.972626601Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 13 21:16:55.991659 containerd[1541]: time="2025-01-13T21:16:55.991530550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:16:55.992602 containerd[1541]: time="2025-01-13T21:16:55.992585986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:16:55.992832 containerd[1541]: time="2025-01-13T21:16:55.992636326Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 21:16:55.992832 containerd[1541]: time="2025-01-13T21:16:55.992648882Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 21:16:55.992832 containerd[1541]: time="2025-01-13T21:16:55.992744422Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 21:16:55.992832 containerd[1541]: time="2025-01-13T21:16:55.992754123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 21:16:55.992832 containerd[1541]: time="2025-01-13T21:16:55.992791067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:16:55.992832 containerd[1541]: time="2025-01-13T21:16:55.992799447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:16:55.993004 containerd[1541]: time="2025-01-13T21:16:55.992993261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:16:55.993037 containerd[1541]: time="2025-01-13T21:16:55.993030793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 21:16:55.993070 containerd[1541]: time="2025-01-13T21:16:55.993062182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:16:55.993103 containerd[1541]: time="2025-01-13T21:16:55.993096530Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 21:16:55.993304 containerd[1541]: time="2025-01-13T21:16:55.993167557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:16:55.993304 containerd[1541]: time="2025-01-13T21:16:55.993288394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:16:55.993410 containerd[1541]: time="2025-01-13T21:16:55.993399748Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:16:55.993447 containerd[1541]: time="2025-01-13T21:16:55.993439890Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 21:16:55.993522 containerd[1541]: time="2025-01-13T21:16:55.993514026Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 21:16:55.993582 containerd[1541]: time="2025-01-13T21:16:55.993574142Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 21:16:56.000433 containerd[1541]: time="2025-01-13T21:16:55.998717320Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 21:16:56.000433 containerd[1541]: time="2025-01-13T21:16:55.998742079Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 21:16:56.000433 containerd[1541]: time="2025-01-13T21:16:55.998752908Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 21:16:56.000433 containerd[1541]: time="2025-01-13T21:16:55.998765106Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 21:16:56.000433 containerd[1541]: time="2025-01-13T21:16:55.998773540Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 21:16:56.000433 containerd[1541]: time="2025-01-13T21:16:55.998839927Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 21:16:56.000433 containerd[1541]: time="2025-01-13T21:16:55.998975233Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 21:16:56.000433 containerd[1541]: time="2025-01-13T21:16:55.999029049Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 21:16:56.000433 containerd[1541]: time="2025-01-13T21:16:55.999038721Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 21:16:56.000433 containerd[1541]: time="2025-01-13T21:16:55.999046628Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 21:16:56.000433 containerd[1541]: time="2025-01-13T21:16:55.999054295Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 21:16:56.000433 containerd[1541]: time="2025-01-13T21:16:55.999061837Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 21:16:56.000433 containerd[1541]: time="2025-01-13T21:16:55.999068829Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 21:16:56.000433 containerd[1541]: time="2025-01-13T21:16:55.999076083Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 21:16:56.000267 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 21:16:56.000712 containerd[1541]: time="2025-01-13T21:16:55.999084354Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 21:16:56.000712 containerd[1541]: time="2025-01-13T21:16:55.999091586Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 21:16:56.000712 containerd[1541]: time="2025-01-13T21:16:55.999098180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 21:16:56.000712 containerd[1541]: time="2025-01-13T21:16:55.999104564Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 21:16:56.000712 containerd[1541]: time="2025-01-13T21:16:55.999115419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000712 containerd[1541]: time="2025-01-13T21:16:55.999122788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000712 containerd[1541]: time="2025-01-13T21:16:55.999129832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000712 containerd[1541]: time="2025-01-13T21:16:55.999137233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000712 containerd[1541]: time="2025-01-13T21:16:55.999145076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000712 containerd[1541]: time="2025-01-13T21:16:55.999152916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000712 containerd[1541]: time="2025-01-13T21:16:55.999159225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000712 containerd[1541]: time="2025-01-13T21:16:55.999166335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000712 containerd[1541]: time="2025-01-13T21:16:55.999173052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000712 containerd[1541]: time="2025-01-13T21:16:55.999181095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000892 containerd[1541]: time="2025-01-13T21:16:55.999187708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000892 containerd[1541]: time="2025-01-13T21:16:55.999194395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000892 containerd[1541]: time="2025-01-13T21:16:55.999201655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000892 containerd[1541]: time="2025-01-13T21:16:55.999210383Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 21:16:56.000892 containerd[1541]: time="2025-01-13T21:16:55.999221058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000892 containerd[1541]: time="2025-01-13T21:16:55.999227680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000892 containerd[1541]: time="2025-01-13T21:16:55.999237536Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 21:16:56.000892 containerd[1541]: time="2025-01-13T21:16:55.999261100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 21:16:56.000892 containerd[1541]: time="2025-01-13T21:16:55.999271290Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 21:16:56.000892 containerd[1541]: time="2025-01-13T21:16:55.999278005Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 21:16:56.000892 containerd[1541]: time="2025-01-13T21:16:55.999284676Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 21:16:56.000892 containerd[1541]: time="2025-01-13T21:16:55.999289999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.000892 containerd[1541]: time="2025-01-13T21:16:55.999296996Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 21:16:56.000892 containerd[1541]: time="2025-01-13T21:16:55.999305353Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 21:16:56.001067 containerd[1541]: time="2025-01-13T21:16:55.999313163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 21:16:56.001081 containerd[1541]: time="2025-01-13T21:16:55.999481683Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 21:16:56.001081 containerd[1541]: time="2025-01-13T21:16:55.999517565Z" level=info msg="Connect containerd service"
Jan 13 21:16:56.001081 containerd[1541]: time="2025-01-13T21:16:55.999542633Z" level=info msg="using legacy CRI server"
Jan 13 21:16:56.001081 containerd[1541]: time="2025-01-13T21:16:55.999547026Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 21:16:56.001081 containerd[1541]: time="2025-01-13T21:16:55.999593457Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 21:16:56.001081 containerd[1541]: time="2025-01-13T21:16:55.999903480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 21:16:56.001081 containerd[1541]: time="2025-01-13T21:16:56.000047639Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 21:16:56.001081 containerd[1541]: time="2025-01-13T21:16:56.000072137Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 21:16:56.001081 containerd[1541]: time="2025-01-13T21:16:56.000116040Z" level=info msg="Start subscribing containerd event"
Jan 13 21:16:56.001081 containerd[1541]: time="2025-01-13T21:16:56.000135581Z" level=info msg="Start recovering state"
Jan 13 21:16:56.001081 containerd[1541]: time="2025-01-13T21:16:56.000166001Z" level=info msg="Start event monitor"
Jan 13 21:16:56.001081 containerd[1541]: time="2025-01-13T21:16:56.000174317Z" level=info msg="Start snapshots syncer"
Jan 13 21:16:56.001081 containerd[1541]: time="2025-01-13T21:16:56.000178834Z" level=info msg="Start cni network conf syncer for default"
Jan 13 21:16:56.001081 containerd[1541]: time="2025-01-13T21:16:56.000182568Z" level=info msg="Start streaming server"
Jan 13 21:16:56.001081 containerd[1541]: time="2025-01-13T21:16:56.000212098Z" level=info msg="containerd successfully booted in 0.028120s"
Jan 13 21:16:56.112123 tar[1532]: linux-amd64/LICENSE
Jan 13 21:16:56.112123 tar[1532]: linux-amd64/README.md
Jan 13 21:16:56.123905 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 21:16:57.003604 systemd-networkd[1447]: ens192: Gained IPv6LL
Jan 13 21:16:57.003974 systemd-timesyncd[1452]: Network configuration changed, trying to establish connection.
Jan 13 21:16:57.005159 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 21:16:57.006013 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 21:16:57.010743 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
Jan 13 21:16:57.013044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:16:57.016541 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 21:16:57.049000 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 21:16:57.051615 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 21:16:57.051799 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Jan 13 21:16:57.052240 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 21:16:57.758307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:16:57.758719 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 21:16:57.760493 systemd[1]: Startup finished in 1.004s (kernel) + 5.153s (initrd) + 4.002s (userspace) = 10.161s.
Jan 13 21:16:57.761356 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:16:57.787452 login[1616]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 21:16:57.788096 login[1612]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 21:16:57.793478 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 21:16:57.797657 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 21:16:57.799742 systemd-logind[1522]: New session 2 of user core.
Jan 13 21:16:57.803692 systemd-logind[1522]: New session 1 of user core.
Jan 13 21:16:57.808851 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 21:16:57.813624 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 21:16:57.815150 (systemd)[1694]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 21:16:57.911042 systemd[1694]: Queued start job for default target default.target.
Jan 13 21:16:57.921339 systemd[1694]: Created slice app.slice - User Application Slice.
Jan 13 21:16:57.921358 systemd[1694]: Reached target paths.target - Paths.
Jan 13 21:16:57.921367 systemd[1694]: Reached target timers.target - Timers.
Jan 13 21:16:57.924548 systemd[1694]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:16:57.928518 systemd[1694]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:16:57.928865 systemd[1694]: Reached target sockets.target - Sockets. Jan 13 21:16:57.928875 systemd[1694]: Reached target basic.target - Basic System. Jan 13 21:16:57.928897 systemd[1694]: Reached target default.target - Main User Target. Jan 13 21:16:57.928914 systemd[1694]: Startup finished in 110ms. Jan 13 21:16:57.929226 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:16:57.936585 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:16:57.937194 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:16:58.392406 kubelet[1687]: E0113 21:16:58.392370 1687 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:16:58.394050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:16:58.394251 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:17:08.491052 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:17:08.499709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:17:08.557417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:17:08.559760 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:17:08.623167 kubelet[1739]: E0113 21:17:08.623134 1739 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:17:08.625618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:17:08.625758 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:17:18.741007 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:17:18.752646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:17:18.818579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:17:18.819753 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:17:18.887617 kubelet[1755]: E0113 21:17:18.887578 1755 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:17:18.889028 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:17:18.889129 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:18:39.618137 systemd-resolved[1451]: Clock change detected. Flushing caches. Jan 13 21:18:39.618155 systemd-timesyncd[1452]: Contacted time server 162.159.200.1:123 (2.flatcar.pool.ntp.org). 
Jan 13 21:18:39.618185 systemd-timesyncd[1452]: Initial clock synchronization to Mon 2025-01-13 21:18:39.618059 UTC. Jan 13 21:18:41.062942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 21:18:41.073570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:18:41.299851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:18:41.302953 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:18:41.337939 kubelet[1772]: E0113 21:18:41.337877 1772 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:18:41.339329 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:18:41.339471 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:18:47.914797 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:18:47.915697 systemd[1]: Started sshd@0-139.178.70.104:22-139.178.68.195:40362.service - OpenSSH per-connection server daemon (139.178.68.195:40362). Jan 13 21:18:47.947434 sshd[1781]: Accepted publickey for core from 139.178.68.195 port 40362 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ Jan 13 21:18:47.948286 sshd[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:47.952423 systemd-logind[1522]: New session 3 of user core. Jan 13 21:18:47.962547 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:18:48.014596 systemd[1]: Started sshd@1-139.178.70.104:22-139.178.68.195:40376.service - OpenSSH per-connection server daemon (139.178.68.195:40376). 
Jan 13 21:18:48.039995 sshd[1786]: Accepted publickey for core from 139.178.68.195 port 40376 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ Jan 13 21:18:48.040723 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:48.044050 systemd-logind[1522]: New session 4 of user core. Jan 13 21:18:48.047454 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:18:48.095162 sshd[1786]: pam_unix(sshd:session): session closed for user core Jan 13 21:18:48.100180 systemd[1]: sshd@1-139.178.70.104:22-139.178.68.195:40376.service: Deactivated successfully. Jan 13 21:18:48.100906 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:18:48.101321 systemd-logind[1522]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:18:48.102233 systemd[1]: Started sshd@2-139.178.70.104:22-139.178.68.195:40378.service - OpenSSH per-connection server daemon (139.178.68.195:40378). Jan 13 21:18:48.103182 systemd-logind[1522]: Removed session 4. Jan 13 21:18:48.132660 sshd[1793]: Accepted publickey for core from 139.178.68.195 port 40378 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ Jan 13 21:18:48.133298 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:48.135827 systemd-logind[1522]: New session 5 of user core. Jan 13 21:18:48.144446 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:18:48.190913 sshd[1793]: pam_unix(sshd:session): session closed for user core Jan 13 21:18:48.199115 systemd[1]: sshd@2-139.178.70.104:22-139.178.68.195:40378.service: Deactivated successfully. Jan 13 21:18:48.200082 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:18:48.200543 systemd-logind[1522]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:18:48.204565 systemd[1]: Started sshd@3-139.178.70.104:22-139.178.68.195:40390.service - OpenSSH per-connection server daemon (139.178.68.195:40390). 
Jan 13 21:18:48.205778 systemd-logind[1522]: Removed session 5. Jan 13 21:18:48.230672 sshd[1800]: Accepted publickey for core from 139.178.68.195 port 40390 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ Jan 13 21:18:48.231491 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:48.234266 systemd-logind[1522]: New session 6 of user core. Jan 13 21:18:48.246541 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:18:48.297895 sshd[1800]: pam_unix(sshd:session): session closed for user core Jan 13 21:18:48.306121 systemd[1]: sshd@3-139.178.70.104:22-139.178.68.195:40390.service: Deactivated successfully. Jan 13 21:18:48.307083 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:18:48.308076 systemd-logind[1522]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:18:48.308997 systemd[1]: Started sshd@4-139.178.70.104:22-139.178.68.195:40396.service - OpenSSH per-connection server daemon (139.178.68.195:40396). Jan 13 21:18:48.310699 systemd-logind[1522]: Removed session 6. Jan 13 21:18:48.341790 sshd[1807]: Accepted publickey for core from 139.178.68.195 port 40396 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ Jan 13 21:18:48.342627 sshd[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:48.346485 systemd-logind[1522]: New session 7 of user core. Jan 13 21:18:48.353549 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 13 21:18:48.411565 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:18:48.411769 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:18:48.422132 sudo[1810]: pam_unix(sudo:session): session closed for user root Jan 13 21:18:48.423269 sshd[1807]: pam_unix(sshd:session): session closed for user core Jan 13 21:18:48.433463 systemd[1]: sshd@4-139.178.70.104:22-139.178.68.195:40396.service: Deactivated successfully. Jan 13 21:18:48.434552 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:18:48.435636 systemd-logind[1522]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:18:48.436879 systemd[1]: Started sshd@5-139.178.70.104:22-139.178.68.195:40404.service - OpenSSH per-connection server daemon (139.178.68.195:40404). Jan 13 21:18:48.437821 systemd-logind[1522]: Removed session 7. Jan 13 21:18:48.463961 sshd[1815]: Accepted publickey for core from 139.178.68.195 port 40404 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ Jan 13 21:18:48.465082 sshd[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:48.467621 systemd-logind[1522]: New session 8 of user core. Jan 13 21:18:48.477531 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 13 21:18:48.527485 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:18:48.527689 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:18:48.529991 sudo[1819]: pam_unix(sudo:session): session closed for user root Jan 13 21:18:48.533523 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:18:48.533720 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:18:48.544680 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:18:48.545435 auditctl[1822]: No rules Jan 13 21:18:48.545662 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:18:48.545812 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:18:48.547582 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:18:48.569055 augenrules[1840]: No rules Jan 13 21:18:48.569395 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:18:48.570062 sudo[1818]: pam_unix(sudo:session): session closed for user root Jan 13 21:18:48.571537 sshd[1815]: pam_unix(sshd:session): session closed for user core Jan 13 21:18:48.579086 systemd[1]: sshd@5-139.178.70.104:22-139.178.68.195:40404.service: Deactivated successfully. Jan 13 21:18:48.579840 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:18:48.580226 systemd-logind[1522]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:18:48.583540 systemd[1]: Started sshd@6-139.178.70.104:22-139.178.68.195:40418.service - OpenSSH per-connection server daemon (139.178.68.195:40418). Jan 13 21:18:48.584438 systemd-logind[1522]: Removed session 8. 
Jan 13 21:18:48.605015 sshd[1848]: Accepted publickey for core from 139.178.68.195 port 40418 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ Jan 13 21:18:48.605860 sshd[1848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:48.609188 systemd-logind[1522]: New session 9 of user core. Jan 13 21:18:48.623559 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:18:48.673059 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:18:48.673280 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:18:48.944602 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:18:48.944703 (dockerd)[1866]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:18:49.192706 dockerd[1866]: time="2025-01-13T21:18:49.192671508Z" level=info msg="Starting up" Jan 13 21:18:49.269349 dockerd[1866]: time="2025-01-13T21:18:49.269169199Z" level=info msg="Loading containers: start." Jan 13 21:18:49.336461 kernel: Initializing XFRM netlink socket Jan 13 21:18:49.385006 systemd-networkd[1447]: docker0: Link UP Jan 13 21:18:49.396014 dockerd[1866]: time="2025-01-13T21:18:49.395993562Z" level=info msg="Loading containers: done." Jan 13 21:18:49.403127 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3159830500-merged.mount: Deactivated successfully. 
Jan 13 21:18:49.403715 dockerd[1866]: time="2025-01-13T21:18:49.403688830Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:18:49.403758 dockerd[1866]: time="2025-01-13T21:18:49.403748829Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:18:49.403818 dockerd[1866]: time="2025-01-13T21:18:49.403804898Z" level=info msg="Daemon has completed initialization" Jan 13 21:18:49.420164 dockerd[1866]: time="2025-01-13T21:18:49.420138275Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:18:49.420626 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:18:50.202214 containerd[1541]: time="2025-01-13T21:18:50.202188419Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 21:18:50.858351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3629650921.mount: Deactivated successfully. Jan 13 21:18:51.562834 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 13 21:18:51.573488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:18:51.624120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:18:51.626486 (kubelet)[2069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:18:51.649509 kubelet[2069]: E0113 21:18:51.649454 2069 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:18:51.650813 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:18:51.650901 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:18:52.474455 containerd[1541]: time="2025-01-13T21:18:52.474351023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:52.476662 containerd[1541]: time="2025-01-13T21:18:52.476636320Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Jan 13 21:18:52.478737 containerd[1541]: time="2025-01-13T21:18:52.478713769Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:52.481183 containerd[1541]: time="2025-01-13T21:18:52.481148912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:52.481842 containerd[1541]: time="2025-01-13T21:18:52.481648366Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.279434705s" Jan 13 21:18:52.481842 containerd[1541]: time="2025-01-13T21:18:52.481669499Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Jan 13 21:18:52.493973 containerd[1541]: time="2025-01-13T21:18:52.493901474Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 21:18:52.675439 update_engine[1525]: I20250113 21:18:52.675396 1525 update_attempter.cc:509] Updating boot flags... Jan 13 21:18:52.706425 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2094) Jan 13 21:18:52.743001 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2094) Jan 13 21:18:54.054056 containerd[1541]: time="2025-01-13T21:18:54.054028145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:54.054433 containerd[1541]: time="2025-01-13T21:18:54.054404042Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Jan 13 21:18:54.055024 containerd[1541]: time="2025-01-13T21:18:54.055010527Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:54.056707 containerd[1541]: time="2025-01-13T21:18:54.056691388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:54.057466 containerd[1541]: time="2025-01-13T21:18:54.057447695Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.563527779s" Jan 13 21:18:54.057500 containerd[1541]: time="2025-01-13T21:18:54.057465544Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Jan 13 21:18:54.070154 containerd[1541]: time="2025-01-13T21:18:54.070127790Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 21:18:55.606531 containerd[1541]: time="2025-01-13T21:18:55.606498080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:55.610756 containerd[1541]: time="2025-01-13T21:18:55.610604810Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Jan 13 21:18:55.611920 containerd[1541]: time="2025-01-13T21:18:55.611889710Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:55.613411 containerd[1541]: time="2025-01-13T21:18:55.613381050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:55.615389 containerd[1541]: time="2025-01-13T21:18:55.614927139Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id 
\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.544778989s" Jan 13 21:18:55.615389 containerd[1541]: time="2025-01-13T21:18:55.614945019Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Jan 13 21:18:55.630038 containerd[1541]: time="2025-01-13T21:18:55.630020283Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 21:18:57.514861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3955767472.mount: Deactivated successfully. Jan 13 21:18:57.865544 containerd[1541]: time="2025-01-13T21:18:57.865514849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:57.866131 containerd[1541]: time="2025-01-13T21:18:57.866111925Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Jan 13 21:18:57.866622 containerd[1541]: time="2025-01-13T21:18:57.866602635Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:57.867639 containerd[1541]: time="2025-01-13T21:18:57.867607506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:57.868050 containerd[1541]: time="2025-01-13T21:18:57.868030917Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag 
\"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.237890657s" Jan 13 21:18:57.868092 containerd[1541]: time="2025-01-13T21:18:57.868051442Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 21:18:57.884290 containerd[1541]: time="2025-01-13T21:18:57.884262972Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:18:58.479662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount944607117.mount: Deactivated successfully. Jan 13 21:18:59.490386 containerd[1541]: time="2025-01-13T21:18:59.490306566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:59.503387 containerd[1541]: time="2025-01-13T21:18:59.503338639Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 21:18:59.517768 containerd[1541]: time="2025-01-13T21:18:59.517709295Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:59.532404 containerd[1541]: time="2025-01-13T21:18:59.532319921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:59.533225 containerd[1541]: time="2025-01-13T21:18:59.532896534Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.64860405s" Jan 13 21:18:59.533225 containerd[1541]: time="2025-01-13T21:18:59.532924365Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:18:59.551597 containerd[1541]: time="2025-01-13T21:18:59.551461884Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:19:00.378792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2145618139.mount: Deactivated successfully. Jan 13 21:19:00.418385 containerd[1541]: time="2025-01-13T21:19:00.418259082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:00.422931 containerd[1541]: time="2025-01-13T21:19:00.422907419Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 21:19:00.427040 containerd[1541]: time="2025-01-13T21:19:00.427014307Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:00.429786 containerd[1541]: time="2025-01-13T21:19:00.429756990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:00.430472 containerd[1541]: time="2025-01-13T21:19:00.430216287Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 878.726898ms" Jan 13 
21:19:00.430472 containerd[1541]: time="2025-01-13T21:19:00.430234711Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 21:19:00.443457 containerd[1541]: time="2025-01-13T21:19:00.443014644Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 21:19:01.184063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount897806549.mount: Deactivated successfully. Jan 13 21:19:01.812802 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 13 21:19:01.822535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:19:02.796055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:19:02.799302 (kubelet)[2240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:19:02.986231 kubelet[2240]: E0113 21:19:02.986193 2240 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:19:02.987857 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:19:02.987956 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:19:03.870999 containerd[1541]: time="2025-01-13T21:19:03.870955474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:03.882566 containerd[1541]: time="2025-01-13T21:19:03.882528451Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 13 21:19:03.889912 containerd[1541]: time="2025-01-13T21:19:03.889875631Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:03.899217 containerd[1541]: time="2025-01-13T21:19:03.899177964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:03.900068 containerd[1541]: time="2025-01-13T21:19:03.899980644Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.456912255s" Jan 13 21:19:03.900068 containerd[1541]: time="2025-01-13T21:19:03.900004028Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 13 21:19:06.045832 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:19:06.051569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:19:06.063661 systemd[1]: Reloading requested from client PID 2312 ('systemctl') (unit session-9.scope)... Jan 13 21:19:06.063675 systemd[1]: Reloading... 
Jan 13 21:19:06.135391 zram_generator::config[2349]: No configuration found. Jan 13 21:19:06.189847 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 13 21:19:06.204837 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:19:06.247570 systemd[1]: Reloading finished in 183 ms. Jan 13 21:19:06.272798 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:19:06.272844 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:19:06.272962 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:19:06.277584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:19:06.577403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:19:06.586657 (kubelet)[2417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:19:06.642163 kubelet[2417]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:19:06.642163 kubelet[2417]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:19:06.642163 kubelet[2417]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 21:19:06.654946 kubelet[2417]: I0113 21:19:06.654526    2417 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:19:06.985732 kubelet[2417]: I0113 21:19:06.985664    2417 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 13 21:19:06.985732 kubelet[2417]: I0113 21:19:06.985684    2417 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:19:06.985827 kubelet[2417]: I0113 21:19:06.985823    2417 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 13 21:19:07.062154 kubelet[2417]: I0113 21:19:07.062125    2417 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:19:07.076594 kubelet[2417]: E0113 21:19:07.076556    2417 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.104:6443: connect: connection refused
Jan 13 21:19:07.099285 kubelet[2417]: I0113 21:19:07.099269    2417 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:19:07.115055 kubelet[2417]: I0113 21:19:07.115006    2417 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:19:07.120612 kubelet[2417]: I0113 21:19:07.115037    2417 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 21:19:07.125771 kubelet[2417]: I0113 21:19:07.125749    2417 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:19:07.125771 kubelet[2417]: I0113 21:19:07.125768    2417 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 21:19:07.125888 kubelet[2417]: I0113 21:19:07.125871    2417 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:19:07.130264 kubelet[2417]: I0113 21:19:07.130247    2417 kubelet.go:400] "Attempting to sync node with API server"
Jan 13 21:19:07.130264 kubelet[2417]: I0113 21:19:07.130263    2417 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 21:19:07.130332 kubelet[2417]: I0113 21:19:07.130285    2417 kubelet.go:312] "Adding apiserver pod source"
Jan 13 21:19:07.130332 kubelet[2417]: I0113 21:19:07.130306    2417 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 21:19:07.139575 kubelet[2417]: W0113 21:19:07.139289    2417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Jan 13 21:19:07.139575 kubelet[2417]: E0113 21:19:07.139326    2417 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Jan 13 21:19:07.139829 kubelet[2417]: W0113 21:19:07.139803    2417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Jan 13 21:19:07.139887 kubelet[2417]: E0113 21:19:07.139879    2417 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Jan 13 21:19:07.140156 kubelet[2417]: I0113 21:19:07.139978    2417 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 13 21:19:07.142287 kubelet[2417]: I0113 21:19:07.141577    2417 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 21:19:07.142287 kubelet[2417]: W0113 21:19:07.141625    2417 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 21:19:07.142287 kubelet[2417]: I0113 21:19:07.142203    2417 server.go:1264] "Started kubelet"
Jan 13 21:19:07.143554 kubelet[2417]: I0113 21:19:07.143510    2417 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 21:19:07.149448 kubelet[2417]: I0113 21:19:07.149161    2417 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 21:19:07.150687 kubelet[2417]: I0113 21:19:07.150645    2417 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 21:19:07.150921 kubelet[2417]: I0113 21:19:07.150910    2417 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 21:19:07.151445 kubelet[2417]: E0113 21:19:07.151052    2417 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.104:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5d42a630be69  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:19:07.142184553 +0000 UTC m=+0.553067122,LastTimestamp:2025-01-13 21:19:07.142184553 +0000 UTC m=+0.553067122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 13 21:19:07.153381 kubelet[2417]: I0113 21:19:07.151590    2417 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 21:19:07.155340 kubelet[2417]: I0113 21:19:07.155327    2417 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 21:19:07.159500 kubelet[2417]: I0113 21:19:07.159487    2417 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 21:19:07.159615 kubelet[2417]: I0113 21:19:07.159608    2417 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 21:19:07.159949 kubelet[2417]: W0113 21:19:07.159926    2417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Jan 13 21:19:07.160006 kubelet[2417]: E0113 21:19:07.159999    2417 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Jan 13 21:19:07.160849 kubelet[2417]: I0113 21:19:07.160836    2417 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:19:07.160956 kubelet[2417]: I0113 21:19:07.160945    2417 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:19:07.161225 kubelet[2417]: E0113 21:19:07.161211    2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="200ms"
Jan 13 21:19:07.187520 kubelet[2417]: I0113 21:19:07.187504    2417 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:19:07.207291 kubelet[2417]: E0113 21:19:07.207270    2417 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 21:19:07.213320 kubelet[2417]: I0113 21:19:07.213285    2417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:19:07.216054 kubelet[2417]: I0113 21:19:07.215855    2417 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:19:07.216054 kubelet[2417]: I0113 21:19:07.215865    2417 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:19:07.216054 kubelet[2417]: I0113 21:19:07.215879    2417 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:19:07.217785 kubelet[2417]: I0113 21:19:07.217503    2417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:19:07.217785 kubelet[2417]: I0113 21:19:07.217531    2417 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:19:07.217785 kubelet[2417]: I0113 21:19:07.217546    2417 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 13 21:19:07.217785 kubelet[2417]: E0113 21:19:07.217592    2417 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 21:19:07.218790 kubelet[2417]: W0113 21:19:07.218759    2417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Jan 13 21:19:07.220061 kubelet[2417]: E0113 21:19:07.218835    2417 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Jan 13 21:19:07.222462 kubelet[2417]: I0113 21:19:07.222342    2417 policy_none.go:49] "None policy: Start"
Jan 13 21:19:07.222715 kubelet[2417]: I0113 21:19:07.222700    2417 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:19:07.222715 kubelet[2417]: I0113 21:19:07.222716    2417 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:19:07.230815 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 21:19:07.249440 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 21:19:07.254900 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 21:19:07.261634 kubelet[2417]: I0113 21:19:07.261598    2417 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:19:07.261879 kubelet[2417]: E0113 21:19:07.261861    2417 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost"
Jan 13 21:19:07.263228 kubelet[2417]: I0113 21:19:07.263057    2417 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 21:19:07.263228 kubelet[2417]: I0113 21:19:07.263181    2417 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 21:19:07.263294 kubelet[2417]: I0113 21:19:07.263265    2417 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 21:19:07.265401 kubelet[2417]: E0113 21:19:07.265388    2417 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 13 21:19:07.318172 kubelet[2417]: I0113 21:19:07.318134    2417 topology_manager.go:215] "Topology Admit Handler" podUID="952579084e0b77844e6e744ba63711d7" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 13 21:19:07.318987 kubelet[2417]: I0113 21:19:07.318976    2417 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 13 21:19:07.319473 kubelet[2417]: I0113 21:19:07.319464    2417 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 13 21:19:07.329104 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice.
Jan 13 21:19:07.338497 systemd[1]: Created slice kubepods-burstable-pod952579084e0b77844e6e744ba63711d7.slice - libcontainer container kubepods-burstable-pod952579084e0b77844e6e744ba63711d7.slice.
Jan 13 21:19:07.354553 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice.
Jan 13 21:19:07.361898 kubelet[2417]: E0113 21:19:07.361869    2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="400ms"
Jan 13 21:19:07.460755 kubelet[2417]: I0113 21:19:07.460535    2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/952579084e0b77844e6e744ba63711d7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"952579084e0b77844e6e744ba63711d7\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:19:07.460755 kubelet[2417]: I0113 21:19:07.460572    2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:19:07.460755 kubelet[2417]: I0113 21:19:07.460596    2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:19:07.460755 kubelet[2417]: I0113 21:19:07.460609    2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:19:07.460755 kubelet[2417]: I0113 21:19:07.460622    2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:19:07.461003 kubelet[2417]: I0113 21:19:07.460634    2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/952579084e0b77844e6e744ba63711d7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"952579084e0b77844e6e744ba63711d7\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:19:07.461003 kubelet[2417]: I0113 21:19:07.460644    2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/952579084e0b77844e6e744ba63711d7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"952579084e0b77844e6e744ba63711d7\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:19:07.461003 kubelet[2417]: I0113 21:19:07.460654    2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:19:07.461003 kubelet[2417]: I0113 21:19:07.460665    2417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost"
Jan 13 21:19:07.463118 kubelet[2417]: I0113 21:19:07.463100    2417 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:19:07.463320 kubelet[2417]: E0113 21:19:07.463302    2417 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost"
Jan 13 21:19:07.636731 containerd[1541]: time="2025-01-13T21:19:07.636680283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}"
Jan 13 21:19:07.644728 kubelet[2417]: E0113 21:19:07.644632    2417 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.104:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5d42a630be69  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:19:07.142184553 +0000 UTC m=+0.553067122,LastTimestamp:2025-01-13 21:19:07.142184553 +0000 UTC m=+0.553067122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 13 21:19:07.653476 containerd[1541]: time="2025-01-13T21:19:07.653159284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:952579084e0b77844e6e744ba63711d7,Namespace:kube-system,Attempt:0,}"
Jan 13 21:19:07.657003 containerd[1541]: time="2025-01-13T21:19:07.656972664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}"
Jan 13 21:19:07.762446 kubelet[2417]: E0113 21:19:07.762403    2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="800ms"
Jan 13 21:19:07.864447 kubelet[2417]: I0113 21:19:07.864421    2417 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:19:07.864675 kubelet[2417]: E0113 21:19:07.864656    2417 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost"
Jan 13 21:19:08.043772 kubelet[2417]: W0113 21:19:08.043703    2417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Jan 13 21:19:08.043772 kubelet[2417]: E0113 21:19:08.043741    2417 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Jan 13 21:19:08.178177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount67898709.mount: Deactivated successfully.
Jan 13 21:19:08.180545 containerd[1541]: time="2025-01-13T21:19:08.180505992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:19:08.181615 containerd[1541]: time="2025-01-13T21:19:08.181571803Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 13 21:19:08.182328 containerd[1541]: time="2025-01-13T21:19:08.182297678Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:19:08.184727 containerd[1541]: time="2025-01-13T21:19:08.184699141Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:19:08.185995 containerd[1541]: time="2025-01-13T21:19:08.185949875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 21:19:08.186888 containerd[1541]: time="2025-01-13T21:19:08.186861339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:19:08.187729 containerd[1541]: time="2025-01-13T21:19:08.187691015Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 534.459061ms"
Jan 13 21:19:08.190378 containerd[1541]: time="2025-01-13T21:19:08.188475723Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 21:19:08.190378 containerd[1541]: time="2025-01-13T21:19:08.188906330Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:19:08.192341 containerd[1541]: time="2025-01-13T21:19:08.192314879Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 555.572239ms"
Jan 13 21:19:08.198057 containerd[1541]: time="2025-01-13T21:19:08.198028721Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 541.010175ms"
Jan 13 21:19:08.330443 containerd[1541]: time="2025-01-13T21:19:08.330182721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:19:08.330443 containerd[1541]: time="2025-01-13T21:19:08.330217968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:19:08.330443 containerd[1541]: time="2025-01-13T21:19:08.330226865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:08.330443 containerd[1541]: time="2025-01-13T21:19:08.330291747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:08.331198 containerd[1541]: time="2025-01-13T21:19:08.330940856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:19:08.331198 containerd[1541]: time="2025-01-13T21:19:08.331100105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:19:08.331198 containerd[1541]: time="2025-01-13T21:19:08.331118047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:08.331814 containerd[1541]: time="2025-01-13T21:19:08.331358377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:08.335818 containerd[1541]: time="2025-01-13T21:19:08.335235404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:19:08.335818 containerd[1541]: time="2025-01-13T21:19:08.335290312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:19:08.335818 containerd[1541]: time="2025-01-13T21:19:08.335302824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:08.335818 containerd[1541]: time="2025-01-13T21:19:08.335347959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:08.352473 systemd[1]: Started cri-containerd-0a02b2ce79f68a67b494b68b81249ef8974f2bc669b61c7ef49db2f8073d6160.scope - libcontainer container 0a02b2ce79f68a67b494b68b81249ef8974f2bc669b61c7ef49db2f8073d6160.
Jan 13 21:19:08.356416 systemd[1]: Started cri-containerd-5b108b48e8411d8ea44676c246130e662bb5eabc1a8412d9a663db2ebc10b92a.scope - libcontainer container 5b108b48e8411d8ea44676c246130e662bb5eabc1a8412d9a663db2ebc10b92a.
Jan 13 21:19:08.358044 systemd[1]: Started cri-containerd-969e354872e3cedc98318cefb1bd44199386086141eedcf9efa7ccbafcbc5b85.scope - libcontainer container 969e354872e3cedc98318cefb1bd44199386086141eedcf9efa7ccbafcbc5b85.
Jan 13 21:19:08.401738 containerd[1541]: time="2025-01-13T21:19:08.401641270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b108b48e8411d8ea44676c246130e662bb5eabc1a8412d9a663db2ebc10b92a\""
Jan 13 21:19:08.402016 containerd[1541]: time="2025-01-13T21:19:08.402003211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"969e354872e3cedc98318cefb1bd44199386086141eedcf9efa7ccbafcbc5b85\""
Jan 13 21:19:08.406346 containerd[1541]: time="2025-01-13T21:19:08.406315731Z" level=info msg="CreateContainer within sandbox \"5b108b48e8411d8ea44676c246130e662bb5eabc1a8412d9a663db2ebc10b92a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 13 21:19:08.406625 containerd[1541]: time="2025-01-13T21:19:08.406535551Z" level=info msg="CreateContainer within sandbox \"969e354872e3cedc98318cefb1bd44199386086141eedcf9efa7ccbafcbc5b85\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 13 21:19:08.418875 containerd[1541]: time="2025-01-13T21:19:08.418839802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:952579084e0b77844e6e744ba63711d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a02b2ce79f68a67b494b68b81249ef8974f2bc669b61c7ef49db2f8073d6160\""
Jan 13 21:19:08.421193 containerd[1541]: time="2025-01-13T21:19:08.421153478Z" level=info msg="CreateContainer within sandbox \"0a02b2ce79f68a67b494b68b81249ef8974f2bc669b61c7ef49db2f8073d6160\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 13 21:19:08.459069 containerd[1541]: time="2025-01-13T21:19:08.459037287Z" level=info msg="CreateContainer within sandbox \"5b108b48e8411d8ea44676c246130e662bb5eabc1a8412d9a663db2ebc10b92a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2123bebd68b15a0abc4cb35357c406d0fe68c6b117673f25653f95034f9a1420\""
Jan 13 21:19:08.459622 containerd[1541]: time="2025-01-13T21:19:08.459600994Z" level=info msg="StartContainer for \"2123bebd68b15a0abc4cb35357c406d0fe68c6b117673f25653f95034f9a1420\""
Jan 13 21:19:08.460493 kubelet[2417]: W0113 21:19:08.460413    2417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Jan 13 21:19:08.460493 kubelet[2417]: E0113 21:19:08.460472    2417 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Jan 13 21:19:08.461979 containerd[1541]: time="2025-01-13T21:19:08.461956195Z" level=info msg="CreateContainer within sandbox \"969e354872e3cedc98318cefb1bd44199386086141eedcf9efa7ccbafcbc5b85\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"42d4b164bc0aca0447faa1e488fe6801945c03886c6505d614d531d04b295520\""
Jan 13 21:19:08.462463 containerd[1541]: time="2025-01-13T21:19:08.462450435Z" level=info msg="StartContainer for \"42d4b164bc0aca0447faa1e488fe6801945c03886c6505d614d531d04b295520\""
Jan 13 21:19:08.464551 containerd[1541]: time="2025-01-13T21:19:08.464530194Z" level=info msg="CreateContainer within sandbox \"0a02b2ce79f68a67b494b68b81249ef8974f2bc669b61c7ef49db2f8073d6160\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6536eb28abd6b13a9c11360fe8f683041756dfd6eca6d39043e70c340735fb19\""
Jan 13 21:19:08.464890 containerd[1541]: time="2025-01-13T21:19:08.464875678Z" level=info msg="StartContainer for \"6536eb28abd6b13a9c11360fe8f683041756dfd6eca6d39043e70c340735fb19\""
Jan 13 21:19:08.485725 systemd[1]: Started cri-containerd-2123bebd68b15a0abc4cb35357c406d0fe68c6b117673f25653f95034f9a1420.scope - libcontainer container 2123bebd68b15a0abc4cb35357c406d0fe68c6b117673f25653f95034f9a1420.
Jan 13 21:19:08.493533 systemd[1]: Started cri-containerd-42d4b164bc0aca0447faa1e488fe6801945c03886c6505d614d531d04b295520.scope - libcontainer container 42d4b164bc0aca0447faa1e488fe6801945c03886c6505d614d531d04b295520.
Jan 13 21:19:08.497622 systemd[1]: Started cri-containerd-6536eb28abd6b13a9c11360fe8f683041756dfd6eca6d39043e70c340735fb19.scope - libcontainer container 6536eb28abd6b13a9c11360fe8f683041756dfd6eca6d39043e70c340735fb19.
Jan 13 21:19:08.525384 kubelet[2417]: W0113 21:19:08.524048 2417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 13 21:19:08.525384 kubelet[2417]: E0113 21:19:08.524087 2417 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 13 21:19:08.536966 containerd[1541]: time="2025-01-13T21:19:08.536941434Z" level=info msg="StartContainer for \"42d4b164bc0aca0447faa1e488fe6801945c03886c6505d614d531d04b295520\" returns successfully" Jan 13 21:19:08.550683 containerd[1541]: time="2025-01-13T21:19:08.550645334Z" level=info msg="StartContainer for \"2123bebd68b15a0abc4cb35357c406d0fe68c6b117673f25653f95034f9a1420\" returns successfully" Jan 13 21:19:08.551290 containerd[1541]: time="2025-01-13T21:19:08.551204677Z" level=info msg="StartContainer for \"6536eb28abd6b13a9c11360fe8f683041756dfd6eca6d39043e70c340735fb19\" returns successfully" Jan 13 21:19:08.563394 kubelet[2417]: E0113 21:19:08.563336 2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="1.6s" Jan 13 21:19:08.617960 kubelet[2417]: W0113 21:19:08.617866 2417 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 13 21:19:08.617960 kubelet[2417]: E0113 21:19:08.617906 2417 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 13 21:19:08.667020 kubelet[2417]: I0113 21:19:08.666853 2417 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:19:08.667641 kubelet[2417]: E0113 21:19:08.667446 2417 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Jan 13 21:19:09.260934 kubelet[2417]: E0113 21:19:09.260913 2417 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.104:6443: connect: connection refused Jan 13 21:19:10.187351 kubelet[2417]: E0113 21:19:10.187318 2417 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 21:19:10.268413 kubelet[2417]: I0113 21:19:10.268398 2417 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:19:10.277984 kubelet[2417]: I0113 21:19:10.277961 2417 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:19:10.282641 kubelet[2417]: E0113 21:19:10.282590 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:10.383305 kubelet[2417]: E0113 21:19:10.383279 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:10.483989 kubelet[2417]: E0113 21:19:10.483904 2417 kubelet_node_status.go:462] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:10.584039 kubelet[2417]: E0113 21:19:10.584017 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:10.684915 kubelet[2417]: E0113 21:19:10.684891 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:10.785697 kubelet[2417]: E0113 21:19:10.785618 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:10.886275 kubelet[2417]: E0113 21:19:10.886250 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:10.986951 kubelet[2417]: E0113 21:19:10.986916 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:11.088013 kubelet[2417]: E0113 21:19:11.087956 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:11.188263 kubelet[2417]: E0113 21:19:11.188233 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:11.289306 kubelet[2417]: E0113 21:19:11.289277 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:11.389914 kubelet[2417]: E0113 21:19:11.389738 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:11.490678 kubelet[2417]: E0113 21:19:11.490650 2417 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:12.141172 kubelet[2417]: I0113 21:19:12.141153 2417 apiserver.go:52] "Watching apiserver" Jan 13 21:19:12.147055 systemd[1]: Reloading requested from client PID 2692 ('systemctl') 
(unit session-9.scope)... Jan 13 21:19:12.147078 systemd[1]: Reloading... Jan 13 21:19:12.160213 kubelet[2417]: I0113 21:19:12.160064 2417 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:19:12.199432 zram_generator::config[2729]: No configuration found. Jan 13 21:19:12.267801 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 13 21:19:12.282807 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:19:12.332579 systemd[1]: Reloading finished in 185 ms. Jan 13 21:19:12.356271 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:19:12.356411 kubelet[2417]: I0113 21:19:12.356350 2417 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:19:12.374899 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:19:12.375039 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:19:12.379551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:19:12.544555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:19:12.552587 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:19:12.695846 kubelet[2797]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 21:19:12.695846 kubelet[2797]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:19:12.695846 kubelet[2797]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:19:12.695846 kubelet[2797]: I0113 21:19:12.695686 2797 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:19:12.710379 kubelet[2797]: I0113 21:19:12.709677 2797 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:19:12.710379 kubelet[2797]: I0113 21:19:12.709689 2797 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:19:12.710379 kubelet[2797]: I0113 21:19:12.709789 2797 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:19:12.710471 kubelet[2797]: I0113 21:19:12.710460 2797 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:19:12.712190 kubelet[2797]: I0113 21:19:12.711880 2797 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:19:12.736169 kubelet[2797]: I0113 21:19:12.736149 2797 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:19:12.736308 kubelet[2797]: I0113 21:19:12.736280 2797 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:19:12.736428 kubelet[2797]: I0113 21:19:12.736308 2797 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:19:12.736490 kubelet[2797]: I0113 21:19:12.736437 2797 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 
21:19:12.736490 kubelet[2797]: I0113 21:19:12.736454 2797 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:19:12.736490 kubelet[2797]: I0113 21:19:12.736477 2797 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:19:12.736552 kubelet[2797]: I0113 21:19:12.736539 2797 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:19:12.736552 kubelet[2797]: I0113 21:19:12.736548 2797 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:19:12.736597 kubelet[2797]: I0113 21:19:12.736562 2797 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:19:12.736597 kubelet[2797]: I0113 21:19:12.736584 2797 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:19:12.757391 kubelet[2797]: I0113 21:19:12.757027 2797 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:19:12.757391 kubelet[2797]: I0113 21:19:12.757129 2797 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:19:12.757391 kubelet[2797]: I0113 21:19:12.757342 2797 server.go:1264] "Started kubelet" Jan 13 21:19:12.757912 kubelet[2797]: I0113 21:19:12.757888 2797 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:19:12.758203 kubelet[2797]: I0113 21:19:12.758088 2797 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:19:12.758203 kubelet[2797]: I0113 21:19:12.758109 2797 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:19:12.758947 kubelet[2797]: I0113 21:19:12.758937 2797 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:19:12.759722 kubelet[2797]: I0113 21:19:12.759404 2797 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:19:12.762713 kubelet[2797]: I0113 21:19:12.762701 2797 
volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:19:12.763822 kubelet[2797]: I0113 21:19:12.763627 2797 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:19:12.764114 kubelet[2797]: I0113 21:19:12.764107 2797 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:19:12.764238 kubelet[2797]: E0113 21:19:12.764228 2797 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:19:12.764663 kubelet[2797]: I0113 21:19:12.764650 2797 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:19:12.766011 kubelet[2797]: I0113 21:19:12.765998 2797 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:19:12.766011 kubelet[2797]: I0113 21:19:12.766007 2797 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:19:12.775946 kubelet[2797]: I0113 21:19:12.775880 2797 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:19:12.776618 kubelet[2797]: I0113 21:19:12.776446 2797 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:19:12.776618 kubelet[2797]: I0113 21:19:12.776461 2797 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:19:12.776618 kubelet[2797]: I0113 21:19:12.776473 2797 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:19:12.776618 kubelet[2797]: E0113 21:19:12.776495 2797 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:19:12.816998 kubelet[2797]: I0113 21:19:12.816983 2797 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:19:12.816998 kubelet[2797]: I0113 21:19:12.817000 2797 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:19:12.817083 kubelet[2797]: I0113 21:19:12.817012 2797 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:19:12.817103 kubelet[2797]: I0113 21:19:12.817097 2797 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:19:12.817120 kubelet[2797]: I0113 21:19:12.817103 2797 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:19:12.817120 kubelet[2797]: I0113 21:19:12.817113 2797 policy_none.go:49] "None policy: Start" Jan 13 21:19:12.817429 kubelet[2797]: I0113 21:19:12.817418 2797 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:19:12.817429 kubelet[2797]: I0113 21:19:12.817431 2797 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:19:12.817505 kubelet[2797]: I0113 21:19:12.817494 2797 state_mem.go:75] "Updated machine memory state" Jan 13 21:19:12.819949 kubelet[2797]: I0113 21:19:12.819936 2797 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:19:12.820039 kubelet[2797]: I0113 21:19:12.820019 2797 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:19:12.820194 kubelet[2797]: I0113 21:19:12.820071 2797 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:19:12.864219 kubelet[2797]: I0113 21:19:12.864200 2797 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:19:12.876919 kubelet[2797]: I0113 21:19:12.876884 2797 topology_manager.go:215] "Topology Admit Handler" podUID="952579084e0b77844e6e744ba63711d7" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:19:12.877006 kubelet[2797]: I0113 21:19:12.876951 2797 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:19:12.877006 kubelet[2797]: I0113 21:19:12.876991 2797 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:19:12.900186 kubelet[2797]: I0113 21:19:12.900067 2797 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 21:19:12.900186 kubelet[2797]: I0113 21:19:12.900144 2797 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:19:13.065345 kubelet[2797]: I0113 21:19:13.065290 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:19:13.065345 kubelet[2797]: I0113 21:19:13.065344 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:19:13.065506 kubelet[2797]: I0113 
21:19:13.065356 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:19:13.065506 kubelet[2797]: I0113 21:19:13.065386 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/952579084e0b77844e6e744ba63711d7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"952579084e0b77844e6e744ba63711d7\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:19:13.065506 kubelet[2797]: I0113 21:19:13.065397 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/952579084e0b77844e6e744ba63711d7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"952579084e0b77844e6e744ba63711d7\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:19:13.065506 kubelet[2797]: I0113 21:19:13.065406 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/952579084e0b77844e6e744ba63711d7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"952579084e0b77844e6e744ba63711d7\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:19:13.065506 kubelet[2797]: I0113 21:19:13.065414 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:19:13.065587 kubelet[2797]: 
I0113 21:19:13.065422 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:19:13.065587 kubelet[2797]: I0113 21:19:13.065453 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:19:13.739794 kubelet[2797]: I0113 21:19:13.739649 2797 apiserver.go:52] "Watching apiserver" Jan 13 21:19:13.766469 kubelet[2797]: I0113 21:19:13.766445 2797 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:19:13.795380 kubelet[2797]: E0113 21:19:13.794700 2797 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 21:19:13.854213 kubelet[2797]: I0113 21:19:13.853993 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.853979091 podStartE2EDuration="1.853979091s" podCreationTimestamp="2025-01-13 21:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:13.832403155 +0000 UTC m=+1.168008926" watchObservedRunningTime="2025-01-13 21:19:13.853979091 +0000 UTC m=+1.189584857" Jan 13 21:19:13.863099 kubelet[2797]: I0113 21:19:13.863025 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.863013464 
podStartE2EDuration="1.863013464s" podCreationTimestamp="2025-01-13 21:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:13.854389379 +0000 UTC m=+1.189995156" watchObservedRunningTime="2025-01-13 21:19:13.863013464 +0000 UTC m=+1.198619235" Jan 13 21:19:13.891977 kubelet[2797]: I0113 21:19:13.891943 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.891933095 podStartE2EDuration="1.891933095s" podCreationTimestamp="2025-01-13 21:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:13.885579551 +0000 UTC m=+1.221185323" watchObservedRunningTime="2025-01-13 21:19:13.891933095 +0000 UTC m=+1.227538864" Jan 13 21:19:16.963385 sudo[1851]: pam_unix(sudo:session): session closed for user root Jan 13 21:19:16.964613 sshd[1848]: pam_unix(sshd:session): session closed for user core Jan 13 21:19:16.966941 systemd-logind[1522]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:19:16.967216 systemd[1]: sshd@6-139.178.70.104:22-139.178.68.195:40418.service: Deactivated successfully. Jan 13 21:19:16.968456 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:19:16.968657 systemd[1]: session-9.scope: Consumed 2.852s CPU time, 190.3M memory peak, 0B memory swap peak. Jan 13 21:19:16.969317 systemd-logind[1522]: Removed session 9. Jan 13 21:19:27.375213 kubelet[2797]: I0113 21:19:27.375184 2797 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:19:27.390479 containerd[1541]: time="2025-01-13T21:19:27.390437902Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 21:19:27.390829 kubelet[2797]: I0113 21:19:27.390656 2797 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:19:27.637528 kubelet[2797]: I0113 21:19:27.637191 2797 topology_manager.go:215] "Topology Admit Handler" podUID="4680df0a-e9b7-4c78-bad1-c2aeb1e1909e" podNamespace="kube-system" podName="kube-proxy-zlqcf" Jan 13 21:19:27.645092 systemd[1]: Created slice kubepods-besteffort-pod4680df0a_e9b7_4c78_bad1_c2aeb1e1909e.slice - libcontainer container kubepods-besteffort-pod4680df0a_e9b7_4c78_bad1_c2aeb1e1909e.slice. Jan 13 21:19:27.771179 kubelet[2797]: I0113 21:19:27.771133 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x94f4\" (UniqueName: \"kubernetes.io/projected/4680df0a-e9b7-4c78-bad1-c2aeb1e1909e-kube-api-access-x94f4\") pod \"kube-proxy-zlqcf\" (UID: \"4680df0a-e9b7-4c78-bad1-c2aeb1e1909e\") " pod="kube-system/kube-proxy-zlqcf" Jan 13 21:19:27.771266 kubelet[2797]: I0113 21:19:27.771201 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4680df0a-e9b7-4c78-bad1-c2aeb1e1909e-kube-proxy\") pod \"kube-proxy-zlqcf\" (UID: \"4680df0a-e9b7-4c78-bad1-c2aeb1e1909e\") " pod="kube-system/kube-proxy-zlqcf" Jan 13 21:19:27.771266 kubelet[2797]: I0113 21:19:27.771219 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4680df0a-e9b7-4c78-bad1-c2aeb1e1909e-xtables-lock\") pod \"kube-proxy-zlqcf\" (UID: \"4680df0a-e9b7-4c78-bad1-c2aeb1e1909e\") " pod="kube-system/kube-proxy-zlqcf" Jan 13 21:19:27.771266 kubelet[2797]: I0113 21:19:27.771257 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4680df0a-e9b7-4c78-bad1-c2aeb1e1909e-lib-modules\") pod 
\"kube-proxy-zlqcf\" (UID: \"4680df0a-e9b7-4c78-bad1-c2aeb1e1909e\") " pod="kube-system/kube-proxy-zlqcf" Jan 13 21:19:27.773265 kubelet[2797]: I0113 21:19:27.773082 2797 topology_manager.go:215] "Topology Admit Handler" podUID="66579a5a-9689-4504-bbb1-392048f00a4b" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-n8msn" Jan 13 21:19:27.778438 systemd[1]: Created slice kubepods-besteffort-pod66579a5a_9689_4504_bbb1_392048f00a4b.slice - libcontainer container kubepods-besteffort-pod66579a5a_9689_4504_bbb1_392048f00a4b.slice. Jan 13 21:19:27.960023 containerd[1541]: time="2025-01-13T21:19:27.959951969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zlqcf,Uid:4680df0a-e9b7-4c78-bad1-c2aeb1e1909e,Namespace:kube-system,Attempt:0,}" Jan 13 21:19:27.972449 kubelet[2797]: I0113 21:19:27.972428 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sht9\" (UniqueName: \"kubernetes.io/projected/66579a5a-9689-4504-bbb1-392048f00a4b-kube-api-access-6sht9\") pod \"tigera-operator-7bc55997bb-n8msn\" (UID: \"66579a5a-9689-4504-bbb1-392048f00a4b\") " pod="tigera-operator/tigera-operator-7bc55997bb-n8msn" Jan 13 21:19:27.972510 kubelet[2797]: I0113 21:19:27.972453 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/66579a5a-9689-4504-bbb1-392048f00a4b-var-lib-calico\") pod \"tigera-operator-7bc55997bb-n8msn\" (UID: \"66579a5a-9689-4504-bbb1-392048f00a4b\") " pod="tigera-operator/tigera-operator-7bc55997bb-n8msn" Jan 13 21:19:28.011527 containerd[1541]: time="2025-01-13T21:19:28.011439395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:19:28.011527 containerd[1541]: time="2025-01-13T21:19:28.011474241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:19:28.011527 containerd[1541]: time="2025-01-13T21:19:28.011503480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:28.011700 containerd[1541]: time="2025-01-13T21:19:28.011566101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:28.024471 systemd[1]: Started cri-containerd-b9b08db3cd97b84d343dd9abd2ebc92e109730d89704e20a111f23e30d31b6b7.scope - libcontainer container b9b08db3cd97b84d343dd9abd2ebc92e109730d89704e20a111f23e30d31b6b7. Jan 13 21:19:28.037637 containerd[1541]: time="2025-01-13T21:19:28.037521831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zlqcf,Uid:4680df0a-e9b7-4c78-bad1-c2aeb1e1909e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9b08db3cd97b84d343dd9abd2ebc92e109730d89704e20a111f23e30d31b6b7\"" Jan 13 21:19:28.039781 containerd[1541]: time="2025-01-13T21:19:28.039650478Z" level=info msg="CreateContainer within sandbox \"b9b08db3cd97b84d343dd9abd2ebc92e109730d89704e20a111f23e30d31b6b7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:19:28.046909 containerd[1541]: time="2025-01-13T21:19:28.046828332Z" level=info msg="CreateContainer within sandbox \"b9b08db3cd97b84d343dd9abd2ebc92e109730d89704e20a111f23e30d31b6b7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ccee0543c818c0c0e7a7d8a565be6b7fa2ce4fb5d5cce77fa6ff883fb57131c5\"" Jan 13 21:19:28.047631 containerd[1541]: time="2025-01-13T21:19:28.047512259Z" level=info msg="StartContainer for \"ccee0543c818c0c0e7a7d8a565be6b7fa2ce4fb5d5cce77fa6ff883fb57131c5\"" Jan 13 21:19:28.066504 systemd[1]: Started cri-containerd-ccee0543c818c0c0e7a7d8a565be6b7fa2ce4fb5d5cce77fa6ff883fb57131c5.scope - libcontainer container 
ccee0543c818c0c0e7a7d8a565be6b7fa2ce4fb5d5cce77fa6ff883fb57131c5. Jan 13 21:19:28.081687 containerd[1541]: time="2025-01-13T21:19:28.081614976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-n8msn,Uid:66579a5a-9689-4504-bbb1-392048f00a4b,Namespace:tigera-operator,Attempt:0,}" Jan 13 21:19:28.085327 containerd[1541]: time="2025-01-13T21:19:28.085309017Z" level=info msg="StartContainer for \"ccee0543c818c0c0e7a7d8a565be6b7fa2ce4fb5d5cce77fa6ff883fb57131c5\" returns successfully" Jan 13 21:19:28.095512 containerd[1541]: time="2025-01-13T21:19:28.095455764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:19:28.095928 containerd[1541]: time="2025-01-13T21:19:28.095683500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:19:28.096013 containerd[1541]: time="2025-01-13T21:19:28.095983520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:28.096070 containerd[1541]: time="2025-01-13T21:19:28.096056838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:28.107451 systemd[1]: Started cri-containerd-99a2792477755b3e058ca86788184ae2efc991ea3fc54b1403fc755b549b7fbe.scope - libcontainer container 99a2792477755b3e058ca86788184ae2efc991ea3fc54b1403fc755b549b7fbe. 
Jan 13 21:19:28.133277 containerd[1541]: time="2025-01-13T21:19:28.133252671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-n8msn,Uid:66579a5a-9689-4504-bbb1-392048f00a4b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"99a2792477755b3e058ca86788184ae2efc991ea3fc54b1403fc755b549b7fbe\""
Jan 13 21:19:28.135085 containerd[1541]: time="2025-01-13T21:19:28.134862134Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 13 21:19:28.863337 kubelet[2797]: I0113 21:19:28.863109 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zlqcf" podStartSLOduration=1.863094713 podStartE2EDuration="1.863094713s" podCreationTimestamp="2025-01-13 21:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:28.862814884 +0000 UTC m=+16.198420663" watchObservedRunningTime="2025-01-13 21:19:28.863094713 +0000 UTC m=+16.198700491"
Jan 13 21:19:28.882713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount322354220.mount: Deactivated successfully.
Jan 13 21:19:29.688003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1590568849.mount: Deactivated successfully.
Jan 13 21:19:29.999685 containerd[1541]: time="2025-01-13T21:19:29.999609345Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:30.000225 containerd[1541]: time="2025-01-13T21:19:30.000141915Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764313"
Jan 13 21:19:30.000540 containerd[1541]: time="2025-01-13T21:19:30.000523083Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:30.001755 containerd[1541]: time="2025-01-13T21:19:30.001726456Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:30.002485 containerd[1541]: time="2025-01-13T21:19:30.002172296Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.867293272s"
Jan 13 21:19:30.002485 containerd[1541]: time="2025-01-13T21:19:30.002189486Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 13 21:19:30.005677 containerd[1541]: time="2025-01-13T21:19:30.005639750Z" level=info msg="CreateContainer within sandbox \"99a2792477755b3e058ca86788184ae2efc991ea3fc54b1403fc755b549b7fbe\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 13 21:19:30.011559 containerd[1541]: time="2025-01-13T21:19:30.011497204Z" level=info msg="CreateContainer within sandbox \"99a2792477755b3e058ca86788184ae2efc991ea3fc54b1403fc755b549b7fbe\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e7cd8ad081b1abd5327cb59f46f5ca7416fe646d4535b3497d2738619608a60f\""
Jan 13 21:19:30.012060 containerd[1541]: time="2025-01-13T21:19:30.011738659Z" level=info msg="StartContainer for \"e7cd8ad081b1abd5327cb59f46f5ca7416fe646d4535b3497d2738619608a60f\""
Jan 13 21:19:30.070454 systemd[1]: Started cri-containerd-e7cd8ad081b1abd5327cb59f46f5ca7416fe646d4535b3497d2738619608a60f.scope - libcontainer container e7cd8ad081b1abd5327cb59f46f5ca7416fe646d4535b3497d2738619608a60f.
Jan 13 21:19:30.085924 containerd[1541]: time="2025-01-13T21:19:30.085904832Z" level=info msg="StartContainer for \"e7cd8ad081b1abd5327cb59f46f5ca7416fe646d4535b3497d2738619608a60f\" returns successfully"
Jan 13 21:19:30.864285 kubelet[2797]: I0113 21:19:30.864237 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-n8msn" podStartSLOduration=1.9934042010000002 podStartE2EDuration="3.864226551s" podCreationTimestamp="2025-01-13 21:19:27 +0000 UTC" firstStartedPulling="2025-01-13 21:19:28.134044186 +0000 UTC m=+15.469649954" lastFinishedPulling="2025-01-13 21:19:30.004866536 +0000 UTC m=+17.340472304" observedRunningTime="2025-01-13 21:19:30.864039262 +0000 UTC m=+18.199645030" watchObservedRunningTime="2025-01-13 21:19:30.864226551 +0000 UTC m=+18.199832328"
Jan 13 21:19:32.879822 kubelet[2797]: I0113 21:19:32.879735 2797 topology_manager.go:215] "Topology Admit Handler" podUID="65cb099b-4762-4a07-8346-3ff2f2573016" podNamespace="calico-system" podName="calico-typha-9c848f6bf-l4k7f"
Jan 13 21:19:32.900215 systemd[1]: Created slice kubepods-besteffort-pod65cb099b_4762_4a07_8346_3ff2f2573016.slice - libcontainer container kubepods-besteffort-pod65cb099b_4762_4a07_8346_3ff2f2573016.slice.
Jan 13 21:19:32.924063 kubelet[2797]: I0113 21:19:32.923954 2797 topology_manager.go:215] "Topology Admit Handler" podUID="0292e525-66fe-458f-9c05-062d9caebc7a" podNamespace="calico-system" podName="calico-node-6574n"
Jan 13 21:19:32.934072 systemd[1]: Created slice kubepods-besteffort-pod0292e525_66fe_458f_9c05_062d9caebc7a.slice - libcontainer container kubepods-besteffort-pod0292e525_66fe_458f_9c05_062d9caebc7a.slice.
Jan 13 21:19:32.994773 kubelet[2797]: I0113 21:19:32.994492 2797 topology_manager.go:215] "Topology Admit Handler" podUID="67967985-cb3a-4c06-87e4-f05e417d0670" podNamespace="calico-system" podName="csi-node-driver-459cr"
Jan 13 21:19:32.997168 kubelet[2797]: E0113 21:19:32.996991 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-459cr" podUID="67967985-cb3a-4c06-87e4-f05e417d0670"
Jan 13 21:19:33.007580 kubelet[2797]: I0113 21:19:33.007484 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/65cb099b-4762-4a07-8346-3ff2f2573016-typha-certs\") pod \"calico-typha-9c848f6bf-l4k7f\" (UID: \"65cb099b-4762-4a07-8346-3ff2f2573016\") " pod="calico-system/calico-typha-9c848f6bf-l4k7f"
Jan 13 21:19:33.007580 kubelet[2797]: I0113 21:19:33.007514 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65cb099b-4762-4a07-8346-3ff2f2573016-tigera-ca-bundle\") pod \"calico-typha-9c848f6bf-l4k7f\" (UID: \"65cb099b-4762-4a07-8346-3ff2f2573016\") " pod="calico-system/calico-typha-9c848f6bf-l4k7f"
Jan 13 21:19:33.007580 kubelet[2797]: I0113 21:19:33.007528 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcfz6\" (UniqueName: \"kubernetes.io/projected/65cb099b-4762-4a07-8346-3ff2f2573016-kube-api-access-wcfz6\") pod \"calico-typha-9c848f6bf-l4k7f\" (UID: \"65cb099b-4762-4a07-8346-3ff2f2573016\") " pod="calico-system/calico-typha-9c848f6bf-l4k7f"
Jan 13 21:19:33.108293 kubelet[2797]: I0113 21:19:33.107975 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0292e525-66fe-458f-9c05-062d9caebc7a-flexvol-driver-host\") pod \"calico-node-6574n\" (UID: \"0292e525-66fe-458f-9c05-062d9caebc7a\") " pod="calico-system/calico-node-6574n"
Jan 13 21:19:33.108293 kubelet[2797]: I0113 21:19:33.108012 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0292e525-66fe-458f-9c05-062d9caebc7a-lib-modules\") pod \"calico-node-6574n\" (UID: \"0292e525-66fe-458f-9c05-062d9caebc7a\") " pod="calico-system/calico-node-6574n"
Jan 13 21:19:33.108293 kubelet[2797]: I0113 21:19:33.108025 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twwcw\" (UniqueName: \"kubernetes.io/projected/67967985-cb3a-4c06-87e4-f05e417d0670-kube-api-access-twwcw\") pod \"csi-node-driver-459cr\" (UID: \"67967985-cb3a-4c06-87e4-f05e417d0670\") " pod="calico-system/csi-node-driver-459cr"
Jan 13 21:19:33.108293 kubelet[2797]: I0113 21:19:33.108037 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0292e525-66fe-458f-9c05-062d9caebc7a-cni-bin-dir\") pod \"calico-node-6574n\" (UID: \"0292e525-66fe-458f-9c05-062d9caebc7a\") " pod="calico-system/calico-node-6574n"
Jan 13 21:19:33.108293 kubelet[2797]: I0113 21:19:33.108047 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67967985-cb3a-4c06-87e4-f05e417d0670-kubelet-dir\") pod \"csi-node-driver-459cr\" (UID: \"67967985-cb3a-4c06-87e4-f05e417d0670\") " pod="calico-system/csi-node-driver-459cr"
Jan 13 21:19:33.108467 kubelet[2797]: I0113 21:19:33.108055 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0292e525-66fe-458f-9c05-062d9caebc7a-cni-log-dir\") pod \"calico-node-6574n\" (UID: \"0292e525-66fe-458f-9c05-062d9caebc7a\") " pod="calico-system/calico-node-6574n"
Jan 13 21:19:33.108467 kubelet[2797]: I0113 21:19:33.108064 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/67967985-cb3a-4c06-87e4-f05e417d0670-socket-dir\") pod \"csi-node-driver-459cr\" (UID: \"67967985-cb3a-4c06-87e4-f05e417d0670\") " pod="calico-system/csi-node-driver-459cr"
Jan 13 21:19:33.108467 kubelet[2797]: I0113 21:19:33.108073 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0292e525-66fe-458f-9c05-062d9caebc7a-policysync\") pod \"calico-node-6574n\" (UID: \"0292e525-66fe-458f-9c05-062d9caebc7a\") " pod="calico-system/calico-node-6574n"
Jan 13 21:19:33.108467 kubelet[2797]: I0113 21:19:33.108081 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0292e525-66fe-458f-9c05-062d9caebc7a-tigera-ca-bundle\") pod \"calico-node-6574n\" (UID: \"0292e525-66fe-458f-9c05-062d9caebc7a\") " pod="calico-system/calico-node-6574n"
Jan 13 21:19:33.108467 kubelet[2797]: I0113 21:19:33.108090 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0292e525-66fe-458f-9c05-062d9caebc7a-var-lib-calico\") pod \"calico-node-6574n\" (UID: \"0292e525-66fe-458f-9c05-062d9caebc7a\") " pod="calico-system/calico-node-6574n"
Jan 13 21:19:33.108547 kubelet[2797]: I0113 21:19:33.108099 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0292e525-66fe-458f-9c05-062d9caebc7a-xtables-lock\") pod \"calico-node-6574n\" (UID: \"0292e525-66fe-458f-9c05-062d9caebc7a\") " pod="calico-system/calico-node-6574n"
Jan 13 21:19:33.108547 kubelet[2797]: I0113 21:19:33.108108 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0292e525-66fe-458f-9c05-062d9caebc7a-var-run-calico\") pod \"calico-node-6574n\" (UID: \"0292e525-66fe-458f-9c05-062d9caebc7a\") " pod="calico-system/calico-node-6574n"
Jan 13 21:19:33.108547 kubelet[2797]: I0113 21:19:33.108118 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfhk9\" (UniqueName: \"kubernetes.io/projected/0292e525-66fe-458f-9c05-062d9caebc7a-kube-api-access-pfhk9\") pod \"calico-node-6574n\" (UID: \"0292e525-66fe-458f-9c05-062d9caebc7a\") " pod="calico-system/calico-node-6574n"
Jan 13 21:19:33.108547 kubelet[2797]: I0113 21:19:33.108127 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/67967985-cb3a-4c06-87e4-f05e417d0670-registration-dir\") pod \"csi-node-driver-459cr\" (UID: \"67967985-cb3a-4c06-87e4-f05e417d0670\") " pod="calico-system/csi-node-driver-459cr"
Jan 13 21:19:33.108547 kubelet[2797]: I0113 21:19:33.108135 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0292e525-66fe-458f-9c05-062d9caebc7a-node-certs\") pod \"calico-node-6574n\" (UID: \"0292e525-66fe-458f-9c05-062d9caebc7a\") " pod="calico-system/calico-node-6574n"
Jan 13 21:19:33.108628 kubelet[2797]: I0113 21:19:33.108149 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0292e525-66fe-458f-9c05-062d9caebc7a-cni-net-dir\") pod \"calico-node-6574n\" (UID: \"0292e525-66fe-458f-9c05-062d9caebc7a\") " pod="calico-system/calico-node-6574n"
Jan 13 21:19:33.108628 kubelet[2797]: I0113 21:19:33.108159 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/67967985-cb3a-4c06-87e4-f05e417d0670-varrun\") pod \"csi-node-driver-459cr\" (UID: \"67967985-cb3a-4c06-87e4-f05e417d0670\") " pod="calico-system/csi-node-driver-459cr"
Jan 13 21:19:33.216412 kubelet[2797]: E0113 21:19:33.215636 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:19:33.216412 kubelet[2797]: W0113 21:19:33.215651 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:19:33.216412 kubelet[2797]: E0113 21:19:33.215665 2797 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:19:33.221849 kubelet[2797]: E0113 21:19:33.219719 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:19:33.221849 kubelet[2797]: W0113 21:19:33.219731 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:19:33.221849 kubelet[2797]: E0113 21:19:33.219743 2797 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:19:33.221849 kubelet[2797]: E0113 21:19:33.220111 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:19:33.221849 kubelet[2797]: W0113 21:19:33.220117 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:19:33.221849 kubelet[2797]: E0113 21:19:33.220123 2797 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:19:33.222321 kubelet[2797]: E0113 21:19:33.222286 2797 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:19:33.222321 kubelet[2797]: W0113 21:19:33.222294 2797 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:19:33.222321 kubelet[2797]: E0113 21:19:33.222303 2797 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:19:33.273792 containerd[1541]: time="2025-01-13T21:19:33.273762029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9c848f6bf-l4k7f,Uid:65cb099b-4762-4a07-8346-3ff2f2573016,Namespace:calico-system,Attempt:0,}"
Jan 13 21:19:33.291449 containerd[1541]: time="2025-01-13T21:19:33.291398954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6574n,Uid:0292e525-66fe-458f-9c05-062d9caebc7a,Namespace:calico-system,Attempt:0,}"
Jan 13 21:19:33.381407 containerd[1541]: time="2025-01-13T21:19:33.381213374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:19:33.381407 containerd[1541]: time="2025-01-13T21:19:33.381254786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:19:33.381407 containerd[1541]: time="2025-01-13T21:19:33.381264711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:33.381407 containerd[1541]: time="2025-01-13T21:19:33.381329929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:33.390690 containerd[1541]: time="2025-01-13T21:19:33.390027696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:19:33.390690 containerd[1541]: time="2025-01-13T21:19:33.390289070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:19:33.390690 containerd[1541]: time="2025-01-13T21:19:33.390300873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:33.390690 containerd[1541]: time="2025-01-13T21:19:33.390358753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:33.422462 systemd[1]: Started cri-containerd-365077a57e193cbbcd1e626367dbc1fe3fd00ff3cad14e84cdb54822726aad48.scope - libcontainer container 365077a57e193cbbcd1e626367dbc1fe3fd00ff3cad14e84cdb54822726aad48.
Jan 13 21:19:33.425402 systemd[1]: Started cri-containerd-b5cc9564a98b8f479fe8cb12e4deda2cd8113b636ca75b88294b34f8f24d073e.scope - libcontainer container b5cc9564a98b8f479fe8cb12e4deda2cd8113b636ca75b88294b34f8f24d073e.
Jan 13 21:19:33.445205 containerd[1541]: time="2025-01-13T21:19:33.445178433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6574n,Uid:0292e525-66fe-458f-9c05-062d9caebc7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5cc9564a98b8f479fe8cb12e4deda2cd8113b636ca75b88294b34f8f24d073e\""
Jan 13 21:19:33.459755 containerd[1541]: time="2025-01-13T21:19:33.459694092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9c848f6bf-l4k7f,Uid:65cb099b-4762-4a07-8346-3ff2f2573016,Namespace:calico-system,Attempt:0,} returns sandbox id \"365077a57e193cbbcd1e626367dbc1fe3fd00ff3cad14e84cdb54822726aad48\""
Jan 13 21:19:33.482337 containerd[1541]: time="2025-01-13T21:19:33.482271226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 21:19:34.799685 kubelet[2797]: E0113 21:19:34.799656 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-459cr" podUID="67967985-cb3a-4c06-87e4-f05e417d0670"
Jan 13 21:19:34.848830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2566913818.mount: Deactivated successfully.
Jan 13 21:19:34.908004 containerd[1541]: time="2025-01-13T21:19:34.907978750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:34.908746 containerd[1541]: time="2025-01-13T21:19:34.908703658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 13 21:19:34.909086 containerd[1541]: time="2025-01-13T21:19:34.909071258Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:34.910438 containerd[1541]: time="2025-01-13T21:19:34.910320861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:34.911891 containerd[1541]: time="2025-01-13T21:19:34.911872368Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.429372015s"
Jan 13 21:19:34.911924 containerd[1541]: time="2025-01-13T21:19:34.911892988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 13 21:19:34.913313 containerd[1541]: time="2025-01-13T21:19:34.912725390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 13 21:19:34.919701 containerd[1541]: time="2025-01-13T21:19:34.919677311Z" level=info msg="CreateContainer within sandbox \"b5cc9564a98b8f479fe8cb12e4deda2cd8113b636ca75b88294b34f8f24d073e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 13 21:19:34.925618 containerd[1541]: time="2025-01-13T21:19:34.925583524Z" level=info msg="CreateContainer within sandbox \"b5cc9564a98b8f479fe8cb12e4deda2cd8113b636ca75b88294b34f8f24d073e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"259ae34575a2bbd58dc0785f8176ab6cb2017fae39a2f1df1b31a9701013984c\""
Jan 13 21:19:34.926524 containerd[1541]: time="2025-01-13T21:19:34.925918717Z" level=info msg="StartContainer for \"259ae34575a2bbd58dc0785f8176ab6cb2017fae39a2f1df1b31a9701013984c\""
Jan 13 21:19:34.947460 systemd[1]: Started cri-containerd-259ae34575a2bbd58dc0785f8176ab6cb2017fae39a2f1df1b31a9701013984c.scope - libcontainer container 259ae34575a2bbd58dc0785f8176ab6cb2017fae39a2f1df1b31a9701013984c.
Jan 13 21:19:34.966697 containerd[1541]: time="2025-01-13T21:19:34.966673649Z" level=info msg="StartContainer for \"259ae34575a2bbd58dc0785f8176ab6cb2017fae39a2f1df1b31a9701013984c\" returns successfully"
Jan 13 21:19:34.972162 systemd[1]: cri-containerd-259ae34575a2bbd58dc0785f8176ab6cb2017fae39a2f1df1b31a9701013984c.scope: Deactivated successfully.
Jan 13 21:19:35.402744 containerd[1541]: time="2025-01-13T21:19:35.394586135Z" level=info msg="shim disconnected" id=259ae34575a2bbd58dc0785f8176ab6cb2017fae39a2f1df1b31a9701013984c namespace=k8s.io
Jan 13 21:19:35.402883 containerd[1541]: time="2025-01-13T21:19:35.402745445Z" level=warning msg="cleaning up after shim disconnected" id=259ae34575a2bbd58dc0785f8176ab6cb2017fae39a2f1df1b31a9701013984c namespace=k8s.io
Jan 13 21:19:35.402883 containerd[1541]: time="2025-01-13T21:19:35.402757369Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:19:36.598003 containerd[1541]: time="2025-01-13T21:19:36.597948840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:36.598838 containerd[1541]: time="2025-01-13T21:19:36.598812256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Jan 13 21:19:36.599391 containerd[1541]: time="2025-01-13T21:19:36.599127388Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:36.600074 containerd[1541]: time="2025-01-13T21:19:36.600059909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:36.600526 containerd[1541]: time="2025-01-13T21:19:36.600514092Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.687773894s"
Jan 13 21:19:36.600574 containerd[1541]: time="2025-01-13T21:19:36.600565637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 13 21:19:36.600951 containerd[1541]: time="2025-01-13T21:19:36.600936804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 13 21:19:36.608632 containerd[1541]: time="2025-01-13T21:19:36.608614361Z" level=info msg="CreateContainer within sandbox \"365077a57e193cbbcd1e626367dbc1fe3fd00ff3cad14e84cdb54822726aad48\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 13 21:19:36.613818 containerd[1541]: time="2025-01-13T21:19:36.613800815Z" level=info msg="CreateContainer within sandbox \"365077a57e193cbbcd1e626367dbc1fe3fd00ff3cad14e84cdb54822726aad48\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e6c82576f013d307d7eb1263017cdbd62738ae8fb66dc54a2622c10bdcf3b76d\""
Jan 13 21:19:36.614397 containerd[1541]: time="2025-01-13T21:19:36.614191207Z" level=info msg="StartContainer for \"e6c82576f013d307d7eb1263017cdbd62738ae8fb66dc54a2622c10bdcf3b76d\""
Jan 13 21:19:36.633444 systemd[1]: Started cri-containerd-e6c82576f013d307d7eb1263017cdbd62738ae8fb66dc54a2622c10bdcf3b76d.scope - libcontainer container e6c82576f013d307d7eb1263017cdbd62738ae8fb66dc54a2622c10bdcf3b76d.
Jan 13 21:19:36.660973 containerd[1541]: time="2025-01-13T21:19:36.660950767Z" level=info msg="StartContainer for \"e6c82576f013d307d7eb1263017cdbd62738ae8fb66dc54a2622c10bdcf3b76d\" returns successfully"
Jan 13 21:19:36.777308 kubelet[2797]: E0113 21:19:36.777276 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-459cr" podUID="67967985-cb3a-4c06-87e4-f05e417d0670"
Jan 13 21:19:36.879569 kubelet[2797]: I0113 21:19:36.877756 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9c848f6bf-l4k7f" podStartSLOduration=1.759334728 podStartE2EDuration="4.877743117s" podCreationTimestamp="2025-01-13 21:19:32 +0000 UTC" firstStartedPulling="2025-01-13 21:19:33.482460452 +0000 UTC m=+20.818066219" lastFinishedPulling="2025-01-13 21:19:36.600868841 +0000 UTC m=+23.936474608" observedRunningTime="2025-01-13 21:19:36.877107148 +0000 UTC m=+24.212712927" watchObservedRunningTime="2025-01-13 21:19:36.877743117 +0000 UTC m=+24.213348903"
Jan 13 21:19:37.604112 systemd[1]: run-containerd-runc-k8s.io-e6c82576f013d307d7eb1263017cdbd62738ae8fb66dc54a2622c10bdcf3b76d-runc.cXXekn.mount: Deactivated successfully.
Jan 13 21:19:37.875422 kubelet[2797]: I0113 21:19:37.875327 2797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:19:38.777647 kubelet[2797]: E0113 21:19:38.777624 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-459cr" podUID="67967985-cb3a-4c06-87e4-f05e417d0670"
Jan 13 21:19:39.598479 containerd[1541]: time="2025-01-13T21:19:39.598429936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:39.599038 containerd[1541]: time="2025-01-13T21:19:39.598987096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 13 21:19:39.600005 containerd[1541]: time="2025-01-13T21:19:39.599308533Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:39.600912 containerd[1541]: time="2025-01-13T21:19:39.600876551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:39.601836 containerd[1541]: time="2025-01-13T21:19:39.601460272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.000504777s"
Jan 13 21:19:39.601836 containerd[1541]: time="2025-01-13T21:19:39.601484054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 13 21:19:39.604025 containerd[1541]: time="2025-01-13T21:19:39.603941759Z" level=info msg="CreateContainer within sandbox \"b5cc9564a98b8f479fe8cb12e4deda2cd8113b636ca75b88294b34f8f24d073e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 21:19:39.631469 containerd[1541]: time="2025-01-13T21:19:39.631420155Z" level=info msg="CreateContainer within sandbox \"b5cc9564a98b8f479fe8cb12e4deda2cd8113b636ca75b88294b34f8f24d073e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"db9171b54d3b11f1095307ca2fa4c67fa31a8b7e86edd0ee3efa91a4b7b3ec22\""
Jan 13 21:19:39.631806 containerd[1541]: time="2025-01-13T21:19:39.631788411Z" level=info msg="StartContainer for \"db9171b54d3b11f1095307ca2fa4c67fa31a8b7e86edd0ee3efa91a4b7b3ec22\""
Jan 13 21:19:39.666584 systemd[1]: Started cri-containerd-db9171b54d3b11f1095307ca2fa4c67fa31a8b7e86edd0ee3efa91a4b7b3ec22.scope - libcontainer container db9171b54d3b11f1095307ca2fa4c67fa31a8b7e86edd0ee3efa91a4b7b3ec22.
Jan 13 21:19:39.681190 containerd[1541]: time="2025-01-13T21:19:39.681126439Z" level=info msg="StartContainer for \"db9171b54d3b11f1095307ca2fa4c67fa31a8b7e86edd0ee3efa91a4b7b3ec22\" returns successfully"
Jan 13 21:19:40.777945 kubelet[2797]: E0113 21:19:40.777587 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-459cr" podUID="67967985-cb3a-4c06-87e4-f05e417d0670"
Jan 13 21:19:40.858737 systemd[1]: cri-containerd-db9171b54d3b11f1095307ca2fa4c67fa31a8b7e86edd0ee3efa91a4b7b3ec22.scope: Deactivated successfully.
Jan 13 21:19:40.882964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db9171b54d3b11f1095307ca2fa4c67fa31a8b7e86edd0ee3efa91a4b7b3ec22-rootfs.mount: Deactivated successfully.
Jan 13 21:19:40.918227 containerd[1541]: time="2025-01-13T21:19:40.918188875Z" level=info msg="shim disconnected" id=db9171b54d3b11f1095307ca2fa4c67fa31a8b7e86edd0ee3efa91a4b7b3ec22 namespace=k8s.io
Jan 13 21:19:40.918227 containerd[1541]: time="2025-01-13T21:19:40.918228462Z" level=warning msg="cleaning up after shim disconnected" id=db9171b54d3b11f1095307ca2fa4c67fa31a8b7e86edd0ee3efa91a4b7b3ec22 namespace=k8s.io
Jan 13 21:19:40.918528 containerd[1541]: time="2025-01-13T21:19:40.918234548Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:19:40.920912 kubelet[2797]: I0113 21:19:40.920573 2797 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 21:19:40.943787 kubelet[2797]: I0113 21:19:40.943759 2797 topology_manager.go:215] "Topology Admit Handler" podUID="6d09daa5-3680-44fd-88d8-a2a333e97e31" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mc7ng"
Jan 13 21:19:40.944270 kubelet[2797]: I0113 21:19:40.944255 2797 topology_manager.go:215] "Topology Admit Handler" podUID="86002476-5cad-437c-a040-0e9e7c4b4ce5" podNamespace="calico-system" podName="calico-kube-controllers-8d9dc5c7-xp85p"
Jan 13 21:19:40.948438 kubelet[2797]: I0113 21:19:40.948410 2797 topology_manager.go:215] "Topology Admit Handler" podUID="4094d46d-3bd9-4482-9e42-9463597eb69a" podNamespace="calico-apiserver" podName="calico-apiserver-dd958b664-7knlx"
Jan 13 21:19:40.948631 kubelet[2797]: I0113 21:19:40.948618 2797 topology_manager.go:215] "Topology Admit Handler" podUID="c9b19419-34c4-46dc-abe6-e6b8c9ad67ed" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c5tjn"
Jan 13 21:19:40.954754 kubelet[2797]: I0113 21:19:40.954420 2797 topology_manager.go:215] "Topology Admit Handler" podUID="001785fd-8c63-40ac-bd0e-8041100fbfa9" podNamespace="calico-apiserver" podName="calico-apiserver-dd958b664-5xkxk"
Jan 13 21:19:40.964234 systemd[1]: Created slice kubepods-burstable-pod6d09daa5_3680_44fd_88d8_a2a333e97e31.slice - libcontainer container kubepods-burstable-pod6d09daa5_3680_44fd_88d8_a2a333e97e31.slice.
Jan 13 21:19:40.970623 systemd[1]: Created slice kubepods-besteffort-pod86002476_5cad_437c_a040_0e9e7c4b4ce5.slice - libcontainer container kubepods-besteffort-pod86002476_5cad_437c_a040_0e9e7c4b4ce5.slice.
Jan 13 21:19:40.975083 systemd[1]: Created slice kubepods-burstable-podc9b19419_34c4_46dc_abe6_e6b8c9ad67ed.slice - libcontainer container kubepods-burstable-podc9b19419_34c4_46dc_abe6_e6b8c9ad67ed.slice.
Jan 13 21:19:40.982928 systemd[1]: Created slice kubepods-besteffort-pod4094d46d_3bd9_4482_9e42_9463597eb69a.slice - libcontainer container kubepods-besteffort-pod4094d46d_3bd9_4482_9e42_9463597eb69a.slice.
Jan 13 21:19:40.987182 systemd[1]: Created slice kubepods-besteffort-pod001785fd_8c63_40ac_bd0e_8041100fbfa9.slice - libcontainer container kubepods-besteffort-pod001785fd_8c63_40ac_bd0e_8041100fbfa9.slice.
Jan 13 21:19:41.060495 kubelet[2797]: I0113 21:19:41.060412 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d09daa5-3680-44fd-88d8-a2a333e97e31-config-volume\") pod \"coredns-7db6d8ff4d-mc7ng\" (UID: \"6d09daa5-3680-44fd-88d8-a2a333e97e31\") " pod="kube-system/coredns-7db6d8ff4d-mc7ng"
Jan 13 21:19:41.060495 kubelet[2797]: I0113 21:19:41.060446 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr28b\" (UniqueName: \"kubernetes.io/projected/86002476-5cad-437c-a040-0e9e7c4b4ce5-kube-api-access-lr28b\") pod \"calico-kube-controllers-8d9dc5c7-xp85p\" (UID: \"86002476-5cad-437c-a040-0e9e7c4b4ce5\") " pod="calico-system/calico-kube-controllers-8d9dc5c7-xp85p"
Jan 13 21:19:41.060495 kubelet[2797]: I0113 21:19:41.060463 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4wc6\" (UniqueName: \"kubernetes.io/projected/6d09daa5-3680-44fd-88d8-a2a333e97e31-kube-api-access-w4wc6\") pod \"coredns-7db6d8ff4d-mc7ng\" (UID: \"6d09daa5-3680-44fd-88d8-a2a333e97e31\") " pod="kube-system/coredns-7db6d8ff4d-mc7ng"
Jan 13 21:19:41.060495 kubelet[2797]: I0113 21:19:41.060473 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9855c\" (UniqueName: \"kubernetes.io/projected/4094d46d-3bd9-4482-9e42-9463597eb69a-kube-api-access-9855c\") pod \"calico-apiserver-dd958b664-7knlx\" (UID: \"4094d46d-3bd9-4482-9e42-9463597eb69a\") " pod="calico-apiserver/calico-apiserver-dd958b664-7knlx"
Jan 13 21:19:41.060495 kubelet[2797]: I0113 21:19:41.060494 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4094d46d-3bd9-4482-9e42-9463597eb69a-calico-apiserver-certs\") pod \"calico-apiserver-dd958b664-7knlx\" (UID: \"4094d46d-3bd9-4482-9e42-9463597eb69a\") " pod="calico-apiserver/calico-apiserver-dd958b664-7knlx"
Jan 13 21:19:41.060723 kubelet[2797]: I0113 21:19:41.060506 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9b19419-34c4-46dc-abe6-e6b8c9ad67ed-config-volume\") pod \"coredns-7db6d8ff4d-c5tjn\" (UID: \"c9b19419-34c4-46dc-abe6-e6b8c9ad67ed\") " pod="kube-system/coredns-7db6d8ff4d-c5tjn"
Jan 13 21:19:41.060723 kubelet[2797]: I0113 21:19:41.060517 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrvf\" (UniqueName: \"kubernetes.io/projected/c9b19419-34c4-46dc-abe6-e6b8c9ad67ed-kube-api-access-cqrvf\") pod \"coredns-7db6d8ff4d-c5tjn\" (UID: \"c9b19419-34c4-46dc-abe6-e6b8c9ad67ed\") " pod="kube-system/coredns-7db6d8ff4d-c5tjn"
Jan 13 21:19:41.060723 kubelet[2797]: I0113 21:19:41.060531 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/001785fd-8c63-40ac-bd0e-8041100fbfa9-calico-apiserver-certs\") pod \"calico-apiserver-dd958b664-5xkxk\" (UID: \"001785fd-8c63-40ac-bd0e-8041100fbfa9\") " pod="calico-apiserver/calico-apiserver-dd958b664-5xkxk"
Jan 13 21:19:41.060723 kubelet[2797]: I0113 21:19:41.060541 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq9fw\" (UniqueName: \"kubernetes.io/projected/001785fd-8c63-40ac-bd0e-8041100fbfa9-kube-api-access-fq9fw\") pod \"calico-apiserver-dd958b664-5xkxk\" (UID: \"001785fd-8c63-40ac-bd0e-8041100fbfa9\") " pod="calico-apiserver/calico-apiserver-dd958b664-5xkxk"
Jan 13 21:19:41.060723 kubelet[2797]: I0113 21:19:41.060551 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86002476-5cad-437c-a040-0e9e7c4b4ce5-tigera-ca-bundle\") pod \"calico-kube-controllers-8d9dc5c7-xp85p\" (UID: \"86002476-5cad-437c-a040-0e9e7c4b4ce5\") " pod="calico-system/calico-kube-controllers-8d9dc5c7-xp85p"
Jan 13 21:19:41.268488 containerd[1541]: time="2025-01-13T21:19:41.268463568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mc7ng,Uid:6d09daa5-3680-44fd-88d8-a2a333e97e31,Namespace:kube-system,Attempt:0,}"
Jan 13 21:19:41.274597 containerd[1541]: time="2025-01-13T21:19:41.274142352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8d9dc5c7-xp85p,Uid:86002476-5cad-437c-a040-0e9e7c4b4ce5,Namespace:calico-system,Attempt:0,}"
Jan 13 21:19:41.278383 containerd[1541]: time="2025-01-13T21:19:41.278349545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c5tjn,Uid:c9b19419-34c4-46dc-abe6-e6b8c9ad67ed,Namespace:kube-system,Attempt:0,}"
Jan 13 21:19:41.286794 containerd[1541]: time="2025-01-13T21:19:41.286276520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd958b664-7knlx,Uid:4094d46d-3bd9-4482-9e42-9463597eb69a,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 21:19:41.293686 containerd[1541]: time="2025-01-13T21:19:41.293668056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd958b664-5xkxk,Uid:001785fd-8c63-40ac-bd0e-8041100fbfa9,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 21:19:41.525759 containerd[1541]: time="2025-01-13T21:19:41.525725439Z" level=error msg="Failed to destroy network for sandbox \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.527259 containerd[1541]: time="2025-01-13T21:19:41.525873336Z" level=error msg="Failed to destroy network for sandbox \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.528107 containerd[1541]: time="2025-01-13T21:19:41.527971881Z" level=error msg="encountered an error cleaning up failed sandbox \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.528107 containerd[1541]: time="2025-01-13T21:19:41.528015002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8d9dc5c7-xp85p,Uid:86002476-5cad-437c-a040-0e9e7c4b4ce5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.528268 containerd[1541]: time="2025-01-13T21:19:41.528255594Z" level=error msg="encountered an error cleaning up failed sandbox \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.528342 containerd[1541]: time="2025-01-13T21:19:41.528319024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd958b664-7knlx,Uid:4094d46d-3bd9-4482-9e42-9463597eb69a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.532355 containerd[1541]: time="2025-01-13T21:19:41.532341061Z" level=error msg="Failed to destroy network for sandbox \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.532516 containerd[1541]: time="2025-01-13T21:19:41.532473991Z" level=error msg="Failed to destroy network for sandbox \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.532751 containerd[1541]: time="2025-01-13T21:19:41.532695959Z" level=error msg="encountered an error cleaning up failed sandbox \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.532751 containerd[1541]: time="2025-01-13T21:19:41.532719168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c5tjn,Uid:c9b19419-34c4-46dc-abe6-e6b8c9ad67ed,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.532992 containerd[1541]: time="2025-01-13T21:19:41.532980068Z" level=error msg="encountered an error cleaning up failed sandbox \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.533392 containerd[1541]: time="2025-01-13T21:19:41.533176165Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd958b664-5xkxk,Uid:001785fd-8c63-40ac-bd0e-8041100fbfa9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.535046 containerd[1541]: time="2025-01-13T21:19:41.534990958Z" level=error msg="Failed to destroy network for sandbox \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.535219 containerd[1541]: time="2025-01-13T21:19:41.535206646Z" level=error msg="encountered an error cleaning up failed sandbox \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.535324 containerd[1541]: time="2025-01-13T21:19:41.535275024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mc7ng,Uid:6d09daa5-3680-44fd-88d8-a2a333e97e31,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.536786 kubelet[2797]: E0113 21:19:41.532637 2797 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.536786 kubelet[2797]: E0113 21:19:41.536550 2797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8d9dc5c7-xp85p"
Jan 13 21:19:41.536786 kubelet[2797]: E0113 21:19:41.536564 2797 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8d9dc5c7-xp85p"
Jan 13 21:19:41.536786 kubelet[2797]: E0113 21:19:41.536573 2797 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.536879 kubelet[2797]: E0113 21:19:41.536595 2797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-mc7ng"
Jan 13 21:19:41.536879 kubelet[2797]: E0113 21:19:41.536607 2797 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-mc7ng"
Jan 13 21:19:41.536879 kubelet[2797]: E0113 21:19:41.536625 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-mc7ng_kube-system(6d09daa5-3680-44fd-88d8-a2a333e97e31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-mc7ng_kube-system(6d09daa5-3680-44fd-88d8-a2a333e97e31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-mc7ng" podUID="6d09daa5-3680-44fd-88d8-a2a333e97e31"
Jan 13 21:19:41.536992 kubelet[2797]: E0113 21:19:41.536647 2797 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.536992 kubelet[2797]: E0113 21:19:41.536658 2797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-c5tjn"
Jan 13 21:19:41.536992 kubelet[2797]: E0113 21:19:41.536666 2797 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-c5tjn"
Jan 13 21:19:41.537048 kubelet[2797]: E0113 21:19:41.536678 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-c5tjn_kube-system(c9b19419-34c4-46dc-abe6-e6b8c9ad67ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-c5tjn_kube-system(c9b19419-34c4-46dc-abe6-e6b8c9ad67ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-c5tjn" podUID="c9b19419-34c4-46dc-abe6-e6b8c9ad67ed"
Jan 13 21:19:41.537048 kubelet[2797]: E0113 21:19:41.536694 2797 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.537048 kubelet[2797]: E0113 21:19:41.536704 2797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd958b664-5xkxk"
Jan 13 21:19:41.537118 kubelet[2797]: E0113 21:19:41.536713 2797 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd958b664-5xkxk"
Jan 13 21:19:41.537118 kubelet[2797]: E0113 21:19:41.536725 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dd958b664-5xkxk_calico-apiserver(001785fd-8c63-40ac-bd0e-8041100fbfa9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dd958b664-5xkxk_calico-apiserver(001785fd-8c63-40ac-bd0e-8041100fbfa9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd958b664-5xkxk" podUID="001785fd-8c63-40ac-bd0e-8041100fbfa9"
Jan 13 21:19:41.537171 kubelet[2797]: E0113 21:19:41.536592 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8d9dc5c7-xp85p_calico-system(86002476-5cad-437c-a040-0e9e7c4b4ce5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8d9dc5c7-xp85p_calico-system(86002476-5cad-437c-a040-0e9e7c4b4ce5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8d9dc5c7-xp85p" podUID="86002476-5cad-437c-a040-0e9e7c4b4ce5"
Jan 13 21:19:41.537171 kubelet[2797]: E0113 21:19:41.532542 2797 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:41.537171 kubelet[2797]: E0113 21:19:41.536744 2797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd958b664-7knlx"
Jan 13 21:19:41.537244 kubelet[2797]: E0113 21:19:41.536757 2797 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd958b664-7knlx"
Jan 13 21:19:41.537244 kubelet[2797]: E0113 21:19:41.536772 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dd958b664-7knlx_calico-apiserver(4094d46d-3bd9-4482-9e42-9463597eb69a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dd958b664-7knlx_calico-apiserver(4094d46d-3bd9-4482-9e42-9463597eb69a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd958b664-7knlx" podUID="4094d46d-3bd9-4482-9e42-9463597eb69a"
Jan 13 21:19:41.886315 kubelet[2797]: I0113 21:19:41.886296 2797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad"
Jan 13 21:19:41.887843 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31-shm.mount: Deactivated successfully.
Jan 13 21:19:41.888562 kubelet[2797]: I0113 21:19:41.888406 2797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd"
Jan 13 21:19:41.891410 kubelet[2797]: I0113 21:19:41.891400 2797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15"
Jan 13 21:19:41.899384 containerd[1541]: time="2025-01-13T21:19:41.897241752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 13 21:19:41.900126 kubelet[2797]: I0113 21:19:41.900113 2797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d"
Jan 13 21:19:41.902200 kubelet[2797]: I0113 21:19:41.902189 2797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31"
Jan 13 21:19:41.939088 containerd[1541]: time="2025-01-13T21:19:41.938899125Z" level=info msg="StopPodSandbox for \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\""
Jan 13 21:19:41.939415 containerd[1541]: time="2025-01-13T21:19:41.939313763Z" level=info msg="StopPodSandbox for \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\""
Jan 13 21:19:41.940164 containerd[1541]: time="2025-01-13T21:19:41.940013704Z" level=info msg="Ensure that sandbox b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd in task-service has been cleanup successfully"
Jan 13 21:19:41.940209 containerd[1541]: time="2025-01-13T21:19:41.940199328Z" level=info msg="Ensure that sandbox 76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31 in task-service has been cleanup successfully"
Jan 13 21:19:41.940632 containerd[1541]: time="2025-01-13T21:19:41.940620551Z" level=info msg="StopPodSandbox for \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\""
Jan 13 21:19:41.941390 containerd[1541]: time="2025-01-13T21:19:41.940925816Z" level=info msg="Ensure that sandbox 022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15 in task-service has been cleanup successfully"
Jan 13 21:19:41.941546 containerd[1541]: time="2025-01-13T21:19:41.940705547Z" level=info msg="StopPodSandbox for \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\""
Jan 13 21:19:41.941670 containerd[1541]: time="2025-01-13T21:19:41.941659877Z" level=info msg="Ensure that sandbox dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad in task-service has been cleanup successfully"
Jan 13 21:19:41.943502 containerd[1541]: time="2025-01-13T21:19:41.940779389Z" level=info msg="StopPodSandbox for \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\""
Jan 13 21:19:41.943645 containerd[1541]: time="2025-01-13T21:19:41.943633767Z" level=info msg="Ensure that sandbox fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d in task-service has been cleanup successfully"
Jan 13 21:19:42.002218 containerd[1541]: time="2025-01-13T21:19:42.002188151Z" level=error msg="StopPodSandbox for \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\" failed" error="failed to destroy network for sandbox \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:42.002423 kubelet[2797]: E0113 21:19:42.002398 2797 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15"
Jan 13 21:19:42.003282 containerd[1541]: time="2025-01-13T21:19:42.003259622Z" level=error msg="StopPodSandbox for \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\" failed" error="failed to destroy network for sandbox \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:19:42.003451 kubelet[2797]: E0113 21:19:42.003353 2797 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad"
Jan 13 21:19:42.004319 kubelet[2797]: E0113 21:19:42.003398 2797 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad"}
Jan 13 21:19:42.004319 kubelet[2797]: E0113 21:19:42.004099 2797 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"001785fd-8c63-40ac-bd0e-8041100fbfa9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 13 21:19:42.004319 kubelet[2797]: E0113 21:19:42.004114 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"001785fd-8c63-40ac-bd0e-8041100fbfa9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd958b664-5xkxk" podUID="001785fd-8c63-40ac-bd0e-8041100fbfa9"
Jan 13 21:19:42.004319 kubelet[2797]: E0113 21:19:42.002436 2797 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15"}
Jan 13 21:19:42.004769 kubelet[2797]: E0113 21:19:42.004228 2797 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86002476-5cad-437c-a040-0e9e7c4b4ce5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 13 21:19:42.004769 kubelet[2797]: E0113 21:19:42.004244 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86002476-5cad-437c-a040-0e9e7c4b4ce5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox
\\\"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8d9dc5c7-xp85p" podUID="86002476-5cad-437c-a040-0e9e7c4b4ce5" Jan 13 21:19:42.007748 containerd[1541]: time="2025-01-13T21:19:42.007238569Z" level=error msg="StopPodSandbox for \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\" failed" error="failed to destroy network for sandbox \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:19:42.007841 kubelet[2797]: E0113 21:19:42.007828 2797 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Jan 13 21:19:42.007885 kubelet[2797]: E0113 21:19:42.007877 2797 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31"} Jan 13 21:19:42.007929 kubelet[2797]: E0113 21:19:42.007921 2797 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6d09daa5-3680-44fd-88d8-a2a333e97e31\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:19:42.007992 kubelet[2797]: E0113 21:19:42.007980 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6d09daa5-3680-44fd-88d8-a2a333e97e31\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-mc7ng" podUID="6d09daa5-3680-44fd-88d8-a2a333e97e31" Jan 13 21:19:42.011885 containerd[1541]: time="2025-01-13T21:19:42.011868134Z" level=error msg="StopPodSandbox for \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\" failed" error="failed to destroy network for sandbox \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:19:42.012022 kubelet[2797]: E0113 21:19:42.012002 2797 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Jan 13 21:19:42.012070 kubelet[2797]: E0113 21:19:42.012025 2797 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d"} Jan 13 21:19:42.012070 kubelet[2797]: E0113 21:19:42.012043 2797 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9b19419-34c4-46dc-abe6-e6b8c9ad67ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:19:42.012070 kubelet[2797]: E0113 21:19:42.012054 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9b19419-34c4-46dc-abe6-e6b8c9ad67ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-c5tjn" podUID="c9b19419-34c4-46dc-abe6-e6b8c9ad67ed" Jan 13 21:19:42.014038 containerd[1541]: time="2025-01-13T21:19:42.014025069Z" level=error msg="StopPodSandbox for \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\" failed" error="failed to destroy network for sandbox \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:19:42.014174 kubelet[2797]: E0113 21:19:42.014145 2797 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network 
for sandbox \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Jan 13 21:19:42.014221 kubelet[2797]: E0113 21:19:42.014213 2797 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd"} Jan 13 21:19:42.014295 kubelet[2797]: E0113 21:19:42.014269 2797 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4094d46d-3bd9-4482-9e42-9463597eb69a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:19:42.014295 kubelet[2797]: E0113 21:19:42.014283 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4094d46d-3bd9-4482-9e42-9463597eb69a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd958b664-7knlx" podUID="4094d46d-3bd9-4482-9e42-9463597eb69a" Jan 13 21:19:42.781464 systemd[1]: Created slice kubepods-besteffort-pod67967985_cb3a_4c06_87e4_f05e417d0670.slice - libcontainer container kubepods-besteffort-pod67967985_cb3a_4c06_87e4_f05e417d0670.slice. 
Jan 13 21:19:42.794104 containerd[1541]: time="2025-01-13T21:19:42.794065270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-459cr,Uid:67967985-cb3a-4c06-87e4-f05e417d0670,Namespace:calico-system,Attempt:0,}" Jan 13 21:19:42.943170 containerd[1541]: time="2025-01-13T21:19:42.941702812Z" level=error msg="Failed to destroy network for sandbox \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:19:42.943170 containerd[1541]: time="2025-01-13T21:19:42.943089396Z" level=error msg="encountered an error cleaning up failed sandbox \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:19:42.943170 containerd[1541]: time="2025-01-13T21:19:42.943127937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-459cr,Uid:67967985-cb3a-4c06-87e4-f05e417d0670,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:19:42.942964 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8-shm.mount: Deactivated successfully. 
Jan 13 21:19:42.943556 kubelet[2797]: E0113 21:19:42.943250 2797 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:19:42.943556 kubelet[2797]: E0113 21:19:42.943283 2797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-459cr" Jan 13 21:19:42.943556 kubelet[2797]: E0113 21:19:42.943298 2797 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-459cr" Jan 13 21:19:42.943707 kubelet[2797]: E0113 21:19:42.943323 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-459cr_calico-system(67967985-cb3a-4c06-87e4-f05e417d0670)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-459cr_calico-system(67967985-cb3a-4c06-87e4-f05e417d0670)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-459cr" podUID="67967985-cb3a-4c06-87e4-f05e417d0670" Jan 13 21:19:43.905691 kubelet[2797]: I0113 21:19:43.905625 2797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" Jan 13 21:19:43.906256 containerd[1541]: time="2025-01-13T21:19:43.906110477Z" level=info msg="StopPodSandbox for \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\"" Jan 13 21:19:43.911627 containerd[1541]: time="2025-01-13T21:19:43.911503241Z" level=info msg="Ensure that sandbox c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8 in task-service has been cleanup successfully" Jan 13 21:19:43.975746 containerd[1541]: time="2025-01-13T21:19:43.975716903Z" level=error msg="StopPodSandbox for \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\" failed" error="failed to destroy network for sandbox \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:19:43.975981 kubelet[2797]: E0113 21:19:43.975866 2797 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" Jan 13 21:19:43.975981 kubelet[2797]: E0113 21:19:43.975897 2797 kuberuntime_manager.go:1375] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8"} Jan 13 21:19:43.975981 kubelet[2797]: E0113 21:19:43.975919 2797 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67967985-cb3a-4c06-87e4-f05e417d0670\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:19:43.975981 kubelet[2797]: E0113 21:19:43.975934 2797 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67967985-cb3a-4c06-87e4-f05e417d0670\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-459cr" podUID="67967985-cb3a-4c06-87e4-f05e417d0670" Jan 13 21:19:46.693711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3068347665.mount: Deactivated successfully. 
Jan 13 21:19:46.804712 containerd[1541]: time="2025-01-13T21:19:46.801130229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 21:19:46.810362 containerd[1541]: time="2025-01-13T21:19:46.810103139Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 4.908137261s" Jan 13 21:19:46.810362 containerd[1541]: time="2025-01-13T21:19:46.810128416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 21:19:46.817255 containerd[1541]: time="2025-01-13T21:19:46.817219469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:46.842550 containerd[1541]: time="2025-01-13T21:19:46.841504331Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:46.844292 containerd[1541]: time="2025-01-13T21:19:46.844268693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:46.876595 containerd[1541]: time="2025-01-13T21:19:46.876556474Z" level=info msg="CreateContainer within sandbox \"b5cc9564a98b8f479fe8cb12e4deda2cd8113b636ca75b88294b34f8f24d073e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:19:46.906901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount388026871.mount: 
Deactivated successfully. Jan 13 21:19:46.913878 containerd[1541]: time="2025-01-13T21:19:46.913847315Z" level=info msg="CreateContainer within sandbox \"b5cc9564a98b8f479fe8cb12e4deda2cd8113b636ca75b88294b34f8f24d073e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a7a9460929ee6341af09e3c7a604c4a9fb596de75a26f96a1ecae5c74bd57f24\"" Jan 13 21:19:46.919386 containerd[1541]: time="2025-01-13T21:19:46.919230545Z" level=info msg="StartContainer for \"a7a9460929ee6341af09e3c7a604c4a9fb596de75a26f96a1ecae5c74bd57f24\"" Jan 13 21:19:47.030470 systemd[1]: Started cri-containerd-a7a9460929ee6341af09e3c7a604c4a9fb596de75a26f96a1ecae5c74bd57f24.scope - libcontainer container a7a9460929ee6341af09e3c7a604c4a9fb596de75a26f96a1ecae5c74bd57f24. Jan 13 21:19:47.053681 containerd[1541]: time="2025-01-13T21:19:47.053657288Z" level=info msg="StartContainer for \"a7a9460929ee6341af09e3c7a604c4a9fb596de75a26f96a1ecae5c74bd57f24\" returns successfully" Jan 13 21:19:47.256470 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 21:19:47.258981 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 13 21:19:53.778324 containerd[1541]: time="2025-01-13T21:19:53.778289385Z" level=info msg="StopPodSandbox for \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\"" Jan 13 21:19:53.778662 containerd[1541]: time="2025-01-13T21:19:53.778564988Z" level=info msg="StopPodSandbox for \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\"" Jan 13 21:19:53.852538 kubelet[2797]: I0113 21:19:53.843687 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6574n" podStartSLOduration=8.497150253 podStartE2EDuration="21.828915577s" podCreationTimestamp="2025-01-13 21:19:32 +0000 UTC" firstStartedPulling="2025-01-13 21:19:33.482124506 +0000 UTC m=+20.817730274" lastFinishedPulling="2025-01-13 21:19:46.813889831 +0000 UTC m=+34.149495598" observedRunningTime="2025-01-13 21:19:47.996897974 +0000 UTC m=+35.332503744" watchObservedRunningTime="2025-01-13 21:19:53.828915577 +0000 UTC m=+41.164521358" Jan 13 21:19:54.044830 containerd[1541]: 2025-01-13 21:19:53.827 [INFO][4015] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Jan 13 21:19:54.044830 containerd[1541]: 2025-01-13 21:19:53.827 [INFO][4015] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" iface="eth0" netns="/var/run/netns/cni-f736563c-12cf-21ca-5fcf-3519b85930fb" Jan 13 21:19:54.044830 containerd[1541]: 2025-01-13 21:19:53.828 [INFO][4015] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" iface="eth0" netns="/var/run/netns/cni-f736563c-12cf-21ca-5fcf-3519b85930fb" Jan 13 21:19:54.044830 containerd[1541]: 2025-01-13 21:19:53.829 [INFO][4015] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" iface="eth0" netns="/var/run/netns/cni-f736563c-12cf-21ca-5fcf-3519b85930fb" Jan 13 21:19:54.044830 containerd[1541]: 2025-01-13 21:19:53.829 [INFO][4015] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Jan 13 21:19:54.044830 containerd[1541]: 2025-01-13 21:19:53.829 [INFO][4015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Jan 13 21:19:54.044830 containerd[1541]: 2025-01-13 21:19:54.027 [INFO][4026] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" HandleID="k8s-pod-network.76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Workload="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" Jan 13 21:19:54.044830 containerd[1541]: 2025-01-13 21:19:54.029 [INFO][4026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:19:54.044830 containerd[1541]: 2025-01-13 21:19:54.030 [INFO][4026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:19:54.044830 containerd[1541]: 2025-01-13 21:19:54.040 [WARNING][4026] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" HandleID="k8s-pod-network.76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Workload="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" Jan 13 21:19:54.044830 containerd[1541]: 2025-01-13 21:19:54.040 [INFO][4026] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" HandleID="k8s-pod-network.76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Workload="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" Jan 13 21:19:54.044830 containerd[1541]: 2025-01-13 21:19:54.041 [INFO][4026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:19:54.044830 containerd[1541]: 2025-01-13 21:19:54.043 [INFO][4015] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Jan 13 21:19:54.046472 systemd[1]: run-netns-cni\x2df736563c\x2d12cf\x2d21ca\x2d5fcf\x2d3519b85930fb.mount: Deactivated successfully. 
Jan 13 21:19:54.050043 containerd[1541]: time="2025-01-13T21:19:54.050014036Z" level=info msg="TearDown network for sandbox \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\" successfully" Jan 13 21:19:54.050141 containerd[1541]: time="2025-01-13T21:19:54.050045037Z" level=info msg="StopPodSandbox for \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\" returns successfully" Jan 13 21:19:54.050640 containerd[1541]: time="2025-01-13T21:19:54.050622526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mc7ng,Uid:6d09daa5-3680-44fd-88d8-a2a333e97e31,Namespace:kube-system,Attempt:1,}" Jan 13 21:19:54.054763 containerd[1541]: 2025-01-13 21:19:53.826 [INFO][4014] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Jan 13 21:19:54.054763 containerd[1541]: 2025-01-13 21:19:53.827 [INFO][4014] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" iface="eth0" netns="/var/run/netns/cni-80ced961-67fb-e494-b267-54cef3093fb0" Jan 13 21:19:54.054763 containerd[1541]: 2025-01-13 21:19:53.827 [INFO][4014] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" iface="eth0" netns="/var/run/netns/cni-80ced961-67fb-e494-b267-54cef3093fb0" Jan 13 21:19:54.054763 containerd[1541]: 2025-01-13 21:19:53.829 [INFO][4014] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" iface="eth0" netns="/var/run/netns/cni-80ced961-67fb-e494-b267-54cef3093fb0" Jan 13 21:19:54.054763 containerd[1541]: 2025-01-13 21:19:53.829 [INFO][4014] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Jan 13 21:19:54.054763 containerd[1541]: 2025-01-13 21:19:53.829 [INFO][4014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Jan 13 21:19:54.054763 containerd[1541]: 2025-01-13 21:19:54.027 [INFO][4027] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" HandleID="k8s-pod-network.b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Workload="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:19:54.054763 containerd[1541]: 2025-01-13 21:19:54.029 [INFO][4027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:19:54.054763 containerd[1541]: 2025-01-13 21:19:54.041 [INFO][4027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:19:54.054763 containerd[1541]: 2025-01-13 21:19:54.048 [WARNING][4027] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" HandleID="k8s-pod-network.b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Workload="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:19:54.054763 containerd[1541]: 2025-01-13 21:19:54.048 [INFO][4027] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" HandleID="k8s-pod-network.b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Workload="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:19:54.054763 containerd[1541]: 2025-01-13 21:19:54.050 [INFO][4027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:19:54.054763 containerd[1541]: 2025-01-13 21:19:54.052 [INFO][4014] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Jan 13 21:19:54.054763 containerd[1541]: time="2025-01-13T21:19:54.053522226Z" level=info msg="TearDown network for sandbox \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\" successfully" Jan 13 21:19:54.054763 containerd[1541]: time="2025-01-13T21:19:54.053532763Z" level=info msg="StopPodSandbox for \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\" returns successfully" Jan 13 21:19:54.054751 systemd[1]: run-netns-cni\x2d80ced961\x2d67fb\x2de494\x2db267\x2d54cef3093fb0.mount: Deactivated successfully. 
Jan 13 21:19:54.057474 containerd[1541]: time="2025-01-13T21:19:54.057241228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd958b664-7knlx,Uid:4094d46d-3bd9-4482-9e42-9463597eb69a,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:19:54.194678 systemd-networkd[1447]: cali63cae819164: Link UP Jan 13 21:19:54.195267 systemd-networkd[1447]: cali63cae819164: Gained carrier Jan 13 21:19:54.225405 systemd-networkd[1447]: cali9c1649e9d4a: Link UP Jan 13 21:19:54.225851 systemd-networkd[1447]: cali9c1649e9d4a: Gained carrier Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.093 [INFO][4052] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.103 [INFO][4052] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0 calico-apiserver-dd958b664- calico-apiserver 4094d46d-3bd9-4482-9e42-9463597eb69a 725 0 2025-01-13 21:19:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dd958b664 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-dd958b664-7knlx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9c1649e9d4a [] []}} ContainerID="d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-7knlx" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--7knlx-" Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.103 [INFO][4052] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-7knlx" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.150 [INFO][4083] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" HandleID="k8s-pod-network.d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" Workload="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.162 [INFO][4083] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" HandleID="k8s-pod-network.d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" Workload="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000101540), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-dd958b664-7knlx", "timestamp":"2025-01-13 21:19:54.15032124 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.162 [INFO][4083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.178 [INFO][4083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.178 [INFO][4083] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.180 [INFO][4083] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" host="localhost" Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.183 [INFO][4083] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.186 [INFO][4083] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.188 [INFO][4083] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.190 [INFO][4083] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.190 [INFO][4083] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" host="localhost" Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.191 [INFO][4083] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7 Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.198 [INFO][4083] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" host="localhost" Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.222 [INFO][4083] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" host="localhost" Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.222 [INFO][4083] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" host="localhost" Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.222 [INFO][4083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:19:54.249128 containerd[1541]: 2025-01-13 21:19:54.222 [INFO][4083] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" HandleID="k8s-pod-network.d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" Workload="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:19:54.263212 containerd[1541]: 2025-01-13 21:19:54.223 [INFO][4052] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-7knlx" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0", GenerateName:"calico-apiserver-dd958b664-", Namespace:"calico-apiserver", SelfLink:"", UID:"4094d46d-3bd9-4482-9e42-9463597eb69a", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd958b664", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-dd958b664-7knlx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c1649e9d4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:19:54.263212 containerd[1541]: 2025-01-13 21:19:54.223 [INFO][4052] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-7knlx" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:19:54.263212 containerd[1541]: 2025-01-13 21:19:54.223 [INFO][4052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c1649e9d4a ContainerID="d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-7knlx" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:19:54.263212 containerd[1541]: 2025-01-13 21:19:54.225 [INFO][4052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-7knlx" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:19:54.263212 containerd[1541]: 2025-01-13 21:19:54.226 [INFO][4052] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-7knlx" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0", GenerateName:"calico-apiserver-dd958b664-", Namespace:"calico-apiserver", SelfLink:"", UID:"4094d46d-3bd9-4482-9e42-9463597eb69a", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd958b664", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7", Pod:"calico-apiserver-dd958b664-7knlx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c1649e9d4a", MAC:"72:cf:58:a4:ff:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:19:54.263212 containerd[1541]: 2025-01-13 21:19:54.247 [INFO][4052] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7" 
Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-7knlx" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:19:54.263212 containerd[1541]: 2025-01-13 21:19:54.094 [INFO][4044] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.105 [INFO][4044] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0 coredns-7db6d8ff4d- kube-system 6d09daa5-3680-44fd-88d8-a2a333e97e31 726 0 2025-01-13 21:19:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-mc7ng eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali63cae819164 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mc7ng" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mc7ng-" Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.105 [INFO][4044] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mc7ng" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.147 [INFO][4082] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" HandleID="k8s-pod-network.768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" Workload="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.156 [INFO][4082] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" HandleID="k8s-pod-network.768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" Workload="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003187f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-mc7ng", "timestamp":"2025-01-13 21:19:54.147311557 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.156 [INFO][4082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.156 [INFO][4082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.156 [INFO][4082] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.158 [INFO][4082] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" host="localhost" Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.166 [INFO][4082] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.168 [INFO][4082] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.169 [INFO][4082] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.171 [INFO][4082] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.172 [INFO][4082] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" host="localhost" Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.174 [INFO][4082] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.176 [INFO][4082] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" host="localhost" Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.178 [INFO][4082] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" host="localhost" Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.178 [INFO][4082] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" host="localhost" Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.178 [INFO][4082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:19:54.263498 containerd[1541]: 2025-01-13 21:19:54.178 [INFO][4082] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" HandleID="k8s-pod-network.768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" Workload="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" Jan 13 21:19:54.269534 containerd[1541]: 2025-01-13 21:19:54.180 [INFO][4044] cni-plugin/k8s.go 386: Populated endpoint ContainerID="768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mc7ng" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6d09daa5-3680-44fd-88d8-a2a333e97e31", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-mc7ng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63cae819164", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:19:54.269534 containerd[1541]: 2025-01-13 21:19:54.180 [INFO][4044] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mc7ng" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" Jan 13 21:19:54.269534 containerd[1541]: 2025-01-13 21:19:54.180 [INFO][4044] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63cae819164 ContainerID="768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mc7ng" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" Jan 13 21:19:54.269534 containerd[1541]: 2025-01-13 21:19:54.195 [INFO][4044] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mc7ng" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" Jan 13 21:19:54.269534 containerd[1541]: 2025-01-13 21:19:54.195 [INFO][4044] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mc7ng" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6d09daa5-3680-44fd-88d8-a2a333e97e31", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb", Pod:"coredns-7db6d8ff4d-mc7ng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63cae819164", MAC:"36:ae:42:43:26:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:19:54.269534 containerd[1541]: 2025-01-13 21:19:54.247 [INFO][4044] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-mc7ng" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" Jan 13 21:19:54.306231 containerd[1541]: time="2025-01-13T21:19:54.305929269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:19:54.306231 containerd[1541]: time="2025-01-13T21:19:54.306087864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:19:54.306231 containerd[1541]: time="2025-01-13T21:19:54.306102024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:54.312188 containerd[1541]: time="2025-01-13T21:19:54.306593830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:54.312188 containerd[1541]: time="2025-01-13T21:19:54.309989408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:19:54.312188 containerd[1541]: time="2025-01-13T21:19:54.310027222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:19:54.312188 containerd[1541]: time="2025-01-13T21:19:54.310039022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:54.312188 containerd[1541]: time="2025-01-13T21:19:54.310220471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:54.325453 systemd[1]: Started cri-containerd-d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7.scope - libcontainer container d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7. 
Jan 13 21:19:54.327573 systemd[1]: Started cri-containerd-768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb.scope - libcontainer container 768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb. Jan 13 21:19:54.336168 systemd-resolved[1451]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:19:54.337432 systemd-resolved[1451]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:19:54.377090 containerd[1541]: time="2025-01-13T21:19:54.377050302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mc7ng,Uid:6d09daa5-3680-44fd-88d8-a2a333e97e31,Namespace:kube-system,Attempt:1,} returns sandbox id \"768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb\"" Jan 13 21:19:54.377180 containerd[1541]: time="2025-01-13T21:19:54.377167080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd958b664-7knlx,Uid:4094d46d-3bd9-4482-9e42-9463597eb69a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7\"" Jan 13 21:19:54.378265 containerd[1541]: time="2025-01-13T21:19:54.378250751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:19:54.388757 containerd[1541]: time="2025-01-13T21:19:54.388729106Z" level=info msg="CreateContainer within sandbox \"768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:19:54.568785 containerd[1541]: time="2025-01-13T21:19:54.568718892Z" level=info msg="CreateContainer within sandbox \"768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f733909f27febe72cb942cfe57cd019dca187aff3c12a49937e216a22c0129a\"" Jan 13 21:19:54.569618 containerd[1541]: time="2025-01-13T21:19:54.569503096Z" 
level=info msg="StartContainer for \"5f733909f27febe72cb942cfe57cd019dca187aff3c12a49937e216a22c0129a\"" Jan 13 21:19:54.591487 systemd[1]: Started cri-containerd-5f733909f27febe72cb942cfe57cd019dca187aff3c12a49937e216a22c0129a.scope - libcontainer container 5f733909f27febe72cb942cfe57cd019dca187aff3c12a49937e216a22c0129a. Jan 13 21:19:54.748528 containerd[1541]: time="2025-01-13T21:19:54.748497337Z" level=info msg="StartContainer for \"5f733909f27febe72cb942cfe57cd019dca187aff3c12a49937e216a22c0129a\" returns successfully" Jan 13 21:19:54.778299 containerd[1541]: time="2025-01-13T21:19:54.778018904Z" level=info msg="StopPodSandbox for \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\"" Jan 13 21:19:54.778551 containerd[1541]: time="2025-01-13T21:19:54.778534321Z" level=info msg="StopPodSandbox for \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\"" Jan 13 21:19:54.850534 containerd[1541]: 2025-01-13 21:19:54.817 [INFO][4261] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Jan 13 21:19:54.850534 containerd[1541]: 2025-01-13 21:19:54.818 [INFO][4261] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" iface="eth0" netns="/var/run/netns/cni-8a0da78d-10c8-717c-a4b3-b094e2aa1b16" Jan 13 21:19:54.850534 containerd[1541]: 2025-01-13 21:19:54.818 [INFO][4261] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" iface="eth0" netns="/var/run/netns/cni-8a0da78d-10c8-717c-a4b3-b094e2aa1b16" Jan 13 21:19:54.850534 containerd[1541]: 2025-01-13 21:19:54.818 [INFO][4261] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" iface="eth0" netns="/var/run/netns/cni-8a0da78d-10c8-717c-a4b3-b094e2aa1b16" Jan 13 21:19:54.850534 containerd[1541]: 2025-01-13 21:19:54.818 [INFO][4261] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Jan 13 21:19:54.850534 containerd[1541]: 2025-01-13 21:19:54.819 [INFO][4261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Jan 13 21:19:54.850534 containerd[1541]: 2025-01-13 21:19:54.837 [INFO][4273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" HandleID="k8s-pod-network.fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Workload="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 21:19:54.850534 containerd[1541]: 2025-01-13 21:19:54.837 [INFO][4273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:19:54.850534 containerd[1541]: 2025-01-13 21:19:54.837 [INFO][4273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:19:54.850534 containerd[1541]: 2025-01-13 21:19:54.845 [WARNING][4273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" HandleID="k8s-pod-network.fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Workload="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 21:19:54.850534 containerd[1541]: 2025-01-13 21:19:54.845 [INFO][4273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" HandleID="k8s-pod-network.fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Workload="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 21:19:54.850534 containerd[1541]: 2025-01-13 21:19:54.847 [INFO][4273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:19:54.850534 containerd[1541]: 2025-01-13 21:19:54.848 [INFO][4261] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Jan 13 21:19:54.850534 containerd[1541]: time="2025-01-13T21:19:54.850511612Z" level=info msg="TearDown network for sandbox \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\" successfully" Jan 13 21:19:54.850534 containerd[1541]: time="2025-01-13T21:19:54.850526055Z" level=info msg="StopPodSandbox for \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\" returns successfully" Jan 13 21:19:54.851741 containerd[1541]: time="2025-01-13T21:19:54.851680771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c5tjn,Uid:c9b19419-34c4-46dc-abe6-e6b8c9ad67ed,Namespace:kube-system,Attempt:1,}" Jan 13 21:19:54.855198 containerd[1541]: 2025-01-13 21:19:54.824 [INFO][4257] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" Jan 13 21:19:54.855198 containerd[1541]: 2025-01-13 21:19:54.824 [INFO][4257] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" iface="eth0" netns="/var/run/netns/cni-eb535e81-42db-8322-3f4c-b467d2d76ef2" Jan 13 21:19:54.855198 containerd[1541]: 2025-01-13 21:19:54.824 [INFO][4257] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" iface="eth0" netns="/var/run/netns/cni-eb535e81-42db-8322-3f4c-b467d2d76ef2" Jan 13 21:19:54.855198 containerd[1541]: 2025-01-13 21:19:54.824 [INFO][4257] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" iface="eth0" netns="/var/run/netns/cni-eb535e81-42db-8322-3f4c-b467d2d76ef2" Jan 13 21:19:54.855198 containerd[1541]: 2025-01-13 21:19:54.824 [INFO][4257] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" Jan 13 21:19:54.855198 containerd[1541]: 2025-01-13 21:19:54.825 [INFO][4257] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" Jan 13 21:19:54.855198 containerd[1541]: 2025-01-13 21:19:54.847 [INFO][4277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" HandleID="k8s-pod-network.c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" Workload="localhost-k8s-csi--node--driver--459cr-eth0" Jan 13 21:19:54.855198 containerd[1541]: 2025-01-13 21:19:54.847 [INFO][4277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:19:54.855198 containerd[1541]: 2025-01-13 21:19:54.847 [INFO][4277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:19:54.855198 containerd[1541]: 2025-01-13 21:19:54.851 [WARNING][4277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" HandleID="k8s-pod-network.c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" Workload="localhost-k8s-csi--node--driver--459cr-eth0" Jan 13 21:19:54.855198 containerd[1541]: 2025-01-13 21:19:54.851 [INFO][4277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" HandleID="k8s-pod-network.c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" Workload="localhost-k8s-csi--node--driver--459cr-eth0" Jan 13 21:19:54.855198 containerd[1541]: 2025-01-13 21:19:54.853 [INFO][4277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:19:54.855198 containerd[1541]: 2025-01-13 21:19:54.854 [INFO][4257] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" Jan 13 21:19:54.858117 containerd[1541]: time="2025-01-13T21:19:54.855277809Z" level=info msg="TearDown network for sandbox \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\" successfully" Jan 13 21:19:54.858117 containerd[1541]: time="2025-01-13T21:19:54.855289826Z" level=info msg="StopPodSandbox for \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\" returns successfully" Jan 13 21:19:54.858117 containerd[1541]: time="2025-01-13T21:19:54.855600937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-459cr,Uid:67967985-cb3a-4c06-87e4-f05e417d0670,Namespace:calico-system,Attempt:1,}" Jan 13 21:19:54.944565 systemd-networkd[1447]: calia91cda7ee6f: Link UP Jan 13 21:19:54.945189 systemd-networkd[1447]: calia91cda7ee6f: Gained carrier Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.881 [INFO][4298] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.888 [INFO][4298] cni-plugin/plugin.go 325: Calico 
CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--459cr-eth0 csi-node-driver- calico-system 67967985-cb3a-4c06-87e4-f05e417d0670 742 0 2025-01-13 21:19:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-459cr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia91cda7ee6f [] []}} ContainerID="de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" Namespace="calico-system" Pod="csi-node-driver-459cr" WorkloadEndpoint="localhost-k8s-csi--node--driver--459cr-" Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.888 [INFO][4298] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" Namespace="calico-system" Pod="csi-node-driver-459cr" WorkloadEndpoint="localhost-k8s-csi--node--driver--459cr-eth0" Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.909 [INFO][4312] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" HandleID="k8s-pod-network.de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" Workload="localhost-k8s-csi--node--driver--459cr-eth0" Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.922 [INFO][4312] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" HandleID="k8s-pod-network.de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" Workload="localhost-k8s-csi--node--driver--459cr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319930), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-459cr", "timestamp":"2025-01-13 21:19:54.909015048 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.922 [INFO][4312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.922 [INFO][4312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.922 [INFO][4312] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.924 [INFO][4312] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" host="localhost" Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.928 [INFO][4312] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.932 [INFO][4312] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.933 [INFO][4312] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.934 [INFO][4312] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.934 [INFO][4312] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" host="localhost" Jan 13 21:19:54.961201 
containerd[1541]: 2025-01-13 21:19:54.935 [INFO][4312] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.937 [INFO][4312] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" host="localhost" Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.941 [INFO][4312] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" host="localhost" Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.941 [INFO][4312] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" host="localhost" Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.941 [INFO][4312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:19:54.961201 containerd[1541]: 2025-01-13 21:19:54.941 [INFO][4312] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" HandleID="k8s-pod-network.de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" Workload="localhost-k8s-csi--node--driver--459cr-eth0" Jan 13 21:19:54.961731 containerd[1541]: 2025-01-13 21:19:54.942 [INFO][4298] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" Namespace="calico-system" Pod="csi-node-driver-459cr" WorkloadEndpoint="localhost-k8s-csi--node--driver--459cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--459cr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67967985-cb3a-4c06-87e4-f05e417d0670", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-459cr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia91cda7ee6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:19:54.961731 containerd[1541]: 2025-01-13 21:19:54.943 [INFO][4298] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" Namespace="calico-system" Pod="csi-node-driver-459cr" WorkloadEndpoint="localhost-k8s-csi--node--driver--459cr-eth0" Jan 13 21:19:54.961731 containerd[1541]: 2025-01-13 21:19:54.943 [INFO][4298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia91cda7ee6f ContainerID="de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" Namespace="calico-system" Pod="csi-node-driver-459cr" WorkloadEndpoint="localhost-k8s-csi--node--driver--459cr-eth0" Jan 13 21:19:54.961731 containerd[1541]: 2025-01-13 21:19:54.945 [INFO][4298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" Namespace="calico-system" Pod="csi-node-driver-459cr" WorkloadEndpoint="localhost-k8s-csi--node--driver--459cr-eth0" Jan 13 21:19:54.961731 containerd[1541]: 2025-01-13 21:19:54.945 [INFO][4298] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" Namespace="calico-system" Pod="csi-node-driver-459cr" WorkloadEndpoint="localhost-k8s-csi--node--driver--459cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--459cr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67967985-cb3a-4c06-87e4-f05e417d0670", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 32, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a", Pod:"csi-node-driver-459cr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia91cda7ee6f", MAC:"3e:09:29:10:a2:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:19:54.961731 containerd[1541]: 2025-01-13 21:19:54.957 [INFO][4298] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a" Namespace="calico-system" Pod="csi-node-driver-459cr" WorkloadEndpoint="localhost-k8s-csi--node--driver--459cr-eth0" Jan 13 21:19:54.979518 systemd-networkd[1447]: cali4961996756a: Link UP Jan 13 21:19:54.980646 systemd-networkd[1447]: cali4961996756a: Gained carrier Jan 13 21:19:54.991249 containerd[1541]: time="2025-01-13T21:19:54.991130819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:19:54.991249 containerd[1541]: time="2025-01-13T21:19:54.991167412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:19:54.991970 containerd[1541]: time="2025-01-13T21:19:54.991175767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:54.991970 containerd[1541]: time="2025-01-13T21:19:54.991224306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:54.994013 kubelet[2797]: I0113 21:19:54.993599 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mc7ng" podStartSLOduration=27.993586273 podStartE2EDuration="27.993586273s" podCreationTimestamp="2025-01-13 21:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:54.987482204 +0000 UTC m=+42.323087972" watchObservedRunningTime="2025-01-13 21:19:54.993586273 +0000 UTC m=+42.329192046" Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.880 [INFO][4287] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.889 [INFO][4287] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0 coredns-7db6d8ff4d- kube-system c9b19419-34c4-46dc-abe6-e6b8c9ad67ed 741 0 2025-01-13 21:19:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-c5tjn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4961996756a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-c5tjn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c5tjn-" Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.889 [INFO][4287] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c5tjn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.911 [INFO][4316] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" HandleID="k8s-pod-network.e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" Workload="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.927 [INFO][4316] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" HandleID="k8s-pod-network.e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" Workload="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318e90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-c5tjn", "timestamp":"2025-01-13 21:19:54.911706787 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.927 [INFO][4316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.941 [INFO][4316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.941 [INFO][4316] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.942 [INFO][4316] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" host="localhost" Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.950 [INFO][4316] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.956 [INFO][4316] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.959 [INFO][4316] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.963 [INFO][4316] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.963 [INFO][4316] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" host="localhost" Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.964 [INFO][4316] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.966 [INFO][4316] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" host="localhost" Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.972 [INFO][4316] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" host="localhost" Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.972 [INFO][4316] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" host="localhost" Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.972 [INFO][4316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:19:54.996277 containerd[1541]: 2025-01-13 21:19:54.972 [INFO][4316] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" HandleID="k8s-pod-network.e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" Workload="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 21:19:54.997155 containerd[1541]: 2025-01-13 21:19:54.975 [INFO][4287] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c5tjn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c9b19419-34c4-46dc-abe6-e6b8c9ad67ed", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-c5tjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4961996756a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:19:54.997155 containerd[1541]: 2025-01-13 21:19:54.975 [INFO][4287] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c5tjn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 21:19:54.997155 containerd[1541]: 2025-01-13 21:19:54.975 [INFO][4287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4961996756a ContainerID="e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c5tjn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 21:19:54.997155 containerd[1541]: 2025-01-13 21:19:54.981 [INFO][4287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c5tjn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 
21:19:54.997155 containerd[1541]: 2025-01-13 21:19:54.981 [INFO][4287] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c5tjn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c9b19419-34c4-46dc-abe6-e6b8c9ad67ed", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab", Pod:"coredns-7db6d8ff4d-c5tjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4961996756a", MAC:"1e:d3:aa:42:ff:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:19:54.997155 containerd[1541]: 2025-01-13 21:19:54.994 [INFO][4287] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c5tjn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 21:19:55.012292 systemd[1]: Started cri-containerd-de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a.scope - libcontainer container de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a. Jan 13 21:19:55.021090 containerd[1541]: time="2025-01-13T21:19:55.020814927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:19:55.021311 containerd[1541]: time="2025-01-13T21:19:55.021053305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:19:55.021633 containerd[1541]: time="2025-01-13T21:19:55.021527852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:55.021954 containerd[1541]: time="2025-01-13T21:19:55.021831134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:55.026258 systemd-resolved[1451]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:19:55.037764 containerd[1541]: time="2025-01-13T21:19:55.037631166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-459cr,Uid:67967985-cb3a-4c06-87e4-f05e417d0670,Namespace:calico-system,Attempt:1,} returns sandbox id \"de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a\"" Jan 13 21:19:55.039618 systemd[1]: Started cri-containerd-e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab.scope - libcontainer container e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab. Jan 13 21:19:55.051075 systemd[1]: run-netns-cni\x2deb535e81\x2d42db\x2d8322\x2d3f4c\x2db467d2d76ef2.mount: Deactivated successfully. Jan 13 21:19:55.051128 systemd[1]: run-netns-cni\x2d8a0da78d\x2d10c8\x2d717c\x2da4b3\x2db094e2aa1b16.mount: Deactivated successfully. Jan 13 21:19:55.056678 systemd-resolved[1451]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:19:55.076421 containerd[1541]: time="2025-01-13T21:19:55.076323436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c5tjn,Uid:c9b19419-34c4-46dc-abe6-e6b8c9ad67ed,Namespace:kube-system,Attempt:1,} returns sandbox id \"e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab\"" Jan 13 21:19:55.079487 containerd[1541]: time="2025-01-13T21:19:55.079438670Z" level=info msg="CreateContainer within sandbox \"e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:19:55.086570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3081664585.mount: Deactivated successfully. 
Jan 13 21:19:55.088932 containerd[1541]: time="2025-01-13T21:19:55.088849062Z" level=info msg="CreateContainer within sandbox \"e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"881d8b2c6e2acf82375628f93635f0246a391da76926d9c2013e588f36072968\"" Jan 13 21:19:55.089380 containerd[1541]: time="2025-01-13T21:19:55.089150134Z" level=info msg="StartContainer for \"881d8b2c6e2acf82375628f93635f0246a391da76926d9c2013e588f36072968\"" Jan 13 21:19:55.112810 systemd[1]: Started cri-containerd-881d8b2c6e2acf82375628f93635f0246a391da76926d9c2013e588f36072968.scope - libcontainer container 881d8b2c6e2acf82375628f93635f0246a391da76926d9c2013e588f36072968. Jan 13 21:19:55.128927 containerd[1541]: time="2025-01-13T21:19:55.128907737Z" level=info msg="StartContainer for \"881d8b2c6e2acf82375628f93635f0246a391da76926d9c2013e588f36072968\" returns successfully" Jan 13 21:19:55.252661 systemd-networkd[1447]: cali63cae819164: Gained IPv6LL Jan 13 21:19:55.989270 kubelet[2797]: I0113 21:19:55.988708 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-c5tjn" podStartSLOduration=28.988694176 podStartE2EDuration="28.988694176s" podCreationTimestamp="2025-01-13 21:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:55.987996769 +0000 UTC m=+43.323602554" watchObservedRunningTime="2025-01-13 21:19:55.988694176 +0000 UTC m=+43.324299956" Jan 13 21:19:56.019482 systemd-networkd[1447]: cali9c1649e9d4a: Gained IPv6LL Jan 13 21:19:56.531552 systemd-networkd[1447]: calia91cda7ee6f: Gained IPv6LL Jan 13 21:19:56.675397 containerd[1541]: time="2025-01-13T21:19:56.675319733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:56.676513 
containerd[1541]: time="2025-01-13T21:19:56.676482267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 21:19:56.677445 containerd[1541]: time="2025-01-13T21:19:56.677412258Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:56.678855 containerd[1541]: time="2025-01-13T21:19:56.678832990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:56.679640 containerd[1541]: time="2025-01-13T21:19:56.679519120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.301249513s" Jan 13 21:19:56.679640 containerd[1541]: time="2025-01-13T21:19:56.679538899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 21:19:56.680462 containerd[1541]: time="2025-01-13T21:19:56.680443027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:19:56.682175 containerd[1541]: time="2025-01-13T21:19:56.682143447Z" level=info msg="CreateContainer within sandbox \"d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:19:56.701444 containerd[1541]: time="2025-01-13T21:19:56.701429088Z" level=info msg="CreateContainer within sandbox 
\"d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"40b704fc961d7f935c36f4de92f99dd0650fbad7fef93d474848232a21919568\"" Jan 13 21:19:56.701891 containerd[1541]: time="2025-01-13T21:19:56.701874562Z" level=info msg="StartContainer for \"40b704fc961d7f935c36f4de92f99dd0650fbad7fef93d474848232a21919568\"" Jan 13 21:19:56.726579 systemd[1]: Started cri-containerd-40b704fc961d7f935c36f4de92f99dd0650fbad7fef93d474848232a21919568.scope - libcontainer container 40b704fc961d7f935c36f4de92f99dd0650fbad7fef93d474848232a21919568. Jan 13 21:19:56.754670 containerd[1541]: time="2025-01-13T21:19:56.754644661Z" level=info msg="StartContainer for \"40b704fc961d7f935c36f4de92f99dd0650fbad7fef93d474848232a21919568\" returns successfully" Jan 13 21:19:56.778631 containerd[1541]: time="2025-01-13T21:19:56.777996541Z" level=info msg="StopPodSandbox for \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\"" Jan 13 21:19:56.779660 containerd[1541]: time="2025-01-13T21:19:56.779626920Z" level=info msg="StopPodSandbox for \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\"" Jan 13 21:19:56.788670 systemd-networkd[1447]: cali4961996756a: Gained IPv6LL Jan 13 21:19:56.862430 containerd[1541]: 2025-01-13 21:19:56.830 [INFO][4587] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Jan 13 21:19:56.862430 containerd[1541]: 2025-01-13 21:19:56.830 [INFO][4587] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" iface="eth0" netns="/var/run/netns/cni-29fd95b6-b2ff-225c-57dd-cdc8e9d3adb8" Jan 13 21:19:56.862430 containerd[1541]: 2025-01-13 21:19:56.830 [INFO][4587] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" iface="eth0" netns="/var/run/netns/cni-29fd95b6-b2ff-225c-57dd-cdc8e9d3adb8" Jan 13 21:19:56.862430 containerd[1541]: 2025-01-13 21:19:56.830 [INFO][4587] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" iface="eth0" netns="/var/run/netns/cni-29fd95b6-b2ff-225c-57dd-cdc8e9d3adb8" Jan 13 21:19:56.862430 containerd[1541]: 2025-01-13 21:19:56.830 [INFO][4587] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Jan 13 21:19:56.862430 containerd[1541]: 2025-01-13 21:19:56.830 [INFO][4587] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Jan 13 21:19:56.862430 containerd[1541]: 2025-01-13 21:19:56.848 [INFO][4599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" HandleID="k8s-pod-network.022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Workload="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:19:56.862430 containerd[1541]: 2025-01-13 21:19:56.849 [INFO][4599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:19:56.862430 containerd[1541]: 2025-01-13 21:19:56.849 [INFO][4599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:19:56.862430 containerd[1541]: 2025-01-13 21:19:56.854 [WARNING][4599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" HandleID="k8s-pod-network.022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Workload="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:19:56.862430 containerd[1541]: 2025-01-13 21:19:56.854 [INFO][4599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" HandleID="k8s-pod-network.022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Workload="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:19:56.862430 containerd[1541]: 2025-01-13 21:19:56.855 [INFO][4599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:19:56.862430 containerd[1541]: 2025-01-13 21:19:56.857 [INFO][4587] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Jan 13 21:19:56.862430 containerd[1541]: time="2025-01-13T21:19:56.860411270Z" level=info msg="TearDown network for sandbox \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\" successfully" Jan 13 21:19:56.862430 containerd[1541]: time="2025-01-13T21:19:56.860437599Z" level=info msg="StopPodSandbox for \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\" returns successfully" Jan 13 21:19:56.862430 containerd[1541]: time="2025-01-13T21:19:56.861795270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8d9dc5c7-xp85p,Uid:86002476-5cad-437c-a040-0e9e7c4b4ce5,Namespace:calico-system,Attempt:1,}" Jan 13 21:19:56.864043 systemd[1]: run-netns-cni\x2d29fd95b6\x2db2ff\x2d225c\x2d57dd\x2dcdc8e9d3adb8.mount: Deactivated successfully. 
Jan 13 21:19:56.884616 containerd[1541]: 2025-01-13 21:19:56.829 [INFO][4586] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Jan 13 21:19:56.884616 containerd[1541]: 2025-01-13 21:19:56.829 [INFO][4586] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" iface="eth0" netns="/var/run/netns/cni-f2729f95-4af4-d96c-40f5-1401c8895fd9" Jan 13 21:19:56.884616 containerd[1541]: 2025-01-13 21:19:56.829 [INFO][4586] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" iface="eth0" netns="/var/run/netns/cni-f2729f95-4af4-d96c-40f5-1401c8895fd9" Jan 13 21:19:56.884616 containerd[1541]: 2025-01-13 21:19:56.829 [INFO][4586] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" iface="eth0" netns="/var/run/netns/cni-f2729f95-4af4-d96c-40f5-1401c8895fd9" Jan 13 21:19:56.884616 containerd[1541]: 2025-01-13 21:19:56.829 [INFO][4586] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Jan 13 21:19:56.884616 containerd[1541]: 2025-01-13 21:19:56.829 [INFO][4586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Jan 13 21:19:56.884616 containerd[1541]: 2025-01-13 21:19:56.859 [INFO][4598] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" HandleID="k8s-pod-network.dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Workload="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:19:56.884616 containerd[1541]: 2025-01-13 21:19:56.859 [INFO][4598] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:19:56.884616 containerd[1541]: 2025-01-13 21:19:56.859 [INFO][4598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:19:56.884616 containerd[1541]: 2025-01-13 21:19:56.875 [WARNING][4598] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" HandleID="k8s-pod-network.dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Workload="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:19:56.884616 containerd[1541]: 2025-01-13 21:19:56.875 [INFO][4598] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" HandleID="k8s-pod-network.dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Workload="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:19:56.884616 containerd[1541]: 2025-01-13 21:19:56.877 [INFO][4598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:19:56.884616 containerd[1541]: 2025-01-13 21:19:56.880 [INFO][4586] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Jan 13 21:19:56.887488 containerd[1541]: time="2025-01-13T21:19:56.884956040Z" level=info msg="TearDown network for sandbox \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\" successfully" Jan 13 21:19:56.887488 containerd[1541]: time="2025-01-13T21:19:56.884978444Z" level=info msg="StopPodSandbox for \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\" returns successfully" Jan 13 21:19:56.887488 containerd[1541]: time="2025-01-13T21:19:56.885427866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd958b664-5xkxk,Uid:001785fd-8c63-40ac-bd0e-8041100fbfa9,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:19:56.981263 systemd-networkd[1447]: cali85395c1cebe: Link UP Jan 13 21:19:56.981977 systemd-networkd[1447]: cali85395c1cebe: Gained carrier Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.904 [INFO][4614] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.915 [INFO][4614] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0 calico-kube-controllers-8d9dc5c7- calico-system 86002476-5cad-437c-a040-0e9e7c4b4ce5 779 0 2025-01-13 21:19:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8d9dc5c7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8d9dc5c7-xp85p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali85395c1cebe [] []}} ContainerID="1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" Namespace="calico-system" Pod="calico-kube-controllers-8d9dc5c7-xp85p" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-" Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.915 [INFO][4614] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" Namespace="calico-system" Pod="calico-kube-controllers-8d9dc5c7-xp85p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.946 [INFO][4636] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" HandleID="k8s-pod-network.1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" Workload="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.954 [INFO][4636] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" HandleID="k8s-pod-network.1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" Workload="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003187f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8d9dc5c7-xp85p", "timestamp":"2025-01-13 21:19:56.946814616 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.954 [INFO][4636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.954 [INFO][4636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.954 [INFO][4636] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.955 [INFO][4636] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" host="localhost" Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.958 [INFO][4636] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.960 [INFO][4636] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.961 [INFO][4636] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.963 [INFO][4636] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.963 [INFO][4636] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" host="localhost" Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.963 [INFO][4636] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45 Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.966 [INFO][4636] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" host="localhost" Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.970 [INFO][4636] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" host="localhost" Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.970 [INFO][4636] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" host="localhost" Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.970 [INFO][4636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:19:57.007738 containerd[1541]: 2025-01-13 21:19:56.970 [INFO][4636] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" HandleID="k8s-pod-network.1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" Workload="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:19:57.011474 containerd[1541]: 2025-01-13 21:19:56.974 [INFO][4614] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" Namespace="calico-system" Pod="calico-kube-controllers-8d9dc5c7-xp85p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0", GenerateName:"calico-kube-controllers-8d9dc5c7-", Namespace:"calico-system", SelfLink:"", UID:"86002476-5cad-437c-a040-0e9e7c4b4ce5", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8d9dc5c7", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8d9dc5c7-xp85p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85395c1cebe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:19:57.011474 containerd[1541]: 2025-01-13 21:19:56.974 [INFO][4614] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" Namespace="calico-system" Pod="calico-kube-controllers-8d9dc5c7-xp85p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:19:57.011474 containerd[1541]: 2025-01-13 21:19:56.974 [INFO][4614] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85395c1cebe ContainerID="1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" Namespace="calico-system" Pod="calico-kube-controllers-8d9dc5c7-xp85p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:19:57.011474 containerd[1541]: 2025-01-13 21:19:56.983 [INFO][4614] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" Namespace="calico-system" Pod="calico-kube-controllers-8d9dc5c7-xp85p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:19:57.011474 containerd[1541]: 2025-01-13 21:19:56.983 [INFO][4614] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" Namespace="calico-system" Pod="calico-kube-controllers-8d9dc5c7-xp85p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0", GenerateName:"calico-kube-controllers-8d9dc5c7-", Namespace:"calico-system", SelfLink:"", UID:"86002476-5cad-437c-a040-0e9e7c4b4ce5", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8d9dc5c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45", Pod:"calico-kube-controllers-8d9dc5c7-xp85p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85395c1cebe", MAC:"aa:5e:4c:c1:10:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:19:57.011474 containerd[1541]: 2025-01-13 21:19:57.005 [INFO][4614] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45" Namespace="calico-system" Pod="calico-kube-controllers-8d9dc5c7-xp85p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:19:57.024628 kubelet[2797]: I0113 21:19:57.024564 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dd958b664-7knlx" podStartSLOduration=21.722410179 podStartE2EDuration="24.024551036s" podCreationTimestamp="2025-01-13 21:19:33 +0000 UTC" firstStartedPulling="2025-01-13 21:19:54.378034046 +0000 UTC m=+41.713639814" lastFinishedPulling="2025-01-13 21:19:56.680174898 +0000 UTC m=+44.015780671" observedRunningTime="2025-01-13 21:19:56.996397819 +0000 UTC m=+44.332003596" watchObservedRunningTime="2025-01-13 21:19:57.024551036 +0000 UTC m=+44.360156806" Jan 13 21:19:57.038082 containerd[1541]: time="2025-01-13T21:19:57.038018302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:19:57.038161 containerd[1541]: time="2025-01-13T21:19:57.038064192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:19:57.038161 containerd[1541]: time="2025-01-13T21:19:57.038080840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:57.038161 containerd[1541]: time="2025-01-13T21:19:57.038139560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:57.059881 systemd[1]: Started cri-containerd-1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45.scope - libcontainer container 1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45. 
Jan 13 21:19:57.064031 systemd-networkd[1447]: cali0e0c03cc009: Link UP Jan 13 21:19:57.065711 systemd-networkd[1447]: cali0e0c03cc009: Gained carrier Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:56.937 [INFO][4625] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:56.948 [INFO][4625] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0 calico-apiserver-dd958b664- calico-apiserver 001785fd-8c63-40ac-bd0e-8041100fbfa9 778 0 2025-01-13 21:19:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dd958b664 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-dd958b664-5xkxk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0e0c03cc009 [] []}} ContainerID="9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-5xkxk" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--5xkxk-" Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:56.948 [INFO][4625] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-5xkxk" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.016 [INFO][4645] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" HandleID="k8s-pod-network.9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" 
Workload="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.027 [INFO][4645] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" HandleID="k8s-pod-network.9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" Workload="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-dd958b664-5xkxk", "timestamp":"2025-01-13 21:19:57.016418481 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.028 [INFO][4645] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.028 [INFO][4645] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.028 [INFO][4645] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.029 [INFO][4645] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" host="localhost" Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.032 [INFO][4645] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.037 [INFO][4645] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.038 [INFO][4645] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.040 [INFO][4645] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.040 [INFO][4645] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" host="localhost" Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.041 [INFO][4645] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467 Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.047 [INFO][4645] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" host="localhost" Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.055 [INFO][4645] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" host="localhost" Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.055 [INFO][4645] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" host="localhost" Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.055 [INFO][4645] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:19:57.080744 containerd[1541]: 2025-01-13 21:19:57.055 [INFO][4645] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" HandleID="k8s-pod-network.9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" Workload="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:19:57.081464 containerd[1541]: 2025-01-13 21:19:57.058 [INFO][4625] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-5xkxk" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0", GenerateName:"calico-apiserver-dd958b664-", Namespace:"calico-apiserver", SelfLink:"", UID:"001785fd-8c63-40ac-bd0e-8041100fbfa9", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd958b664", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-dd958b664-5xkxk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e0c03cc009", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:19:57.081464 containerd[1541]: 2025-01-13 21:19:57.058 [INFO][4625] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-5xkxk" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:19:57.081464 containerd[1541]: 2025-01-13 21:19:57.058 [INFO][4625] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e0c03cc009 ContainerID="9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-5xkxk" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:19:57.081464 containerd[1541]: 2025-01-13 21:19:57.067 [INFO][4625] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-5xkxk" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:19:57.081464 containerd[1541]: 2025-01-13 21:19:57.069 [INFO][4625] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-5xkxk" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0", GenerateName:"calico-apiserver-dd958b664-", Namespace:"calico-apiserver", SelfLink:"", UID:"001785fd-8c63-40ac-bd0e-8041100fbfa9", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd958b664", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467", Pod:"calico-apiserver-dd958b664-5xkxk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e0c03cc009", MAC:"3e:c5:98:12:13:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:19:57.081464 containerd[1541]: 2025-01-13 21:19:57.078 [INFO][4625] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467" 
Namespace="calico-apiserver" Pod="calico-apiserver-dd958b664-5xkxk" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:19:57.089804 systemd-resolved[1451]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:19:57.104026 containerd[1541]: time="2025-01-13T21:19:57.103955172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:19:57.104207 containerd[1541]: time="2025-01-13T21:19:57.104175780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:19:57.104300 containerd[1541]: time="2025-01-13T21:19:57.104277773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:57.104465 containerd[1541]: time="2025-01-13T21:19:57.104432472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:57.118472 systemd[1]: Started cri-containerd-9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467.scope - libcontainer container 9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467. 
Jan 13 21:19:57.128780 containerd[1541]: time="2025-01-13T21:19:57.128669759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8d9dc5c7-xp85p,Uid:86002476-5cad-437c-a040-0e9e7c4b4ce5,Namespace:calico-system,Attempt:1,} returns sandbox id \"1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45\"" Jan 13 21:19:57.136898 systemd-resolved[1451]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:19:57.161698 containerd[1541]: time="2025-01-13T21:19:57.161242007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd958b664-5xkxk,Uid:001785fd-8c63-40ac-bd0e-8041100fbfa9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467\"" Jan 13 21:19:57.163526 containerd[1541]: time="2025-01-13T21:19:57.163511547Z" level=info msg="CreateContainer within sandbox \"9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:19:57.169848 containerd[1541]: time="2025-01-13T21:19:57.169784006Z" level=info msg="CreateContainer within sandbox \"9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c87f1bef22d1ee31e136e7c7895c00cf352888761f08481cc126455485731905\"" Jan 13 21:19:57.170919 containerd[1541]: time="2025-01-13T21:19:57.170254734Z" level=info msg="StartContainer for \"c87f1bef22d1ee31e136e7c7895c00cf352888761f08481cc126455485731905\"" Jan 13 21:19:57.191485 systemd[1]: Started cri-containerd-c87f1bef22d1ee31e136e7c7895c00cf352888761f08481cc126455485731905.scope - libcontainer container c87f1bef22d1ee31e136e7c7895c00cf352888761f08481cc126455485731905. 
Jan 13 21:19:57.234077 containerd[1541]: time="2025-01-13T21:19:57.234057370Z" level=info msg="StartContainer for \"c87f1bef22d1ee31e136e7c7895c00cf352888761f08481cc126455485731905\" returns successfully" Jan 13 21:19:57.688682 systemd[1]: run-netns-cni\x2df2729f95\x2d4af4\x2dd96c\x2d40f5\x2d1401c8895fd9.mount: Deactivated successfully. Jan 13 21:19:57.995431 kubelet[2797]: I0113 21:19:57.995298 2797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:19:58.011448 kubelet[2797]: I0113 21:19:58.011212 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dd958b664-5xkxk" podStartSLOduration=25.011200476 podStartE2EDuration="25.011200476s" podCreationTimestamp="2025-01-13 21:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:57.986333005 +0000 UTC m=+45.321938783" watchObservedRunningTime="2025-01-13 21:19:58.011200476 +0000 UTC m=+45.346806244" Jan 13 21:19:58.047146 containerd[1541]: time="2025-01-13T21:19:58.045824371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:58.047146 containerd[1541]: time="2025-01-13T21:19:58.046187003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 21:19:58.047146 containerd[1541]: time="2025-01-13T21:19:58.046301451Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:58.048112 containerd[1541]: time="2025-01-13T21:19:58.048099128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 
13 21:19:58.048536 containerd[1541]: time="2025-01-13T21:19:58.048523352Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.368060114s" Jan 13 21:19:58.048597 containerd[1541]: time="2025-01-13T21:19:58.048588316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 21:19:58.049812 containerd[1541]: time="2025-01-13T21:19:58.049800352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 21:19:58.050503 containerd[1541]: time="2025-01-13T21:19:58.050486864Z" level=info msg="CreateContainer within sandbox \"de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:19:58.069431 containerd[1541]: time="2025-01-13T21:19:58.069383077Z" level=info msg="CreateContainer within sandbox \"de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fafcc27a0f711ddc6313d297624ea2b6bc773e316bfcb549a29cb07b630d9d44\"" Jan 13 21:19:58.069797 containerd[1541]: time="2025-01-13T21:19:58.069785474Z" level=info msg="StartContainer for \"fafcc27a0f711ddc6313d297624ea2b6bc773e316bfcb549a29cb07b630d9d44\"" Jan 13 21:19:58.125463 systemd[1]: Started cri-containerd-fafcc27a0f711ddc6313d297624ea2b6bc773e316bfcb549a29cb07b630d9d44.scope - libcontainer container fafcc27a0f711ddc6313d297624ea2b6bc773e316bfcb549a29cb07b630d9d44. 
Jan 13 21:19:58.161153 containerd[1541]: time="2025-01-13T21:19:58.161128869Z" level=info msg="StartContainer for \"fafcc27a0f711ddc6313d297624ea2b6bc773e316bfcb549a29cb07b630d9d44\" returns successfully" Jan 13 21:19:58.324503 systemd-networkd[1447]: cali0e0c03cc009: Gained IPv6LL Jan 13 21:19:58.687498 systemd[1]: run-containerd-runc-k8s.io-fafcc27a0f711ddc6313d297624ea2b6bc773e316bfcb549a29cb07b630d9d44-runc.Yo0luR.mount: Deactivated successfully. Jan 13 21:19:58.835494 systemd-networkd[1447]: cali85395c1cebe: Gained IPv6LL Jan 13 21:20:00.176428 containerd[1541]: time="2025-01-13T21:20:00.175899242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:20:00.186425 containerd[1541]: time="2025-01-13T21:20:00.186380635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 21:20:00.189833 containerd[1541]: time="2025-01-13T21:20:00.189799573Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:20:00.191143 containerd[1541]: time="2025-01-13T21:20:00.191100213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:20:00.191790 containerd[1541]: time="2025-01-13T21:20:00.191692225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.14171812s" Jan 13 
21:20:00.191790 containerd[1541]: time="2025-01-13T21:20:00.191719400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 21:20:00.192602 containerd[1541]: time="2025-01-13T21:20:00.192578550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:20:00.208409 containerd[1541]: time="2025-01-13T21:20:00.207721341Z" level=info msg="CreateContainer within sandbox \"1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 21:20:00.215709 containerd[1541]: time="2025-01-13T21:20:00.215683868Z" level=info msg="CreateContainer within sandbox \"1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"09b87e18538579305d8ea53575b403a7b935b892959a7862b7c9e725906ebc86\"" Jan 13 21:20:00.216822 containerd[1541]: time="2025-01-13T21:20:00.216596164Z" level=info msg="StartContainer for \"09b87e18538579305d8ea53575b403a7b935b892959a7862b7c9e725906ebc86\"" Jan 13 21:20:00.237467 systemd[1]: Started cri-containerd-09b87e18538579305d8ea53575b403a7b935b892959a7862b7c9e725906ebc86.scope - libcontainer container 09b87e18538579305d8ea53575b403a7b935b892959a7862b7c9e725906ebc86. 
Jan 13 21:20:00.266006 containerd[1541]: time="2025-01-13T21:20:00.265966000Z" level=info msg="StartContainer for \"09b87e18538579305d8ea53575b403a7b935b892959a7862b7c9e725906ebc86\" returns successfully" Jan 13 21:20:00.313195 kubelet[2797]: I0113 21:20:00.313172 2797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:20:00.998226 kubelet[2797]: I0113 21:20:00.997503 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8d9dc5c7-xp85p" podStartSLOduration=24.934838653 podStartE2EDuration="27.997490406s" podCreationTimestamp="2025-01-13 21:19:33 +0000 UTC" firstStartedPulling="2025-01-13 21:19:57.129695767 +0000 UTC m=+44.465301536" lastFinishedPulling="2025-01-13 21:20:00.192347515 +0000 UTC m=+47.527953289" observedRunningTime="2025-01-13 21:20:00.997164213 +0000 UTC m=+48.332769985" watchObservedRunningTime="2025-01-13 21:20:00.997490406 +0000 UTC m=+48.333096179" Jan 13 21:20:01.246405 kernel: bpftool[5024]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 21:20:01.418035 systemd-networkd[1447]: vxlan.calico: Link UP Jan 13 21:20:01.418040 systemd-networkd[1447]: vxlan.calico: Gained carrier Jan 13 21:20:02.173413 containerd[1541]: time="2025-01-13T21:20:02.173384018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:20:02.173916 containerd[1541]: time="2025-01-13T21:20:02.173869042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 21:20:02.174210 containerd[1541]: time="2025-01-13T21:20:02.174194002Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:20:02.179996 containerd[1541]: 
time="2025-01-13T21:20:02.179975004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:20:02.180600 containerd[1541]: time="2025-01-13T21:20:02.180465310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.987865367s" Jan 13 21:20:02.180600 containerd[1541]: time="2025-01-13T21:20:02.180484427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 21:20:02.182017 containerd[1541]: time="2025-01-13T21:20:02.182001384Z" level=info msg="CreateContainer within sandbox \"de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 21:20:02.194120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2535254207.mount: Deactivated successfully. 
Jan 13 21:20:02.194977 containerd[1541]: time="2025-01-13T21:20:02.194148834Z" level=info msg="CreateContainer within sandbox \"de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5f9b35ca461ce3232f7e8af932bf52e85b8f4cd2550749c39230333b731e5361\"" Jan 13 21:20:02.194977 containerd[1541]: time="2025-01-13T21:20:02.194470054Z" level=info msg="StartContainer for \"5f9b35ca461ce3232f7e8af932bf52e85b8f4cd2550749c39230333b731e5361\"" Jan 13 21:20:02.230510 systemd[1]: Started cri-containerd-5f9b35ca461ce3232f7e8af932bf52e85b8f4cd2550749c39230333b731e5361.scope - libcontainer container 5f9b35ca461ce3232f7e8af932bf52e85b8f4cd2550749c39230333b731e5361. Jan 13 21:20:02.252746 containerd[1541]: time="2025-01-13T21:20:02.252574027Z" level=info msg="StartContainer for \"5f9b35ca461ce3232f7e8af932bf52e85b8f4cd2550749c39230333b731e5361\" returns successfully" Jan 13 21:20:02.739485 systemd-networkd[1447]: vxlan.calico: Gained IPv6LL Jan 13 21:20:03.356909 kubelet[2797]: I0113 21:20:03.353717 2797 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 21:20:03.364209 kubelet[2797]: I0113 21:20:03.364155 2797 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 21:20:10.737521 kubelet[2797]: I0113 21:20:10.737478 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-459cr" podStartSLOduration=31.595202965 podStartE2EDuration="38.737464192s" podCreationTimestamp="2025-01-13 21:19:32 +0000 UTC" firstStartedPulling="2025-01-13 21:19:55.038822623 +0000 UTC m=+42.374428391" lastFinishedPulling="2025-01-13 21:20:02.181083849 +0000 UTC m=+49.516689618" observedRunningTime="2025-01-13 21:20:03.011886483 +0000 UTC 
m=+50.347492267" watchObservedRunningTime="2025-01-13 21:20:10.737464192 +0000 UTC m=+58.073069965" Jan 13 21:20:12.799962 containerd[1541]: time="2025-01-13T21:20:12.799925478Z" level=info msg="StopPodSandbox for \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\"" Jan 13 21:20:13.227532 containerd[1541]: 2025-01-13 21:20:13.119 [WARNING][5237] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0", GenerateName:"calico-apiserver-dd958b664-", Namespace:"calico-apiserver", SelfLink:"", UID:"001785fd-8c63-40ac-bd0e-8041100fbfa9", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd958b664", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467", Pod:"calico-apiserver-dd958b664-5xkxk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e0c03cc009", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:20:13.227532 containerd[1541]: 2025-01-13 21:20:13.139 [INFO][5237] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Jan 13 21:20:13.227532 containerd[1541]: 2025-01-13 21:20:13.139 [INFO][5237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" iface="eth0" netns="" Jan 13 21:20:13.227532 containerd[1541]: 2025-01-13 21:20:13.139 [INFO][5237] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Jan 13 21:20:13.227532 containerd[1541]: 2025-01-13 21:20:13.139 [INFO][5237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Jan 13 21:20:13.227532 containerd[1541]: 2025-01-13 21:20:13.206 [INFO][5243] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" HandleID="k8s-pod-network.dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Workload="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:20:13.227532 containerd[1541]: 2025-01-13 21:20:13.206 [INFO][5243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:20:13.227532 containerd[1541]: 2025-01-13 21:20:13.206 [INFO][5243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:20:13.227532 containerd[1541]: 2025-01-13 21:20:13.223 [WARNING][5243] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" HandleID="k8s-pod-network.dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Workload="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:20:13.227532 containerd[1541]: 2025-01-13 21:20:13.223 [INFO][5243] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" HandleID="k8s-pod-network.dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Workload="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:20:13.227532 containerd[1541]: 2025-01-13 21:20:13.224 [INFO][5243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:20:13.227532 containerd[1541]: 2025-01-13 21:20:13.226 [INFO][5237] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Jan 13 21:20:13.247006 containerd[1541]: time="2025-01-13T21:20:13.227552768Z" level=info msg="TearDown network for sandbox \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\" successfully" Jan 13 21:20:13.247006 containerd[1541]: time="2025-01-13T21:20:13.227568397Z" level=info msg="StopPodSandbox for \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\" returns successfully" Jan 13 21:20:13.259704 containerd[1541]: time="2025-01-13T21:20:13.259493810Z" level=info msg="RemovePodSandbox for \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\"" Jan 13 21:20:13.259704 containerd[1541]: time="2025-01-13T21:20:13.259531025Z" level=info msg="Forcibly stopping sandbox \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\"" Jan 13 21:20:13.345427 containerd[1541]: 2025-01-13 21:20:13.308 [WARNING][5261] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0", GenerateName:"calico-apiserver-dd958b664-", Namespace:"calico-apiserver", SelfLink:"", UID:"001785fd-8c63-40ac-bd0e-8041100fbfa9", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd958b664", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ae7e6e9e513598b754c1f8c423ef4b4b88866cc604fb70ac6572f61b98fd467", Pod:"calico-apiserver-dd958b664-5xkxk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e0c03cc009", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:20:13.345427 containerd[1541]: 2025-01-13 21:20:13.308 [INFO][5261] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Jan 13 21:20:13.345427 containerd[1541]: 2025-01-13 21:20:13.308 [INFO][5261] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" iface="eth0" netns="" Jan 13 21:20:13.345427 containerd[1541]: 2025-01-13 21:20:13.308 [INFO][5261] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Jan 13 21:20:13.345427 containerd[1541]: 2025-01-13 21:20:13.308 [INFO][5261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Jan 13 21:20:13.345427 containerd[1541]: 2025-01-13 21:20:13.333 [INFO][5267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" HandleID="k8s-pod-network.dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Workload="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:20:13.345427 containerd[1541]: 2025-01-13 21:20:13.333 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:20:13.345427 containerd[1541]: 2025-01-13 21:20:13.333 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:20:13.345427 containerd[1541]: 2025-01-13 21:20:13.342 [WARNING][5267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" HandleID="k8s-pod-network.dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Workload="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:20:13.345427 containerd[1541]: 2025-01-13 21:20:13.342 [INFO][5267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" HandleID="k8s-pod-network.dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Workload="localhost-k8s-calico--apiserver--dd958b664--5xkxk-eth0" Jan 13 21:20:13.345427 containerd[1541]: 2025-01-13 21:20:13.343 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:20:13.345427 containerd[1541]: 2025-01-13 21:20:13.344 [INFO][5261] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad" Jan 13 21:20:13.348612 containerd[1541]: time="2025-01-13T21:20:13.345838475Z" level=info msg="TearDown network for sandbox \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\" successfully" Jan 13 21:20:13.376686 containerd[1541]: time="2025-01-13T21:20:13.376644330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:20:13.410640 containerd[1541]: time="2025-01-13T21:20:13.410603912Z" level=info msg="RemovePodSandbox \"dd12e12effad5cc1c3e453006956cebfba8b11f9e887ab69bb8ca86ac70f08ad\" returns successfully" Jan 13 21:20:13.416895 containerd[1541]: time="2025-01-13T21:20:13.416713157Z" level=info msg="StopPodSandbox for \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\"" Jan 13 21:20:13.498234 containerd[1541]: 2025-01-13 21:20:13.449 [WARNING][5286] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c9b19419-34c4-46dc-abe6-e6b8c9ad67ed", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab", Pod:"coredns-7db6d8ff4d-c5tjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4961996756a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:20:13.498234 containerd[1541]: 2025-01-13 21:20:13.449 [INFO][5286] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Jan 13 21:20:13.498234 containerd[1541]: 2025-01-13 21:20:13.449 [INFO][5286] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" iface="eth0" netns="" Jan 13 21:20:13.498234 containerd[1541]: 2025-01-13 21:20:13.449 [INFO][5286] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Jan 13 21:20:13.498234 containerd[1541]: 2025-01-13 21:20:13.449 [INFO][5286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Jan 13 21:20:13.498234 containerd[1541]: 2025-01-13 21:20:13.478 [INFO][5292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" HandleID="k8s-pod-network.fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Workload="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 21:20:13.498234 containerd[1541]: 2025-01-13 21:20:13.478 [INFO][5292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 21:20:13.498234 containerd[1541]: 2025-01-13 21:20:13.478 [INFO][5292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:20:13.498234 containerd[1541]: 2025-01-13 21:20:13.490 [WARNING][5292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" HandleID="k8s-pod-network.fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Workload="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 21:20:13.498234 containerd[1541]: 2025-01-13 21:20:13.490 [INFO][5292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" HandleID="k8s-pod-network.fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Workload="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 21:20:13.498234 containerd[1541]: 2025-01-13 21:20:13.495 [INFO][5292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:20:13.498234 containerd[1541]: 2025-01-13 21:20:13.496 [INFO][5286] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Jan 13 21:20:13.513706 containerd[1541]: time="2025-01-13T21:20:13.499237404Z" level=info msg="TearDown network for sandbox \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\" successfully" Jan 13 21:20:13.513706 containerd[1541]: time="2025-01-13T21:20:13.499253728Z" level=info msg="StopPodSandbox for \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\" returns successfully" Jan 13 21:20:13.513706 containerd[1541]: time="2025-01-13T21:20:13.499642999Z" level=info msg="RemovePodSandbox for \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\"" Jan 13 21:20:13.513706 containerd[1541]: time="2025-01-13T21:20:13.499684302Z" level=info msg="Forcibly stopping sandbox \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\"" Jan 13 21:20:13.560164 containerd[1541]: 2025-01-13 21:20:13.531 [WARNING][5310] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c9b19419-34c4-46dc-abe6-e6b8c9ad67ed", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e12fd33b3088f5f740f18be5b35d79f3bc8255fa9db6a38f97cdb026f9f05cab", Pod:"coredns-7db6d8ff4d-c5tjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4961996756a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:20:13.560164 containerd[1541]: 2025-01-13 21:20:13.531 [INFO][5310] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Jan 13 21:20:13.560164 containerd[1541]: 2025-01-13 21:20:13.531 [INFO][5310] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" iface="eth0" netns="" Jan 13 21:20:13.560164 containerd[1541]: 2025-01-13 21:20:13.531 [INFO][5310] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Jan 13 21:20:13.560164 containerd[1541]: 2025-01-13 21:20:13.531 [INFO][5310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Jan 13 21:20:13.560164 containerd[1541]: 2025-01-13 21:20:13.551 [INFO][5316] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" HandleID="k8s-pod-network.fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Workload="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 21:20:13.560164 containerd[1541]: 2025-01-13 21:20:13.551 [INFO][5316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:20:13.560164 containerd[1541]: 2025-01-13 21:20:13.551 [INFO][5316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:20:13.560164 containerd[1541]: 2025-01-13 21:20:13.557 [WARNING][5316] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" HandleID="k8s-pod-network.fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Workload="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 21:20:13.560164 containerd[1541]: 2025-01-13 21:20:13.557 [INFO][5316] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" HandleID="k8s-pod-network.fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Workload="localhost-k8s-coredns--7db6d8ff4d--c5tjn-eth0" Jan 13 21:20:13.560164 containerd[1541]: 2025-01-13 21:20:13.558 [INFO][5316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:20:13.560164 containerd[1541]: 2025-01-13 21:20:13.559 [INFO][5310] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d" Jan 13 21:20:13.560164 containerd[1541]: time="2025-01-13T21:20:13.560084672Z" level=info msg="TearDown network for sandbox \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\" successfully" Jan 13 21:20:13.619861 containerd[1541]: time="2025-01-13T21:20:13.619824340Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:20:13.638817 containerd[1541]: time="2025-01-13T21:20:13.619880020Z" level=info msg="RemovePodSandbox \"fc014979f9a24e2990d266455ef435a70204baf52c380accafac9475a0b5f77d\" returns successfully" Jan 13 21:20:13.638817 containerd[1541]: time="2025-01-13T21:20:13.620247058Z" level=info msg="StopPodSandbox for \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\"" Jan 13 21:20:13.712897 containerd[1541]: 2025-01-13 21:20:13.671 [WARNING][5335] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0", GenerateName:"calico-apiserver-dd958b664-", Namespace:"calico-apiserver", SelfLink:"", UID:"4094d46d-3bd9-4482-9e42-9463597eb69a", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd958b664", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7", Pod:"calico-apiserver-dd958b664-7knlx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c1649e9d4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:20:13.712897 containerd[1541]: 2025-01-13 21:20:13.671 [INFO][5335] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Jan 13 21:20:13.712897 containerd[1541]: 2025-01-13 21:20:13.671 [INFO][5335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" iface="eth0" netns="" Jan 13 21:20:13.712897 containerd[1541]: 2025-01-13 21:20:13.672 [INFO][5335] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Jan 13 21:20:13.712897 containerd[1541]: 2025-01-13 21:20:13.672 [INFO][5335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Jan 13 21:20:13.712897 containerd[1541]: 2025-01-13 21:20:13.704 [INFO][5341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" HandleID="k8s-pod-network.b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Workload="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:20:13.712897 containerd[1541]: 2025-01-13 21:20:13.704 [INFO][5341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:20:13.712897 containerd[1541]: 2025-01-13 21:20:13.704 [INFO][5341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:20:13.712897 containerd[1541]: 2025-01-13 21:20:13.708 [WARNING][5341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" HandleID="k8s-pod-network.b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Workload="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:20:13.712897 containerd[1541]: 2025-01-13 21:20:13.708 [INFO][5341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" HandleID="k8s-pod-network.b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Workload="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:20:13.712897 containerd[1541]: 2025-01-13 21:20:13.709 [INFO][5341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:20:13.712897 containerd[1541]: 2025-01-13 21:20:13.711 [INFO][5335] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Jan 13 21:20:13.712897 containerd[1541]: time="2025-01-13T21:20:13.712923996Z" level=info msg="TearDown network for sandbox \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\" successfully" Jan 13 21:20:13.712897 containerd[1541]: time="2025-01-13T21:20:13.712956466Z" level=info msg="StopPodSandbox for \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\" returns successfully" Jan 13 21:20:13.723001 containerd[1541]: time="2025-01-13T21:20:13.713797169Z" level=info msg="RemovePodSandbox for \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\"" Jan 13 21:20:13.723001 containerd[1541]: time="2025-01-13T21:20:13.713813112Z" level=info msg="Forcibly stopping sandbox \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\"" Jan 13 21:20:13.785654 containerd[1541]: 2025-01-13 21:20:13.753 [WARNING][5359] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0", GenerateName:"calico-apiserver-dd958b664-", Namespace:"calico-apiserver", SelfLink:"", UID:"4094d46d-3bd9-4482-9e42-9463597eb69a", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd958b664", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9d8a617ab292a7d893afa6e034573c03e9ae1e03e357d5a484aef4401ad9db7", Pod:"calico-apiserver-dd958b664-7knlx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c1649e9d4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:20:13.785654 containerd[1541]: 2025-01-13 21:20:13.753 [INFO][5359] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Jan 13 21:20:13.785654 containerd[1541]: 2025-01-13 21:20:13.753 [INFO][5359] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" iface="eth0" netns="" Jan 13 21:20:13.785654 containerd[1541]: 2025-01-13 21:20:13.753 [INFO][5359] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Jan 13 21:20:13.785654 containerd[1541]: 2025-01-13 21:20:13.753 [INFO][5359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Jan 13 21:20:13.785654 containerd[1541]: 2025-01-13 21:20:13.775 [INFO][5366] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" HandleID="k8s-pod-network.b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Workload="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:20:13.785654 containerd[1541]: 2025-01-13 21:20:13.776 [INFO][5366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:20:13.785654 containerd[1541]: 2025-01-13 21:20:13.776 [INFO][5366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:20:13.785654 containerd[1541]: 2025-01-13 21:20:13.780 [WARNING][5366] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" HandleID="k8s-pod-network.b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Workload="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:20:13.785654 containerd[1541]: 2025-01-13 21:20:13.780 [INFO][5366] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" HandleID="k8s-pod-network.b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Workload="localhost-k8s-calico--apiserver--dd958b664--7knlx-eth0" Jan 13 21:20:13.785654 containerd[1541]: 2025-01-13 21:20:13.782 [INFO][5366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:20:13.785654 containerd[1541]: 2025-01-13 21:20:13.783 [INFO][5359] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd" Jan 13 21:20:13.786627 containerd[1541]: time="2025-01-13T21:20:13.785633214Z" level=info msg="TearDown network for sandbox \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\" successfully" Jan 13 21:20:13.789443 containerd[1541]: time="2025-01-13T21:20:13.789140788Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:20:13.789443 containerd[1541]: time="2025-01-13T21:20:13.789274590Z" level=info msg="RemovePodSandbox \"b7509b6f2250d1cc73ce44b174f5dde0e8a8f5c6e64f4cebf79d578883f8aecd\" returns successfully" Jan 13 21:20:13.789961 containerd[1541]: time="2025-01-13T21:20:13.789945169Z" level=info msg="StopPodSandbox for \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\"" Jan 13 21:20:13.848535 containerd[1541]: 2025-01-13 21:20:13.821 [WARNING][5384] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0", GenerateName:"calico-kube-controllers-8d9dc5c7-", Namespace:"calico-system", SelfLink:"", UID:"86002476-5cad-437c-a040-0e9e7c4b4ce5", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8d9dc5c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45", Pod:"calico-kube-controllers-8d9dc5c7-xp85p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85395c1cebe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:20:13.848535 containerd[1541]: 2025-01-13 21:20:13.821 [INFO][5384] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Jan 13 21:20:13.848535 containerd[1541]: 2025-01-13 21:20:13.821 [INFO][5384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" iface="eth0" netns="" Jan 13 21:20:13.848535 containerd[1541]: 2025-01-13 21:20:13.821 [INFO][5384] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Jan 13 21:20:13.848535 containerd[1541]: 2025-01-13 21:20:13.821 [INFO][5384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Jan 13 21:20:13.848535 containerd[1541]: 2025-01-13 21:20:13.840 [INFO][5390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" HandleID="k8s-pod-network.022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Workload="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:20:13.848535 containerd[1541]: 2025-01-13 21:20:13.840 [INFO][5390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:20:13.848535 containerd[1541]: 2025-01-13 21:20:13.840 [INFO][5390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:20:13.848535 containerd[1541]: 2025-01-13 21:20:13.844 [WARNING][5390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" HandleID="k8s-pod-network.022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Workload="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:20:13.848535 containerd[1541]: 2025-01-13 21:20:13.844 [INFO][5390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" HandleID="k8s-pod-network.022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Workload="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:20:13.848535 containerd[1541]: 2025-01-13 21:20:13.845 [INFO][5390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:20:13.848535 containerd[1541]: 2025-01-13 21:20:13.847 [INFO][5384] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Jan 13 21:20:13.848535 containerd[1541]: time="2025-01-13T21:20:13.848446367Z" level=info msg="TearDown network for sandbox \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\" successfully" Jan 13 21:20:13.848535 containerd[1541]: time="2025-01-13T21:20:13.848463572Z" level=info msg="StopPodSandbox for \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\" returns successfully" Jan 13 21:20:13.850568 containerd[1541]: time="2025-01-13T21:20:13.848939150Z" level=info msg="RemovePodSandbox for \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\"" Jan 13 21:20:13.850568 containerd[1541]: time="2025-01-13T21:20:13.848957302Z" level=info msg="Forcibly stopping sandbox \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\"" Jan 13 21:20:13.911426 containerd[1541]: 2025-01-13 21:20:13.882 [WARNING][5408] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0", GenerateName:"calico-kube-controllers-8d9dc5c7-", Namespace:"calico-system", SelfLink:"", UID:"86002476-5cad-437c-a040-0e9e7c4b4ce5", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8d9dc5c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1da129cca78898fea838d2a6f2280267fd4bf0ff30fcba1e9477a39a50a2ea45", Pod:"calico-kube-controllers-8d9dc5c7-xp85p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85395c1cebe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:20:13.911426 containerd[1541]: 2025-01-13 21:20:13.883 [INFO][5408] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Jan 13 21:20:13.911426 containerd[1541]: 2025-01-13 21:20:13.883 [INFO][5408] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" iface="eth0" netns="" Jan 13 21:20:13.911426 containerd[1541]: 2025-01-13 21:20:13.883 [INFO][5408] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Jan 13 21:20:13.911426 containerd[1541]: 2025-01-13 21:20:13.883 [INFO][5408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Jan 13 21:20:13.911426 containerd[1541]: 2025-01-13 21:20:13.901 [INFO][5414] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" HandleID="k8s-pod-network.022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Workload="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:20:13.911426 containerd[1541]: 2025-01-13 21:20:13.901 [INFO][5414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:20:13.911426 containerd[1541]: 2025-01-13 21:20:13.901 [INFO][5414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:20:13.911426 containerd[1541]: 2025-01-13 21:20:13.906 [WARNING][5414] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" HandleID="k8s-pod-network.022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Workload="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:20:13.911426 containerd[1541]: 2025-01-13 21:20:13.906 [INFO][5414] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" HandleID="k8s-pod-network.022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Workload="localhost-k8s-calico--kube--controllers--8d9dc5c7--xp85p-eth0" Jan 13 21:20:13.911426 containerd[1541]: 2025-01-13 21:20:13.907 [INFO][5414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:20:13.911426 containerd[1541]: 2025-01-13 21:20:13.909 [INFO][5408] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15" Jan 13 21:20:13.911426 containerd[1541]: time="2025-01-13T21:20:13.911131295Z" level=info msg="TearDown network for sandbox \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\" successfully" Jan 13 21:20:13.915843 containerd[1541]: time="2025-01-13T21:20:13.915701312Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:20:13.915843 containerd[1541]: time="2025-01-13T21:20:13.915754234Z" level=info msg="RemovePodSandbox \"022705e4f328b76d74d4289714e513ffb9806378d414d48fa36421c744a9de15\" returns successfully" Jan 13 21:20:13.916409 containerd[1541]: time="2025-01-13T21:20:13.916391797Z" level=info msg="StopPodSandbox for \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\"" Jan 13 21:20:13.970456 containerd[1541]: 2025-01-13 21:20:13.942 [WARNING][5433] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6d09daa5-3680-44fd-88d8-a2a333e97e31", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb", Pod:"coredns-7db6d8ff4d-mc7ng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63cae819164", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:20:13.970456 containerd[1541]: 2025-01-13 21:20:13.942 [INFO][5433] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Jan 13 21:20:13.970456 containerd[1541]: 2025-01-13 21:20:13.942 [INFO][5433] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" iface="eth0" netns="" Jan 13 21:20:13.970456 containerd[1541]: 2025-01-13 21:20:13.942 [INFO][5433] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Jan 13 21:20:13.970456 containerd[1541]: 2025-01-13 21:20:13.942 [INFO][5433] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Jan 13 21:20:13.970456 containerd[1541]: 2025-01-13 21:20:13.962 [INFO][5439] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" HandleID="k8s-pod-network.76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Workload="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" Jan 13 21:20:13.970456 containerd[1541]: 2025-01-13 21:20:13.962 [INFO][5439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 21:20:13.970456 containerd[1541]: 2025-01-13 21:20:13.962 [INFO][5439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:20:13.970456 containerd[1541]: 2025-01-13 21:20:13.965 [WARNING][5439] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" HandleID="k8s-pod-network.76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Workload="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" Jan 13 21:20:13.970456 containerd[1541]: 2025-01-13 21:20:13.965 [INFO][5439] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" HandleID="k8s-pod-network.76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Workload="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0" Jan 13 21:20:13.970456 containerd[1541]: 2025-01-13 21:20:13.967 [INFO][5439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:20:13.970456 containerd[1541]: 2025-01-13 21:20:13.968 [INFO][5433] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Jan 13 21:20:13.973047 containerd[1541]: time="2025-01-13T21:20:13.970560614Z" level=info msg="TearDown network for sandbox \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\" successfully" Jan 13 21:20:13.973047 containerd[1541]: time="2025-01-13T21:20:13.970578148Z" level=info msg="StopPodSandbox for \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\" returns successfully" Jan 13 21:20:13.973047 containerd[1541]: time="2025-01-13T21:20:13.970876391Z" level=info msg="RemovePodSandbox for \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\"" Jan 13 21:20:13.973047 containerd[1541]: time="2025-01-13T21:20:13.970892005Z" level=info msg="Forcibly stopping sandbox \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\"" Jan 13 21:20:14.031279 containerd[1541]: 2025-01-13 21:20:14.008 [WARNING][5457] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6d09daa5-3680-44fd-88d8-a2a333e97e31", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"768cbf86f3a50cc22c3ea70cfe3dcfbd9c1cb07c1d796b7b78c32881d45d93bb", Pod:"coredns-7db6d8ff4d-mc7ng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63cae819164", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:20:14.031279 containerd[1541]: 2025-01-13 21:20:14.008 [INFO][5457] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31"
Jan 13 21:20:14.031279 containerd[1541]: 2025-01-13 21:20:14.008 [INFO][5457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" iface="eth0" netns=""
Jan 13 21:20:14.031279 containerd[1541]: 2025-01-13 21:20:14.008 [INFO][5457] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31"
Jan 13 21:20:14.031279 containerd[1541]: 2025-01-13 21:20:14.008 [INFO][5457] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31"
Jan 13 21:20:14.031279 containerd[1541]: 2025-01-13 21:20:14.024 [INFO][5463] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" HandleID="k8s-pod-network.76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Workload="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0"
Jan 13 21:20:14.031279 containerd[1541]: 2025-01-13 21:20:14.024 [INFO][5463] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:20:14.031279 containerd[1541]: 2025-01-13 21:20:14.024 [INFO][5463] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:20:14.031279 containerd[1541]: 2025-01-13 21:20:14.028 [WARNING][5463] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" HandleID="k8s-pod-network.76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Workload="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0"
Jan 13 21:20:14.031279 containerd[1541]: 2025-01-13 21:20:14.028 [INFO][5463] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" HandleID="k8s-pod-network.76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31" Workload="localhost-k8s-coredns--7db6d8ff4d--mc7ng-eth0"
Jan 13 21:20:14.031279 containerd[1541]: 2025-01-13 21:20:14.029 [INFO][5463] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:20:14.031279 containerd[1541]: 2025-01-13 21:20:14.030 [INFO][5457] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31"
Jan 13 21:20:14.032329 containerd[1541]: time="2025-01-13T21:20:14.031821119Z" level=info msg="TearDown network for sandbox \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\" successfully"
Jan 13 21:20:14.033104 containerd[1541]: time="2025-01-13T21:20:14.033084032Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:20:14.033167 containerd[1541]: time="2025-01-13T21:20:14.033120634Z" level=info msg="RemovePodSandbox \"76eddfd312b7a750bdcf0097e5f80eb96c69f4da24fe4ea8b91b3b06f14c6a31\" returns successfully"
Jan 13 21:20:14.033576 containerd[1541]: time="2025-01-13T21:20:14.033428218Z" level=info msg="StopPodSandbox for \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\""
Jan 13 21:20:14.088829 containerd[1541]: 2025-01-13 21:20:14.063 [WARNING][5481] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--459cr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67967985-cb3a-4c06-87e4-f05e417d0670", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a", Pod:"csi-node-driver-459cr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia91cda7ee6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:20:14.088829 containerd[1541]: 2025-01-13 21:20:14.064 [INFO][5481] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8"
Jan 13 21:20:14.088829 containerd[1541]: 2025-01-13 21:20:14.064 [INFO][5481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" iface="eth0" netns=""
Jan 13 21:20:14.088829 containerd[1541]: 2025-01-13 21:20:14.064 [INFO][5481] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8"
Jan 13 21:20:14.088829 containerd[1541]: 2025-01-13 21:20:14.064 [INFO][5481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8"
Jan 13 21:20:14.088829 containerd[1541]: 2025-01-13 21:20:14.082 [INFO][5487] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" HandleID="k8s-pod-network.c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" Workload="localhost-k8s-csi--node--driver--459cr-eth0"
Jan 13 21:20:14.088829 containerd[1541]: 2025-01-13 21:20:14.082 [INFO][5487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:20:14.088829 containerd[1541]: 2025-01-13 21:20:14.082 [INFO][5487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:20:14.088829 containerd[1541]: 2025-01-13 21:20:14.086 [WARNING][5487] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" HandleID="k8s-pod-network.c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" Workload="localhost-k8s-csi--node--driver--459cr-eth0"
Jan 13 21:20:14.088829 containerd[1541]: 2025-01-13 21:20:14.086 [INFO][5487] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" HandleID="k8s-pod-network.c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" Workload="localhost-k8s-csi--node--driver--459cr-eth0"
Jan 13 21:20:14.088829 containerd[1541]: 2025-01-13 21:20:14.086 [INFO][5487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:20:14.088829 containerd[1541]: 2025-01-13 21:20:14.087 [INFO][5481] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8"
Jan 13 21:20:14.089230 containerd[1541]: time="2025-01-13T21:20:14.088854451Z" level=info msg="TearDown network for sandbox \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\" successfully"
Jan 13 21:20:14.089230 containerd[1541]: time="2025-01-13T21:20:14.088870562Z" level=info msg="StopPodSandbox for \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\" returns successfully"
Jan 13 21:20:14.092889 containerd[1541]: time="2025-01-13T21:20:14.092817301Z" level=info msg="RemovePodSandbox for \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\""
Jan 13 21:20:14.092889 containerd[1541]: time="2025-01-13T21:20:14.092839209Z" level=info msg="Forcibly stopping sandbox \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\""
Jan 13 21:20:14.141481 containerd[1541]: 2025-01-13 21:20:14.121 [WARNING][5505] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--459cr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67967985-cb3a-4c06-87e4-f05e417d0670", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 19, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de738746b7b4ecfd103e620dd6b160633519cf50d79eb59f4ea7d5d8c408461a", Pod:"csi-node-driver-459cr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia91cda7ee6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:20:14.141481 containerd[1541]: 2025-01-13 21:20:14.121 [INFO][5505] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8"
Jan 13 21:20:14.141481 containerd[1541]: 2025-01-13 21:20:14.121 [INFO][5505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" iface="eth0" netns=""
Jan 13 21:20:14.141481 containerd[1541]: 2025-01-13 21:20:14.121 [INFO][5505] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8"
Jan 13 21:20:14.141481 containerd[1541]: 2025-01-13 21:20:14.121 [INFO][5505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8"
Jan 13 21:20:14.141481 containerd[1541]: 2025-01-13 21:20:14.135 [INFO][5511] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" HandleID="k8s-pod-network.c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" Workload="localhost-k8s-csi--node--driver--459cr-eth0"
Jan 13 21:20:14.141481 containerd[1541]: 2025-01-13 21:20:14.135 [INFO][5511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:20:14.141481 containerd[1541]: 2025-01-13 21:20:14.135 [INFO][5511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:20:14.141481 containerd[1541]: 2025-01-13 21:20:14.138 [WARNING][5511] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" HandleID="k8s-pod-network.c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" Workload="localhost-k8s-csi--node--driver--459cr-eth0"
Jan 13 21:20:14.141481 containerd[1541]: 2025-01-13 21:20:14.138 [INFO][5511] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" HandleID="k8s-pod-network.c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8" Workload="localhost-k8s-csi--node--driver--459cr-eth0"
Jan 13 21:20:14.141481 containerd[1541]: 2025-01-13 21:20:14.139 [INFO][5511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:20:14.141481 containerd[1541]: 2025-01-13 21:20:14.140 [INFO][5505] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8"
Jan 13 21:20:14.141481 containerd[1541]: time="2025-01-13T21:20:14.141445687Z" level=info msg="TearDown network for sandbox \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\" successfully"
Jan 13 21:20:14.145743 containerd[1541]: time="2025-01-13T21:20:14.145724855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:20:14.145790 containerd[1541]: time="2025-01-13T21:20:14.145760831Z" level=info msg="RemovePodSandbox \"c8bbdbb7cf2dc345c66ddb2f90c83d9d7b7e87fc7d949c0ec7ee9157e8adb3f8\" returns successfully"
Jan 13 21:20:21.484310 systemd[1]: run-containerd-runc-k8s.io-09b87e18538579305d8ea53575b403a7b935b892959a7862b7c9e725906ebc86-runc.qU8U40.mount: Deactivated successfully.
Jan 13 21:20:30.283557 systemd[1]: Started sshd@7-139.178.70.104:22-139.178.68.195:44582.service - OpenSSH per-connection server daemon (139.178.68.195:44582).
Jan 13 21:20:30.436354 sshd[5573]: Accepted publickey for core from 139.178.68.195 port 44582 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:20:30.443062 sshd[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:20:30.450554 systemd-logind[1522]: New session 10 of user core.
Jan 13 21:20:30.455448 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 21:20:31.287550 sshd[5573]: pam_unix(sshd:session): session closed for user core
Jan 13 21:20:31.289604 systemd-logind[1522]: Session 10 logged out. Waiting for processes to exit.
Jan 13 21:20:31.289769 systemd[1]: sshd@7-139.178.70.104:22-139.178.68.195:44582.service: Deactivated successfully.
Jan 13 21:20:31.291661 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 21:20:31.292932 systemd-logind[1522]: Removed session 10.
Jan 13 21:20:36.295442 systemd[1]: Started sshd@8-139.178.70.104:22-139.178.68.195:48468.service - OpenSSH per-connection server daemon (139.178.68.195:48468).
Jan 13 21:20:36.650131 sshd[5595]: Accepted publickey for core from 139.178.68.195 port 48468 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:20:36.651485 sshd[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:20:36.655210 systemd-logind[1522]: New session 11 of user core.
Jan 13 21:20:36.658517 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 21:20:37.139696 sshd[5595]: pam_unix(sshd:session): session closed for user core
Jan 13 21:20:37.142022 systemd[1]: sshd@8-139.178.70.104:22-139.178.68.195:48468.service: Deactivated successfully.
Jan 13 21:20:37.143720 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 21:20:37.144719 systemd-logind[1522]: Session 11 logged out. Waiting for processes to exit.
Jan 13 21:20:37.145261 systemd-logind[1522]: Removed session 11.
Jan 13 21:20:40.169898 kubelet[2797]: I0113 21:20:40.169855 2797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:20:42.149740 systemd[1]: Started sshd@9-139.178.70.104:22-139.178.68.195:48484.service - OpenSSH per-connection server daemon (139.178.68.195:48484).
Jan 13 21:20:42.934024 sshd[5648]: Accepted publickey for core from 139.178.68.195 port 48484 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:20:42.936581 sshd[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:20:42.943626 systemd-logind[1522]: New session 12 of user core.
Jan 13 21:20:42.952483 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 21:20:43.458625 sshd[5648]: pam_unix(sshd:session): session closed for user core
Jan 13 21:20:43.474048 systemd-logind[1522]: Session 12 logged out. Waiting for processes to exit.
Jan 13 21:20:43.474406 systemd[1]: sshd@9-139.178.70.104:22-139.178.68.195:48484.service: Deactivated successfully.
Jan 13 21:20:43.475575 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 21:20:43.476400 systemd-logind[1522]: Removed session 12.
Jan 13 21:20:48.462494 systemd[1]: Started sshd@10-139.178.70.104:22-139.178.68.195:39098.service - OpenSSH per-connection server daemon (139.178.68.195:39098).
Jan 13 21:20:48.538952 sshd[5681]: Accepted publickey for core from 139.178.68.195 port 39098 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:20:48.540051 sshd[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:20:48.543402 systemd-logind[1522]: New session 13 of user core.
Jan 13 21:20:48.546610 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 21:20:48.722422 sshd[5681]: pam_unix(sshd:session): session closed for user core
Jan 13 21:20:48.729782 systemd[1]: sshd@10-139.178.70.104:22-139.178.68.195:39098.service: Deactivated successfully.
Jan 13 21:20:48.734743 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 21:20:48.737668 systemd-logind[1522]: Session 13 logged out. Waiting for processes to exit.
Jan 13 21:20:48.745815 systemd[1]: Started sshd@11-139.178.70.104:22-139.178.68.195:39100.service - OpenSSH per-connection server daemon (139.178.68.195:39100).
Jan 13 21:20:48.749900 systemd-logind[1522]: Removed session 13.
Jan 13 21:20:48.796811 sshd[5695]: Accepted publickey for core from 139.178.68.195 port 39100 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:20:48.797633 sshd[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:20:48.801567 systemd-logind[1522]: New session 14 of user core.
Jan 13 21:20:48.806556 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 21:20:48.989403 sshd[5695]: pam_unix(sshd:session): session closed for user core
Jan 13 21:20:48.995098 systemd[1]: sshd@11-139.178.70.104:22-139.178.68.195:39100.service: Deactivated successfully.
Jan 13 21:20:48.996570 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 21:20:48.997425 systemd-logind[1522]: Session 14 logged out. Waiting for processes to exit.
Jan 13 21:20:49.001736 systemd[1]: Started sshd@12-139.178.70.104:22-139.178.68.195:39104.service - OpenSSH per-connection server daemon (139.178.68.195:39104).
Jan 13 21:20:49.002416 systemd-logind[1522]: Removed session 14.
Jan 13 21:20:49.059233 sshd[5706]: Accepted publickey for core from 139.178.68.195 port 39104 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:20:49.060270 sshd[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:20:49.064787 systemd-logind[1522]: New session 15 of user core.
Jan 13 21:20:49.070482 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 21:20:49.218106 sshd[5706]: pam_unix(sshd:session): session closed for user core
Jan 13 21:20:49.220260 systemd-logind[1522]: Session 15 logged out. Waiting for processes to exit.
Jan 13 21:20:49.223314 systemd[1]: sshd@12-139.178.70.104:22-139.178.68.195:39104.service: Deactivated successfully.
Jan 13 21:20:49.226028 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 21:20:49.226875 systemd-logind[1522]: Removed session 15.
Jan 13 21:20:54.228082 systemd[1]: Started sshd@13-139.178.70.104:22-139.178.68.195:39112.service - OpenSSH per-connection server daemon (139.178.68.195:39112).
Jan 13 21:20:54.295080 sshd[5723]: Accepted publickey for core from 139.178.68.195 port 39112 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:20:54.295999 sshd[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:20:54.298727 systemd-logind[1522]: New session 16 of user core.
Jan 13 21:20:54.302460 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 21:20:54.459443 sshd[5723]: pam_unix(sshd:session): session closed for user core
Jan 13 21:20:54.467449 systemd[1]: sshd@13-139.178.70.104:22-139.178.68.195:39112.service: Deactivated successfully.
Jan 13 21:20:54.469194 systemd-logind[1522]: Session 16 logged out. Waiting for processes to exit.
Jan 13 21:20:54.469222 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 21:20:54.470075 systemd-logind[1522]: Removed session 16.
Jan 13 21:20:59.464472 systemd[1]: Started sshd@14-139.178.70.104:22-139.178.68.195:39924.service - OpenSSH per-connection server daemon (139.178.68.195:39924).
Jan 13 21:20:59.520988 sshd[5737]: Accepted publickey for core from 139.178.68.195 port 39924 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:20:59.522083 sshd[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:20:59.524952 systemd-logind[1522]: New session 17 of user core.
Jan 13 21:20:59.532656 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 21:21:00.034529 sshd[5737]: pam_unix(sshd:session): session closed for user core
Jan 13 21:21:00.036540 systemd-logind[1522]: Session 17 logged out. Waiting for processes to exit.
Jan 13 21:21:00.036716 systemd[1]: sshd@14-139.178.70.104:22-139.178.68.195:39924.service: Deactivated successfully.
Jan 13 21:21:00.037964 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 21:21:00.039313 systemd-logind[1522]: Removed session 17.
Jan 13 21:21:05.045073 systemd[1]: Started sshd@15-139.178.70.104:22-139.178.68.195:48890.service - OpenSSH per-connection server daemon (139.178.68.195:48890).
Jan 13 21:21:05.124458 sshd[5749]: Accepted publickey for core from 139.178.68.195 port 48890 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:21:05.125408 sshd[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:21:05.128254 systemd-logind[1522]: New session 18 of user core.
Jan 13 21:21:05.132502 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 21:21:05.321688 sshd[5749]: pam_unix(sshd:session): session closed for user core
Jan 13 21:21:05.324531 systemd-logind[1522]: Session 18 logged out. Waiting for processes to exit.
Jan 13 21:21:05.324932 systemd[1]: sshd@15-139.178.70.104:22-139.178.68.195:48890.service: Deactivated successfully.
Jan 13 21:21:05.326976 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 21:21:05.327878 systemd-logind[1522]: Removed session 18.
Jan 13 21:21:10.329235 systemd[1]: Started sshd@16-139.178.70.104:22-139.178.68.195:48906.service - OpenSSH per-connection server daemon (139.178.68.195:48906).
Jan 13 21:21:10.355281 sshd[5764]: Accepted publickey for core from 139.178.68.195 port 48906 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:21:10.356198 sshd[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:21:10.359137 systemd-logind[1522]: New session 19 of user core.
Jan 13 21:21:10.366465 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 21:21:10.456184 sshd[5764]: pam_unix(sshd:session): session closed for user core
Jan 13 21:21:10.460985 systemd[1]: sshd@16-139.178.70.104:22-139.178.68.195:48906.service: Deactivated successfully.
Jan 13 21:21:10.462000 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 21:21:10.462814 systemd-logind[1522]: Session 19 logged out. Waiting for processes to exit.
Jan 13 21:21:10.466564 systemd[1]: Started sshd@17-139.178.70.104:22-139.178.68.195:48920.service - OpenSSH per-connection server daemon (139.178.68.195:48920).
Jan 13 21:21:10.467489 systemd-logind[1522]: Removed session 19.
Jan 13 21:21:10.488445 sshd[5777]: Accepted publickey for core from 139.178.68.195 port 48920 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:21:10.489430 sshd[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:21:10.492559 systemd-logind[1522]: New session 20 of user core.
Jan 13 21:21:10.496446 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 21:21:10.662279 systemd[1]: run-containerd-runc-k8s.io-a7a9460929ee6341af09e3c7a604c4a9fb596de75a26f96a1ecae5c74bd57f24-runc.KHTH7t.mount: Deactivated successfully.
Jan 13 21:21:10.942079 sshd[5777]: pam_unix(sshd:session): session closed for user core
Jan 13 21:21:10.945043 systemd[1]: sshd@17-139.178.70.104:22-139.178.68.195:48920.service: Deactivated successfully.
Jan 13 21:21:10.946043 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 21:21:10.947129 systemd-logind[1522]: Session 20 logged out. Waiting for processes to exit.
Jan 13 21:21:10.950078 systemd[1]: Started sshd@18-139.178.70.104:22-139.178.68.195:48932.service - OpenSSH per-connection server daemon (139.178.68.195:48932).
Jan 13 21:21:10.952445 systemd-logind[1522]: Removed session 20.
Jan 13 21:21:11.109422 sshd[5810]: Accepted publickey for core from 139.178.68.195 port 48932 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:21:11.110643 sshd[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:21:11.117011 systemd-logind[1522]: New session 21 of user core.
Jan 13 21:21:11.126567 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 21:21:12.637299 sshd[5810]: pam_unix(sshd:session): session closed for user core
Jan 13 21:21:12.646222 systemd[1]: Started sshd@19-139.178.70.104:22-139.178.68.195:48938.service - OpenSSH per-connection server daemon (139.178.68.195:48938).
Jan 13 21:21:12.653674 systemd[1]: sshd@18-139.178.70.104:22-139.178.68.195:48932.service: Deactivated successfully.
Jan 13 21:21:12.654768 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 21:21:12.658821 systemd-logind[1522]: Session 21 logged out. Waiting for processes to exit.
Jan 13 21:21:12.660096 systemd-logind[1522]: Removed session 21.
Jan 13 21:21:12.729424 sshd[5825]: Accepted publickey for core from 139.178.68.195 port 48938 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:21:12.730520 sshd[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:21:12.733887 systemd-logind[1522]: New session 22 of user core.
Jan 13 21:21:12.738511 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 21:21:13.402351 sshd[5825]: pam_unix(sshd:session): session closed for user core
Jan 13 21:21:13.407983 systemd[1]: sshd@19-139.178.70.104:22-139.178.68.195:48938.service: Deactivated successfully.
Jan 13 21:21:13.409217 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 21:21:13.410288 systemd-logind[1522]: Session 22 logged out. Waiting for processes to exit.
Jan 13 21:21:13.411242 systemd[1]: Started sshd@20-139.178.70.104:22-139.178.68.195:48946.service - OpenSSH per-connection server daemon (139.178.68.195:48946).
Jan 13 21:21:13.412089 systemd-logind[1522]: Removed session 22.
Jan 13 21:21:13.524028 sshd[5843]: Accepted publickey for core from 139.178.68.195 port 48946 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:21:13.525146 sshd[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:21:13.527712 systemd-logind[1522]: New session 23 of user core.
Jan 13 21:21:13.535501 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 21:21:13.663204 sshd[5843]: pam_unix(sshd:session): session closed for user core
Jan 13 21:21:13.666222 systemd[1]: sshd@20-139.178.70.104:22-139.178.68.195:48946.service: Deactivated successfully.
Jan 13 21:21:13.667308 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 21:21:13.669066 systemd-logind[1522]: Session 23 logged out. Waiting for processes to exit.
Jan 13 21:21:13.670100 systemd-logind[1522]: Removed session 23.
Jan 13 21:21:18.394965 systemd[1]: run-containerd-runc-k8s.io-09b87e18538579305d8ea53575b403a7b935b892959a7862b7c9e725906ebc86-runc.wwAbee.mount: Deactivated successfully.
Jan 13 21:21:18.677674 systemd[1]: Started sshd@21-139.178.70.104:22-139.178.68.195:54292.service - OpenSSH per-connection server daemon (139.178.68.195:54292).
Jan 13 21:21:18.739582 sshd[5878]: Accepted publickey for core from 139.178.68.195 port 54292 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:21:18.741320 sshd[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:21:18.745423 systemd-logind[1522]: New session 24 of user core.
Jan 13 21:21:18.751593 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 21:21:19.295612 sshd[5878]: pam_unix(sshd:session): session closed for user core
Jan 13 21:21:19.298575 systemd-logind[1522]: Session 24 logged out. Waiting for processes to exit.
Jan 13 21:21:19.300132 systemd[1]: sshd@21-139.178.70.104:22-139.178.68.195:54292.service: Deactivated successfully.
Jan 13 21:21:19.302475 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 21:21:19.303346 systemd-logind[1522]: Removed session 24.
Jan 13 21:21:24.302829 systemd[1]: Started sshd@22-139.178.70.104:22-139.178.68.195:54308.service - OpenSSH per-connection server daemon (139.178.68.195:54308).
Jan 13 21:21:24.353381 sshd[5917]: Accepted publickey for core from 139.178.68.195 port 54308 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:21:24.354318 sshd[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:21:24.357247 systemd-logind[1522]: New session 25 of user core.
Jan 13 21:21:24.362451 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 21:21:24.470479 sshd[5917]: pam_unix(sshd:session): session closed for user core
Jan 13 21:21:24.472394 systemd[1]: sshd@22-139.178.70.104:22-139.178.68.195:54308.service: Deactivated successfully.
Jan 13 21:21:24.473407 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 21:21:24.473802 systemd-logind[1522]: Session 25 logged out. Waiting for processes to exit.
Jan 13 21:21:24.474312 systemd-logind[1522]: Removed session 25.
Jan 13 21:21:29.483019 systemd[1]: Started sshd@23-139.178.70.104:22-139.178.68.195:43654.service - OpenSSH per-connection server daemon (139.178.68.195:43654).
Jan 13 21:21:29.528619 sshd[5932]: Accepted publickey for core from 139.178.68.195 port 43654 ssh2: RSA SHA256:GSApeBzQxe9eonwpRAj9hDh6Dwail2ty9pGpZ6fo/KQ
Jan 13 21:21:29.529811 sshd[5932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:21:29.532497 systemd-logind[1522]: New session 26 of user core.
Jan 13 21:21:29.537472 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 13 21:21:29.664499 sshd[5932]: pam_unix(sshd:session): session closed for user core
Jan 13 21:21:29.666564 systemd-logind[1522]: Session 26 logged out. Waiting for processes to exit.
Jan 13 21:21:29.666722 systemd[1]: sshd@23-139.178.70.104:22-139.178.68.195:43654.service: Deactivated successfully.
Jan 13 21:21:29.668479 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 21:21:29.670210 systemd-logind[1522]: Removed session 26.