Jan 17 12:06:50.728686 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:06:50.728703 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:06:50.728710 kernel: Disabled fast string operations
Jan 17 12:06:50.728714 kernel: BIOS-provided physical RAM map:
Jan 17 12:06:50.728718 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jan 17 12:06:50.728722 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jan 17 12:06:50.728728 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jan 17 12:06:50.728732 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jan 17 12:06:50.728737 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jan 17 12:06:50.728741 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jan 17 12:06:50.728745 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jan 17 12:06:50.728749 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jan 17 12:06:50.728753 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jan 17 12:06:50.728757 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 17 12:06:50.728764 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jan 17 12:06:50.728769 kernel: NX (Execute Disable) protection: active
Jan 17 12:06:50.728774 kernel: APIC: Static calls initialized
Jan 17 12:06:50.728778 kernel: SMBIOS 2.7 present.
Jan 17 12:06:50.728784 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Jan 17 12:06:50.728788 kernel: vmware: hypercall mode: 0x00
Jan 17 12:06:50.728793 kernel: Hypervisor detected: VMware
Jan 17 12:06:50.728798 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Jan 17 12:06:50.728804 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Jan 17 12:06:50.728809 kernel: vmware: using clock offset of 3861732559 ns
Jan 17 12:06:50.728813 kernel: tsc: Detected 3408.000 MHz processor
Jan 17 12:06:50.728819 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:06:50.728824 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:06:50.728829 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Jan 17 12:06:50.728834 kernel: total RAM covered: 3072M
Jan 17 12:06:50.728838 kernel: Found optimal setting for mtrr clean up
Jan 17 12:06:50.728844 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Jan 17 12:06:50.728850 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Jan 17 12:06:50.728854 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:06:50.728859 kernel: Using GB pages for direct mapping
Jan 17 12:06:50.728864 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:06:50.728869 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Jan 17 12:06:50.728874 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Jan 17 12:06:50.728879 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Jan 17 12:06:50.728884 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Jan 17 12:06:50.728889 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jan 17 12:06:50.728896 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jan 17 12:06:50.728901 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Jan 17 12:06:50.728906 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Jan 17 12:06:50.728912 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Jan 17 12:06:50.728917 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Jan 17 12:06:50.728923 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Jan 17 12:06:50.728928 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Jan 17 12:06:50.728933 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Jan 17 12:06:50.728938 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Jan 17 12:06:50.728943 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jan 17 12:06:50.728948 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jan 17 12:06:50.728953 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Jan 17 12:06:50.728958 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Jan 17 12:06:50.728964 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Jan 17 12:06:50.728969 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Jan 17 12:06:50.728975 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Jan 17 12:06:50.728980 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Jan 17 12:06:50.728985 kernel: system APIC only can use physical flat
Jan 17 12:06:50.728990 kernel: APIC: Switched APIC routing to: physical flat
Jan 17 12:06:50.728995 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 12:06:50.729000 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 17 12:06:50.729005 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 17 12:06:50.729010 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 17 12:06:50.729015 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 17 12:06:50.729021 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 17 12:06:50.729026 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 17 12:06:50.729031 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 17 12:06:50.729037 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Jan 17 12:06:50.729041 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Jan 17 12:06:50.729046 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Jan 17 12:06:50.729052 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Jan 17 12:06:50.729056 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Jan 17 12:06:50.729061 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Jan 17 12:06:50.729066 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Jan 17 12:06:50.729071 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Jan 17 12:06:50.729077 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Jan 17 12:06:50.729082 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Jan 17 12:06:50.729087 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Jan 17 12:06:50.729092 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Jan 17 12:06:50.729097 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Jan 17 12:06:50.729102 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Jan 17 12:06:50.729107 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Jan 17 12:06:50.729112 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Jan 17 12:06:50.729117 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Jan 17 12:06:50.729122 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Jan 17 12:06:50.729128 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Jan 17 12:06:50.729133 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Jan 17 12:06:50.729138 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Jan 17 12:06:50.729143 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Jan 17 12:06:50.729148 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Jan 17 12:06:50.729153 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Jan 17 12:06:50.729158 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Jan 17 12:06:50.729163 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Jan 17 12:06:50.729168 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Jan 17 12:06:50.729173 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Jan 17 12:06:50.729179 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Jan 17 12:06:50.729184 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Jan 17 12:06:50.729189 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Jan 17 12:06:50.729194 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Jan 17 12:06:50.729199 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Jan 17 12:06:50.729204 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Jan 17 12:06:50.729209 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Jan 17 12:06:50.729214 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Jan 17 12:06:50.729219 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Jan 17 12:06:50.729224 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Jan 17 12:06:50.729230 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Jan 17 12:06:50.729236 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Jan 17 12:06:50.729240 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Jan 17 12:06:50.729245 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Jan 17 12:06:50.729251 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Jan 17 12:06:50.729256 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Jan 17 12:06:50.729260 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Jan 17 12:06:50.729265 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Jan 17 12:06:50.729271 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Jan 17 12:06:50.729275 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Jan 17 12:06:50.729282 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Jan 17 12:06:50.729287 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Jan 17 12:06:50.729292 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Jan 17 12:06:50.729301 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Jan 17 12:06:50.729307 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Jan 17 12:06:50.729313 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Jan 17 12:06:50.729318 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Jan 17 12:06:50.729323 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Jan 17 12:06:50.729329 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Jan 17 12:06:50.729335 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Jan 17 12:06:50.729341 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Jan 17 12:06:50.729346 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Jan 17 12:06:50.729351 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Jan 17 12:06:50.729357 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Jan 17 12:06:50.729369 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Jan 17 12:06:50.729376 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Jan 17 12:06:50.729382 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Jan 17 12:06:50.729387 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Jan 17 12:06:50.729392 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Jan 17 12:06:50.729400 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Jan 17 12:06:50.729405 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Jan 17 12:06:50.729410 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Jan 17 12:06:50.729416 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Jan 17 12:06:50.729421 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Jan 17 12:06:50.729426 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Jan 17 12:06:50.729432 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Jan 17 12:06:50.729437 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Jan 17 12:06:50.729442 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Jan 17 12:06:50.729448 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Jan 17 12:06:50.729454 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Jan 17 12:06:50.729460 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Jan 17 12:06:50.729465 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Jan 17 12:06:50.729470 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Jan 17 12:06:50.729476 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Jan 17 12:06:50.729481 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Jan 17 12:06:50.729486 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Jan 17 12:06:50.729492 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Jan 17 12:06:50.729497 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Jan 17 12:06:50.729502 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Jan 17 12:06:50.729507 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Jan 17 12:06:50.729514 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Jan 17 12:06:50.729519 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Jan 17 12:06:50.729525 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Jan 17 12:06:50.729530 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Jan 17 12:06:50.729535 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Jan 17 12:06:50.729541 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Jan 17 12:06:50.729546 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Jan 17 12:06:50.729552 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Jan 17 12:06:50.729557 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Jan 17 12:06:50.729562 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Jan 17 12:06:50.729569 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Jan 17 12:06:50.729574 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Jan 17 12:06:50.729579 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Jan 17 12:06:50.729584 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Jan 17 12:06:50.729590 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Jan 17 12:06:50.729595 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Jan 17 12:06:50.729601 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Jan 17 12:06:50.729606 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Jan 17 12:06:50.729611 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Jan 17 12:06:50.729617 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Jan 17 12:06:50.729623 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Jan 17 12:06:50.729629 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Jan 17 12:06:50.729634 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Jan 17 12:06:50.729640 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Jan 17 12:06:50.729645 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Jan 17 12:06:50.729651 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Jan 17 12:06:50.729656 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Jan 17 12:06:50.729661 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Jan 17 12:06:50.729667 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Jan 17 12:06:50.729672 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Jan 17 12:06:50.729679 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Jan 17 12:06:50.729684 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Jan 17 12:06:50.729689 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 12:06:50.729695 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 17 12:06:50.729701 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jan 17 12:06:50.729706 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Jan 17 12:06:50.729712 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Jan 17 12:06:50.729717 kernel: Zone ranges:
Jan 17 12:06:50.729723 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:06:50.729728 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Jan 17 12:06:50.729735 kernel: Normal empty
Jan 17 12:06:50.729741 kernel: Movable zone start for each node
Jan 17 12:06:50.729746 kernel: Early memory node ranges
Jan 17 12:06:50.729751 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Jan 17 12:06:50.729757 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Jan 17 12:06:50.729762 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Jan 17 12:06:50.729768 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jan 17 12:06:50.729773 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:06:50.729779 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jan 17 12:06:50.729785 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jan 17 12:06:50.729791 kernel: ACPI: PM-Timer IO Port: 0x1008
Jan 17 12:06:50.729796 kernel: system APIC only can use physical flat
Jan 17 12:06:50.729802 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jan 17 12:06:50.729807 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 17 12:06:50.729813 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 17 12:06:50.729818 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 17 12:06:50.729824 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 17 12:06:50.729829 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 17 12:06:50.729834 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 17 12:06:50.729841 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 17 12:06:50.729846 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 17 12:06:50.729851 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 17 12:06:50.729857 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 17 12:06:50.729862 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 17 12:06:50.729868 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 17 12:06:50.729873 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 17 12:06:50.729878 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 17 12:06:50.729884 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 17 12:06:50.729890 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 17 12:06:50.729896 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Jan 17 12:06:50.729901 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Jan 17 12:06:50.729907 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Jan 17 12:06:50.729912 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Jan 17 12:06:50.729918 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Jan 17 12:06:50.729923 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Jan 17 12:06:50.729928 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Jan 17 12:06:50.729934 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Jan 17 12:06:50.729939 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Jan 17 12:06:50.729946 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Jan 17 12:06:50.729951 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Jan 17 12:06:50.729956 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Jan 17 12:06:50.729962 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Jan 17 12:06:50.729967 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Jan 17 12:06:50.729973 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Jan 17 12:06:50.729978 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Jan 17 12:06:50.729983 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Jan 17 12:06:50.729989 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Jan 17 12:06:50.729994 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Jan 17 12:06:50.730001 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Jan 17 12:06:50.730006 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Jan 17 12:06:50.730011 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Jan 17 12:06:50.730017 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Jan 17 12:06:50.730022 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Jan 17 12:06:50.730027 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Jan 17 12:06:50.730033 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Jan 17 12:06:50.730038 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Jan 17 12:06:50.730043 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Jan 17 12:06:50.730049 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Jan 17 12:06:50.730055 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Jan 17 12:06:50.730060 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Jan 17 12:06:50.730066 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Jan 17 12:06:50.730071 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Jan 17 12:06:50.730077 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Jan 17 12:06:50.730082 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Jan 17 12:06:50.730088 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Jan 17 12:06:50.730093 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Jan 17 12:06:50.730098 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Jan 17 12:06:50.730105 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Jan 17 12:06:50.730111 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Jan 17 12:06:50.730116 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Jan 17 12:06:50.730121 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Jan 17 12:06:50.730127 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Jan 17 12:06:50.730132 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Jan 17 12:06:50.730137 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Jan 17 12:06:50.730143 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Jan 17 12:06:50.730148 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Jan 17 12:06:50.730153 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Jan 17 12:06:50.730160 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Jan 17 12:06:50.730165 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Jan 17 12:06:50.730171 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Jan 17 12:06:50.730176 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Jan 17 12:06:50.730182 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Jan 17 12:06:50.730187 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Jan 17 12:06:50.730192 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Jan 17 12:06:50.730198 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Jan 17 12:06:50.730203 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Jan 17 12:06:50.730209 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Jan 17 12:06:50.730215 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Jan 17 12:06:50.730220 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Jan 17 12:06:50.730225 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Jan 17 12:06:50.730231 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Jan 17 12:06:50.730236 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Jan 17 12:06:50.730241 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Jan 17 12:06:50.730247 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Jan 17 12:06:50.730252 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Jan 17 12:06:50.730257 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Jan 17 12:06:50.730264 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Jan 17 12:06:50.730269 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Jan 17 12:06:50.730275 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Jan 17 12:06:50.730280 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Jan 17 12:06:50.730286 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Jan 17 12:06:50.730291 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Jan 17 12:06:50.730296 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Jan 17 12:06:50.730302 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Jan 17 12:06:50.730307 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Jan 17 12:06:50.730313 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Jan 17 12:06:50.730319 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Jan 17 12:06:50.730325 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Jan 17 12:06:50.730330 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Jan 17 12:06:50.730336 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Jan 17 12:06:50.730341 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Jan 17 12:06:50.730347 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Jan 17 12:06:50.730352 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Jan 17 12:06:50.730358 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Jan 17 12:06:50.730371 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Jan 17 12:06:50.730379 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Jan 17 12:06:50.730384 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Jan 17 12:06:50.730390 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Jan 17 12:06:50.730395 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Jan 17 12:06:50.730401 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Jan 17 12:06:50.730406 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Jan 17 12:06:50.730411 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Jan 17 12:06:50.730417 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Jan 17 12:06:50.730422 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Jan 17 12:06:50.730428 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Jan 17 12:06:50.730434 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Jan 17 12:06:50.730440 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Jan 17 12:06:50.730445 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Jan 17 12:06:50.730451 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Jan 17 12:06:50.730456 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Jan 17 12:06:50.730461 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Jan 17 12:06:50.730467 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Jan 17 12:06:50.730472 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Jan 17 12:06:50.730477 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Jan 17 12:06:50.730483 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Jan 17 12:06:50.730490 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Jan 17 12:06:50.730495 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Jan 17 12:06:50.730500 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Jan 17 12:06:50.730506 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Jan 17 12:06:50.730517 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Jan 17 12:06:50.730523 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:06:50.730529 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jan 17 12:06:50.730534 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:06:50.730540 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Jan 17 12:06:50.730547 kernel: TSC deadline timer available
Jan 17 12:06:50.730553 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Jan 17 12:06:50.730558 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Jan 17 12:06:50.730564 kernel: Booting paravirtualized kernel on VMware hypervisor
Jan 17 12:06:50.730569 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:06:50.730575 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Jan 17 12:06:50.730580 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 17 12:06:50.730586 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 17 12:06:50.730591 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Jan 17 12:06:50.730598 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Jan 17 12:06:50.730603 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Jan 17 12:06:50.730609 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Jan 17 12:06:50.730614 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Jan 17 12:06:50.730629 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Jan 17 12:06:50.730635 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Jan 17 12:06:50.730641 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Jan 17 12:06:50.730648 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Jan 17 12:06:50.730654 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Jan 17 12:06:50.730661 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Jan 17 12:06:50.730666 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Jan 17 12:06:50.730672 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Jan 17 12:06:50.730678 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Jan 17 12:06:50.730684 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Jan 17 12:06:50.730689 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Jan 17 12:06:50.730696 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:06:50.730702 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:06:50.730708 kernel: random: crng init done
Jan 17 12:06:50.730714 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jan 17 12:06:50.730720 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Jan 17 12:06:50.730726 kernel: printk: log_buf_len min size: 262144 bytes
Jan 17 12:06:50.730732 kernel: printk: log_buf_len: 1048576 bytes
Jan 17 12:06:50.730737 kernel: printk: early log buf free: 239648(91%)
Jan 17 12:06:50.730743 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:06:50.730749 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 12:06:50.730755 kernel: Fallback order for Node 0: 0
Jan 17 12:06:50.730763 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Jan 17 12:06:50.730768 kernel: Policy zone: DMA32
Jan 17 12:06:50.730774 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:06:50.730780 kernel: Memory: 1936348K/2096628K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 160020K reserved, 0K cma-reserved)
Jan 17 12:06:50.730787 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Jan 17 12:06:50.730794 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:06:50.730800 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:06:50.730806 kernel: Dynamic Preempt: voluntary
Jan 17 12:06:50.730812 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:06:50.730818 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:06:50.730824 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Jan 17 12:06:50.730830 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:06:50.730836 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:06:50.730841 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:06:50.730847 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:06:50.730854 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jan 17 12:06:50.730860 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jan 17 12:06:50.730866 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Jan 17 12:06:50.730872 kernel: Console: colour VGA+ 80x25
Jan 17 12:06:50.730878 kernel: printk: console [tty0] enabled
Jan 17 12:06:50.730883 kernel: printk: console [ttyS0] enabled
Jan 17 12:06:50.730889 kernel: ACPI: Core revision 20230628
Jan 17 12:06:50.730895 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jan 17 12:06:50.730901 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:06:50.730908 kernel: x2apic enabled
Jan 17 12:06:50.730914 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:06:50.730920 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 12:06:50.730927 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jan 17 12:06:50.730933 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jan 17 12:06:50.730938 kernel: Disabled fast string operations
Jan 17 12:06:50.730944 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 17 12:06:50.730950 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 17 12:06:50.730956 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:06:50.730964 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 17 12:06:50.730969 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 17 12:06:50.730975 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 17 12:06:50.730981 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:06:50.730987 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 17 12:06:50.730993 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 17 12:06:50.730999 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 12:06:50.731005 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 12:06:50.731011 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 12:06:50.731018 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 17 12:06:50.731024 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 17 12:06:50.731030 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:06:50.731036 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:06:50.731042 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:06:50.731048 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:06:50.731055 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 17 12:06:50.731061 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:06:50.731067 kernel: pid_max: default: 131072 minimum: 1024 Jan 17 12:06:50.731075 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:06:50.731081 kernel: landlock: Up and running. Jan 17 12:06:50.731087 kernel: SELinux: Initializing. Jan 17 12:06:50.731092 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:06:50.731099 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:06:50.731104 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 17 12:06:50.731111 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 17 12:06:50.731117 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 17 12:06:50.731124 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 17 12:06:50.731131 kernel: Performance Events: Skylake events, core PMU driver. Jan 17 12:06:50.731136 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jan 17 12:06:50.731142 kernel: core: CPUID marked event: 'instructions' unavailable Jan 17 12:06:50.731148 kernel: core: CPUID marked event: 'bus cycles' unavailable Jan 17 12:06:50.731154 kernel: core: CPUID marked event: 'cache references' unavailable Jan 17 12:06:50.731160 kernel: core: CPUID marked event: 'cache misses' unavailable Jan 17 12:06:50.731165 kernel: core: CPUID marked event: 'branch instructions' unavailable Jan 17 12:06:50.731171 kernel: core: CPUID marked event: 'branch misses' unavailable Jan 17 12:06:50.731178 kernel: ... version: 1 Jan 17 12:06:50.731184 kernel: ... bit width: 48 Jan 17 12:06:50.731190 kernel: ... generic registers: 4 Jan 17 12:06:50.731196 kernel: ... value mask: 0000ffffffffffff Jan 17 12:06:50.731202 kernel: ... 
max period: 000000007fffffff Jan 17 12:06:50.731207 kernel: ... fixed-purpose events: 0 Jan 17 12:06:50.731213 kernel: ... event mask: 000000000000000f Jan 17 12:06:50.731219 kernel: signal: max sigframe size: 1776 Jan 17 12:06:50.731225 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:06:50.731232 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:06:50.731238 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 12:06:50.731244 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:06:50.731250 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:06:50.731256 kernel: .... node #0, CPUs: #1 Jan 17 12:06:50.731261 kernel: Disabled fast string operations Jan 17 12:06:50.731267 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jan 17 12:06:50.731273 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 17 12:06:50.731279 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:06:50.731284 kernel: smpboot: Max logical packages: 128 Jan 17 12:06:50.731292 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jan 17 12:06:50.731297 kernel: devtmpfs: initialized Jan 17 12:06:50.731303 kernel: x86/mm: Memory block size: 128MB Jan 17 12:06:50.731309 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jan 17 12:06:50.731315 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:06:50.731321 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jan 17 12:06:50.731327 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:06:50.731333 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:06:50.731339 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:06:50.731346 kernel: audit: type=2000 audit(1737115609.066:1): state=initialized audit_enabled=0 res=1 Jan 17 12:06:50.731352 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:06:50.731358 
kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:06:50.731374 kernel: cpuidle: using governor menu Jan 17 12:06:50.731381 kernel: Simple Boot Flag at 0x36 set to 0x80 Jan 17 12:06:50.731387 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:06:50.731393 kernel: dca service started, version 1.12.1 Jan 17 12:06:50.731399 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jan 17 12:06:50.731405 kernel: PCI: Using configuration type 1 for base access Jan 17 12:06:50.731413 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 17 12:06:50.731419 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:06:50.731426 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:06:50.731432 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:06:50.731438 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:06:50.731443 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:06:50.731449 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:06:50.731455 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:06:50.731461 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:06:50.731468 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:06:50.731474 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 17 12:06:50.731480 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:06:50.731486 kernel: ACPI: Interpreter enabled Jan 17 12:06:50.731492 kernel: ACPI: PM: (supports S0 S1 S5) Jan 17 12:06:50.731498 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:06:50.731504 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:06:50.731510 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:06:50.731516 kernel: ACPI: Enabled 4 
GPEs in block 00 to 0F Jan 17 12:06:50.731523 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jan 17 12:06:50.731609 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:06:50.731665 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jan 17 12:06:50.731714 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 17 12:06:50.731723 kernel: PCI host bridge to bus 0000:00 Jan 17 12:06:50.731776 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:06:50.731823 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jan 17 12:06:50.731867 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 12:06:50.731910 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:06:50.731953 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jan 17 12:06:50.731995 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jan 17 12:06:50.732052 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jan 17 12:06:50.732105 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jan 17 12:06:50.732161 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jan 17 12:06:50.732215 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jan 17 12:06:50.732265 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jan 17 12:06:50.732313 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 17 12:06:50.732553 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 17 12:06:50.732807 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 17 12:06:50.732863 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 17 12:06:50.732918 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jan 17 12:06:50.732969 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed 
by PIIX4 ACPI Jan 17 12:06:50.733018 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jan 17 12:06:50.733070 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jan 17 12:06:50.733120 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jan 17 12:06:50.733170 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jan 17 12:06:50.733222 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jan 17 12:06:50.733271 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jan 17 12:06:50.733319 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jan 17 12:06:50.733423 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jan 17 12:06:50.733477 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jan 17 12:06:50.735412 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:06:50.735481 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jan 17 12:06:50.735557 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.735612 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.735667 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.735718 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.735772 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.735823 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.735880 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.735930 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.735984 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736034 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736088 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736138 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot 
D3cold Jan 17 12:06:50.736195 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736245 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736301 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736351 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736444 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736496 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736552 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736602 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736654 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736703 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736754 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736806 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736857 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736907 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736960 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737009 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737061 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737110 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737166 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737215 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737270 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737320 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737383 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737436 kernel: pci 0000:00:17.1: 
PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737491 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737544 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737597 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737647 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737699 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737748 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737804 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737854 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737907 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737957 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.738009 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.738059 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.738117 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.738167 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.738220 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.738270 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.738322 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.740533 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.740603 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.740662 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.740718 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.740769 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.740822 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jan 17 
12:06:50.740872 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.740924 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.740977 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.741030 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.741079 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.741130 kernel: pci_bus 0000:01: extended config space not accessible Jan 17 12:06:50.741181 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 17 12:06:50.741234 kernel: pci_bus 0000:02: extended config space not accessible Jan 17 12:06:50.741245 kernel: acpiphp: Slot [32] registered Jan 17 12:06:50.741252 kernel: acpiphp: Slot [33] registered Jan 17 12:06:50.741258 kernel: acpiphp: Slot [34] registered Jan 17 12:06:50.741264 kernel: acpiphp: Slot [35] registered Jan 17 12:06:50.741270 kernel: acpiphp: Slot [36] registered Jan 17 12:06:50.741276 kernel: acpiphp: Slot [37] registered Jan 17 12:06:50.741282 kernel: acpiphp: Slot [38] registered Jan 17 12:06:50.741288 kernel: acpiphp: Slot [39] registered Jan 17 12:06:50.741293 kernel: acpiphp: Slot [40] registered Jan 17 12:06:50.741301 kernel: acpiphp: Slot [41] registered Jan 17 12:06:50.741306 kernel: acpiphp: Slot [42] registered Jan 17 12:06:50.741312 kernel: acpiphp: Slot [43] registered Jan 17 12:06:50.741318 kernel: acpiphp: Slot [44] registered Jan 17 12:06:50.741324 kernel: acpiphp: Slot [45] registered Jan 17 12:06:50.741329 kernel: acpiphp: Slot [46] registered Jan 17 12:06:50.741335 kernel: acpiphp: Slot [47] registered Jan 17 12:06:50.741341 kernel: acpiphp: Slot [48] registered Jan 17 12:06:50.741347 kernel: acpiphp: Slot [49] registered Jan 17 12:06:50.741353 kernel: acpiphp: Slot [50] registered Jan 17 12:06:50.741360 kernel: acpiphp: Slot [51] registered Jan 17 12:06:50.741374 kernel: acpiphp: Slot [52] registered Jan 17 12:06:50.741381 kernel: acpiphp: Slot [53] registered 
Jan 17 12:06:50.741387 kernel: acpiphp: Slot [54] registered Jan 17 12:06:50.741393 kernel: acpiphp: Slot [55] registered Jan 17 12:06:50.741399 kernel: acpiphp: Slot [56] registered Jan 17 12:06:50.741404 kernel: acpiphp: Slot [57] registered Jan 17 12:06:50.741410 kernel: acpiphp: Slot [58] registered Jan 17 12:06:50.741416 kernel: acpiphp: Slot [59] registered Jan 17 12:06:50.741424 kernel: acpiphp: Slot [60] registered Jan 17 12:06:50.741430 kernel: acpiphp: Slot [61] registered Jan 17 12:06:50.741436 kernel: acpiphp: Slot [62] registered Jan 17 12:06:50.741442 kernel: acpiphp: Slot [63] registered Jan 17 12:06:50.741513 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jan 17 12:06:50.741568 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 17 12:06:50.741617 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 17 12:06:50.741666 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 17 12:06:50.741715 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jan 17 12:06:50.741767 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jan 17 12:06:50.741815 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jan 17 12:06:50.741863 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jan 17 12:06:50.741912 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jan 17 12:06:50.741967 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jan 17 12:06:50.742018 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jan 17 12:06:50.742068 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jan 17 12:06:50.742120 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 17 12:06:50.742170 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 17 
12:06:50.742242 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Jan 17 12:06:50.742306 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 17 12:06:50.742357 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 17 12:06:50.743085 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 17 12:06:50.743142 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 17 12:06:50.743196 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 17 12:06:50.743246 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 17 12:06:50.743296 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 17 12:06:50.743347 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 17 12:06:50.743406 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 17 12:06:50.743456 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 17 12:06:50.743505 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 17 12:06:50.743555 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 17 12:06:50.743607 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 17 12:06:50.743656 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 17 12:06:50.743705 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 17 12:06:50.743754 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 17 12:06:50.747370 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 17 12:06:50.747448 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 17 12:06:50.747503 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 17 12:06:50.747557 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 17 12:06:50.747609 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 17 12:06:50.747659 kernel: pci 0000:00:15.6: bridge window [mem 
0xfbd00000-0xfbdfffff] Jan 17 12:06:50.747708 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 17 12:06:50.747758 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 17 12:06:50.747810 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 17 12:06:50.747858 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 17 12:06:50.747913 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jan 17 12:06:50.747965 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jan 17 12:06:50.748016 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jan 17 12:06:50.748065 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jan 17 12:06:50.748115 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jan 17 12:06:50.748165 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 17 12:06:50.748218 kernel: pci 0000:0b:00.0: supports D1 D2 Jan 17 12:06:50.748267 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 17 12:06:50.748316 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 17 12:06:50.749394 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 17 12:06:50.749450 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 17 12:06:50.749499 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 17 12:06:50.749548 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 17 12:06:50.749601 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 17 12:06:50.749649 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 17 12:06:50.749701 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 17 12:06:50.749751 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 17 12:06:50.749800 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 17 12:06:50.749849 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 17 12:06:50.749898 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 17 12:06:50.749949 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 17 12:06:50.750000 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 17 12:06:50.750048 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 17 12:06:50.750099 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 17 12:06:50.750147 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 17 12:06:50.750195 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 17 12:06:50.750246 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 17 12:06:50.750295 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 17 12:06:50.750343 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 17 12:06:50.751414 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 17 12:06:50.751467 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 17 12:06:50.751521 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 17 12:06:50.751573 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 17 12:06:50.751622 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 17 12:06:50.751671 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 17 12:06:50.751722 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 17 12:06:50.751770 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 17 12:06:50.751823 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 17 12:06:50.751871 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 17 12:06:50.751921 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 17 12:06:50.751970 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 17 12:06:50.752018 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 17 12:06:50.752066 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 17 12:06:50.752117 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 17 12:06:50.752168 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 17 12:06:50.752217 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 17 12:06:50.752266 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 17 12:06:50.752318 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 17 12:06:50.754438 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 17 12:06:50.754496 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 17 12:06:50.754553 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 17 12:06:50.754603 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 17 12:06:50.754655 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 17 12:06:50.754708 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 17 12:06:50.754758 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 17 12:06:50.754807 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 17 12:06:50.754857 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 17 12:06:50.754907 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 17 12:06:50.754956 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 17 12:06:50.755007 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 17 12:06:50.755059 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 17 12:06:50.755108 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 17 12:06:50.755160 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 17 12:06:50.755209 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 17 12:06:50.755257 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 17 12:06:50.755306 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 17 12:06:50.755358 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 17 12:06:50.755422 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 17 12:06:50.755475 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 17 12:06:50.755534 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 17 12:06:50.755589 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 17 12:06:50.755639 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 17 12:06:50.755687 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 17 12:06:50.755738 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 17 12:06:50.755787 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 17 12:06:50.755836 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 17 12:06:50.755889 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 17 
12:06:50.755938 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 17 12:06:50.755987 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 17 12:06:50.756038 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 17 12:06:50.756087 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 17 12:06:50.756137 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 17 12:06:50.756187 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 17 12:06:50.756236 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 17 12:06:50.756288 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 17 12:06:50.756338 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 17 12:06:50.757869 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 17 12:06:50.757999 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 17 12:06:50.758010 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jan 17 12:06:50.758017 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jan 17 12:06:50.758023 kernel: ACPI: PCI: Interrupt link LNKB disabled Jan 17 12:06:50.758029 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:06:50.758037 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jan 17 12:06:50.758043 kernel: iommu: Default domain type: Translated Jan 17 12:06:50.758049 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:06:50.758055 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:06:50.758061 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:06:50.758067 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jan 17 12:06:50.758073 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jan 17 12:06:50.758132 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jan 17 12:06:50.758182 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Jan 17 12:06:50.758234 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 12:06:50.758243 kernel: vgaarb: loaded Jan 17 12:06:50.758249 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jan 17 12:06:50.758256 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jan 17 12:06:50.758262 kernel: clocksource: Switched to clocksource tsc-early Jan 17 12:06:50.758268 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:06:50.758274 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:06:50.758280 kernel: pnp: PnP ACPI init Jan 17 12:06:50.758332 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jan 17 12:06:50.758387 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jan 17 12:06:50.758433 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jan 17 12:06:50.758481 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jan 17 12:06:50.758529 kernel: pnp 00:06: [dma 2] Jan 17 12:06:50.758579 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jan 17 12:06:50.758624 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jan 17 12:06:50.758672 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jan 17 12:06:50.758680 kernel: pnp: PnP ACPI: found 8 devices Jan 17 12:06:50.758686 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:06:50.758693 kernel: NET: Registered PF_INET protocol family Jan 17 12:06:50.758699 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:06:50.758705 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 17 12:06:50.758710 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:06:50.758716 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:06:50.758722 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 12:06:50.758730 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 17 12:06:50.758736 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:06:50.758742 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:06:50.758747 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:06:50.758753 kernel: NET: Registered PF_XDP protocol family Jan 17 12:06:50.758804 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jan 17 12:06:50.758856 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 17 12:06:50.758909 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 17 12:06:50.758960 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 17 12:06:50.759009 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 17 12:06:50.759058 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 17 12:06:50.759108 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jan 17 12:06:50.759157 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 17 12:06:50.759209 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 17 12:06:50.759285 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 17 12:06:50.761242 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 17 12:06:50.761303 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 17 12:06:50.761354 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 17 
12:06:50.761442 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 17 12:06:50.761496 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 17 12:06:50.761546 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 17 12:06:50.761596 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 17 12:06:50.761646 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 17 12:06:50.761696 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 17 12:06:50.761745 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 17 12:06:50.761796 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 17 12:06:50.761845 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 17 12:06:50.761894 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jan 17 12:06:50.761944 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jan 17 12:06:50.761993 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jan 17 12:06:50.762042 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762095 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762143 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762193 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762241 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762290 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762339 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762400 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 
17 12:06:50.762452 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762505 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762554 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762603 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762651 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762701 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762750 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762798 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762847 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762898 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762947 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762996 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763045 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763094 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763143 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763192 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763241 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763293 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763342 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763416 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763468 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763521 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763571 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763619 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763668 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763715 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763768 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763816 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763864 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763913 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763962 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764011 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764060 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764109 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764160 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764210 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764258 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764307 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764355 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764414 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764463 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764511 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764560 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764612 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764661 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Jan 17 12:06:50.764710 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764758 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764807 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764856 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764904 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764953 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765002 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765053 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765101 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765150 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765198 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765247 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765296 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765344 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765400 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765449 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765498 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765559 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765609 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765657 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765707 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765756 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765805 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765854 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765902 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765950 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.766002 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.766051 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.766099 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.766148 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.766197 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.766247 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 17 12:06:50.766298 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jan 17 12:06:50.766346 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 17 12:06:50.766403 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 17 12:06:50.766452 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 17 12:06:50.766510 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jan 17 12:06:50.766560 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 17 12:06:50.766609 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 17 12:06:50.766658 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 17 12:06:50.766708 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jan 17 12:06:50.766759 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 17 12:06:50.766808 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 17 12:06:50.766857 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 17 12:06:50.766909 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 17 
12:06:50.766959 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 17 12:06:50.767008 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 17 12:06:50.767057 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 17 12:06:50.767106 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 17 12:06:50.767154 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 17 12:06:50.767203 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 17 12:06:50.767251 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 17 12:06:50.767300 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 17 12:06:50.767351 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 17 12:06:50.767414 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 17 12:06:50.767469 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 17 12:06:50.767521 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 17 12:06:50.767570 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 17 12:06:50.767619 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 17 12:06:50.767671 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 17 12:06:50.767720 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 17 12:06:50.767769 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 17 12:06:50.767819 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 17 12:06:50.767868 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 17 12:06:50.767921 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jan 17 12:06:50.767971 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 17 12:06:50.768020 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 17 12:06:50.768070 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Jan 17 12:06:50.768121 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jan 17 12:06:50.768172 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 17 12:06:50.768221 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 17 12:06:50.768270 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 17 12:06:50.768319 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 17 12:06:50.768505 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 17 12:06:50.768563 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 17 12:06:50.768612 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 17 12:06:50.768660 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 17 12:06:50.768708 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 17 12:06:50.768759 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 17 12:06:50.768808 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 17 12:06:50.768856 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 17 12:06:50.768905 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 17 12:06:50.768953 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 17 12:06:50.769002 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 17 12:06:50.769050 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 17 12:06:50.769098 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 17 12:06:50.769147 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 17 12:06:50.769197 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 17 12:06:50.769245 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 17 12:06:50.769294 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 17 12:06:50.769343 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Jan 17 12:06:50.769399 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 17 12:06:50.769449 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 17 12:06:50.769498 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 17 12:06:50.769546 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 17 12:06:50.769594 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 17 12:06:50.769643 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 17 12:06:50.769694 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 17 12:06:50.769743 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 17 12:06:50.769791 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 17 12:06:50.769841 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 17 12:06:50.769889 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 17 12:06:50.769937 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 17 12:06:50.769986 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 17 12:06:50.770034 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 17 12:06:50.770083 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 17 12:06:50.770133 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 17 12:06:50.770182 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 17 12:06:50.770230 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 17 12:06:50.770278 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 17 12:06:50.770327 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 17 12:06:50.770410 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 17 12:06:50.770462 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 17 
12:06:50.770511 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 17 12:06:50.770559 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 17 12:06:50.770607 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 17 12:06:50.770660 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 17 12:06:50.770710 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 17 12:06:50.770759 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 17 12:06:50.770808 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 17 12:06:50.770857 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 17 12:06:50.770906 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 17 12:06:50.770954 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 17 12:06:50.771004 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 17 12:06:50.771054 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 17 12:06:50.771105 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 17 12:06:50.771153 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 17 12:06:50.771202 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 17 12:06:50.771250 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 17 12:06:50.771298 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 17 12:06:50.771347 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 17 12:06:50.771403 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 17 12:06:50.771452 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 17 12:06:50.771500 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 17 12:06:50.771553 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 17 12:06:50.771604 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Jan 17 12:06:50.771653 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 17 12:06:50.771701 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 17 12:06:50.771750 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 17 12:06:50.771799 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 17 12:06:50.771847 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 17 12:06:50.771895 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 17 12:06:50.771943 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 17 12:06:50.771991 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 17 12:06:50.772042 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 17 12:06:50.772091 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jan 17 12:06:50.772136 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 17 12:06:50.772180 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 17 12:06:50.772223 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jan 17 12:06:50.772266 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jan 17 12:06:50.772314 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jan 17 12:06:50.772359 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jan 17 12:06:50.772414 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 17 12:06:50.772459 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jan 17 12:06:50.772503 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 17 12:06:50.772548 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 17 12:06:50.772592 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jan 17 12:06:50.772636 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jan 17 12:06:50.772686 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jan 17 12:06:50.772734 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jan 17 12:06:50.772778 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jan 17 12:06:50.772827 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jan 17 12:06:50.772871 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jan 17 12:06:50.772915 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jan 17 12:06:50.772963 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jan 17 12:06:50.773008 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jan 17 12:06:50.773054 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jan 17 12:06:50.773102 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jan 17 12:06:50.773146 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jan 17 12:06:50.773194 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jan 17 12:06:50.773239 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 17 12:06:50.773287 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jan 17 12:06:50.773335 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jan 17 12:06:50.773435 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jan 17 12:06:50.773483 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jan 17 12:06:50.773538 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jan 17 12:06:50.773593 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jan 17 12:06:50.773644 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jan 17 12:06:50.773692 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jan 17 12:06:50.773738 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jan 17 12:06:50.773786 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Jan 17 12:06:50.773832 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jan 17 12:06:50.773877 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jan 17 12:06:50.773925 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jan 17 12:06:50.773972 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jan 17 12:06:50.774021 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jan 17 12:06:50.774070 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jan 17 12:06:50.774117 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 17 12:06:50.774165 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jan 17 12:06:50.774211 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 17 12:06:50.774261 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jan 17 12:06:50.774309 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jan 17 12:06:50.774359 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jan 17 12:06:50.774451 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jan 17 12:06:50.774503 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jan 17 12:06:50.774549 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 17 12:06:50.774597 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jan 17 12:06:50.774645 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jan 17 12:06:50.774690 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 17 12:06:50.774738 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jan 17 12:06:50.774784 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jan 17 12:06:50.774828 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jan 17 12:06:50.774876 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Jan 17 12:06:50.774921 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jan 17 12:06:50.774969 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jan 17 12:06:50.775017 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jan 17 12:06:50.775062 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 17 12:06:50.775109 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jan 17 12:06:50.775154 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 17 12:06:50.775202 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jan 17 12:06:50.775250 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jan 17 12:06:50.775299 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jan 17 12:06:50.775344 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jan 17 12:06:50.776442 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jan 17 12:06:50.776500 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 17 12:06:50.776557 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 17 12:06:50.776607 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jan 17 12:06:50.776652 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jan 17 12:06:50.776701 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jan 17 12:06:50.776747 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jan 17 12:06:50.776792 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jan 17 12:06:50.776842 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jan 17 12:06:50.776890 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jan 17 12:06:50.776939 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jan 17 12:06:50.776986 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Jan 17 12:06:50.777034 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jan 17 12:06:50.777081 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jan 17 12:06:50.777129 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jan 17 12:06:50.777175 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jan 17 12:06:50.777226 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jan 17 12:06:50.777272 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jan 17 12:06:50.777321 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jan 17 12:06:50.777893 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 17 12:06:50.777959 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 12:06:50.777970 kernel: PCI: CLS 32 bytes, default 64 Jan 17 12:06:50.777979 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 12:06:50.777986 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 17 12:06:50.777993 kernel: clocksource: Switched to clocksource tsc Jan 17 12:06:50.778000 kernel: Initialise system trusted keyrings Jan 17 12:06:50.778006 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 12:06:50.778012 kernel: Key type asymmetric registered Jan 17 12:06:50.778018 kernel: Asymmetric key parser 'x509' registered Jan 17 12:06:50.778025 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:06:50.778031 kernel: io scheduler mq-deadline registered Jan 17 12:06:50.778039 kernel: io scheduler kyber registered Jan 17 12:06:50.778045 kernel: io scheduler bfq registered Jan 17 12:06:50.778099 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jan 17 12:06:50.778152 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778204 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jan 17 12:06:50.778253 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778304 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jan 17 12:06:50.778354 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778421 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jan 17 12:06:50.778472 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778527 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jan 17 12:06:50.778578 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778628 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jan 17 12:06:50.778677 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778731 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jan 17 12:06:50.778780 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778830 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jan 17 12:06:50.778919 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778970 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jan 17 12:06:50.779023 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.779074 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jan 17 12:06:50.779123 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.779173 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jan 17 12:06:50.779222 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.779271 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jan 17 12:06:50.779322 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.779724 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jan 17 12:06:50.779784 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.779838 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jan 17 12:06:50.779890 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.780255 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jan 17 12:06:50.780318 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.780383 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jan 17 12:06:50.780440 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.780492 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jan 17 12:06:50.780543 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.780595 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jan 17 12:06:50.780648 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.780699 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jan 17 12:06:50.780749 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.780799 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jan 17 12:06:50.780849 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.780899 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jan 17 12:06:50.780952 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781003 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jan 17 12:06:50.781054 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781104 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jan 17 12:06:50.781154 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781204 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jan 17 12:06:50.781256 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781306 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jan 17 12:06:50.781356 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781423 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jan 17 12:06:50.781473 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781528 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jan 17 12:06:50.781582 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781633 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jan 17 12:06:50.781683 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781733 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jan 17 12:06:50.781784 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781836 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jan 17 12:06:50.781885 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781935 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jan 17 12:06:50.782004 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.782056 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jan 17 12:06:50.782106 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.782118 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Jan 17 12:06:50.782125 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:06:50.782131 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:06:50.782138 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jan 17 12:06:50.782144 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:06:50.782150 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:06:50.782205 kernel: rtc_cmos 00:01: registered as rtc0 Jan 17 12:06:50.782267 kernel: rtc_cmos 00:01: setting system clock to 2025-01-17T12:06:50 UTC (1737115610) Jan 17 12:06:50.782313 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jan 17 12:06:50.782322 kernel: intel_pstate: CPU model not supported Jan 17 12:06:50.782329 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:06:50.782335 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:06:50.782341 kernel: Segment Routing with IPv6 Jan 17 12:06:50.782347 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:06:50.782354 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:06:50.782360 kernel: Key type dns_resolver registered Jan 17 12:06:50.782430 kernel: IPI shorthand broadcast: enabled Jan 17 12:06:50.782437 kernel: sched_clock: Marking stable (893255897, 222048307)->(1173911812, -58607608) Jan 17 12:06:50.782443 kernel: registered taskstats version 1 Jan 17 12:06:50.782450 kernel: Loading compiled-in X.509 certificates Jan 17 12:06:50.782456 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:06:50.782462 kernel: Key type .fscrypt registered Jan 17 12:06:50.782468 kernel: Key type fscrypt-provisioning registered Jan 17 12:06:50.782475 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 12:06:50.782481 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:06:50.782489 kernel: ima: No architecture policies found Jan 17 12:06:50.782495 kernel: clk: Disabling unused clocks Jan 17 12:06:50.782502 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:06:50.782509 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:06:50.782520 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:06:50.782529 kernel: Run /init as init process Jan 17 12:06:50.782536 kernel: with arguments: Jan 17 12:06:50.782542 kernel: /init Jan 17 12:06:50.782548 kernel: with environment: Jan 17 12:06:50.782556 kernel: HOME=/ Jan 17 12:06:50.782562 kernel: TERM=linux Jan 17 12:06:50.782569 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:06:50.782577 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:06:50.782585 systemd[1]: Detected virtualization vmware. Jan 17 12:06:50.782592 systemd[1]: Detected architecture x86-64. Jan 17 12:06:50.782598 systemd[1]: Running in initrd. Jan 17 12:06:50.782604 systemd[1]: No hostname configured, using default hostname. Jan 17 12:06:50.782612 systemd[1]: Hostname set to . Jan 17 12:06:50.782619 systemd[1]: Initializing machine ID from random generator. Jan 17 12:06:50.782626 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:06:50.782632 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:06:50.782638 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 17 12:06:50.782645 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:06:50.782652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:06:50.782659 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:06:50.782666 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:06:50.782673 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:06:50.782680 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:06:50.782687 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:06:50.782693 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:06:50.782700 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:06:50.782708 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:06:50.782714 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:06:50.782721 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:06:50.782727 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:06:50.782734 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:06:50.782740 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:06:50.782747 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:06:50.782753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:06:50.782760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:06:50.782768 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 17 12:06:50.782774 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:06:50.782780 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:06:50.782787 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:06:50.782794 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:06:50.782800 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:06:50.782807 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:06:50.782813 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:06:50.782819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:06:50.782827 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:06:50.782847 systemd-journald[216]: Collecting audit messages is disabled. Jan 17 12:06:50.782863 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:06:50.782871 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:06:50.782878 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:06:50.782885 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:06:50.782892 kernel: Bridge firewalling registered Jan 17 12:06:50.782898 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:06:50.782906 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:06:50.782913 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:50.782920 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:06:50.782927 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 17 12:06:50.782933 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:06:50.782940 systemd-journald[216]: Journal started Jan 17 12:06:50.782956 systemd-journald[216]: Runtime Journal (/run/log/journal/7d84556015e14ab08d20cf4d3e5cd876) is 4.8M, max 38.6M, 33.8M free. Jan 17 12:06:50.741494 systemd-modules-load[217]: Inserted module 'overlay' Jan 17 12:06:50.761259 systemd-modules-load[217]: Inserted module 'br_netfilter' Jan 17 12:06:50.785046 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:06:50.789456 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:06:50.790268 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:06:50.790476 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:06:50.791869 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:06:50.792194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:06:50.797725 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:06:50.799468 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:06:50.803689 dracut-cmdline[245]: dracut-dracut-053 Jan 17 12:06:50.806034 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:06:50.818853 systemd-resolved[250]: Positive Trust Anchors: Jan 17 12:06:50.818863 systemd-resolved[250]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:06:50.818885 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:06:50.820554 systemd-resolved[250]: Defaulting to hostname 'linux'. Jan 17 12:06:50.821171 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:06:50.821319 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:06:50.849377 kernel: SCSI subsystem initialized Jan 17 12:06:50.855377 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:06:50.862378 kernel: iscsi: registered transport (tcp) Jan 17 12:06:50.876387 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:06:50.876433 kernel: QLogic iSCSI HBA Driver Jan 17 12:06:50.895849 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:06:50.901477 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:06:50.917338 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 17 12:06:50.917398 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:06:50.917408 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:06:50.948412 kernel: raid6: avx2x4 gen() 52183 MB/s Jan 17 12:06:50.965408 kernel: raid6: avx2x2 gen() 52488 MB/s Jan 17 12:06:50.982611 kernel: raid6: avx2x1 gen() 46198 MB/s Jan 17 12:06:50.982649 kernel: raid6: using algorithm avx2x2 gen() 52488 MB/s Jan 17 12:06:51.000649 kernel: raid6: .... xor() 31108 MB/s, rmw enabled Jan 17 12:06:51.000703 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:06:51.014433 kernel: xor: automatically using best checksumming function avx Jan 17 12:06:51.112389 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:06:51.117545 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:06:51.122483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:06:51.129682 systemd-udevd[434]: Using default interface naming scheme 'v255'. Jan 17 12:06:51.132131 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:06:51.137524 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:06:51.142866 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation Jan 17 12:06:51.158594 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:06:51.164440 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:06:51.231811 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:06:51.235466 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:06:51.245350 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:06:51.246161 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 17 12:06:51.246286 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:06:51.247521 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:06:51.251459 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:06:51.258153 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:06:51.296441 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jan 17 12:06:51.299423 kernel: vmw_pvscsi: using 64bit dma Jan 17 12:06:51.299446 kernel: vmw_pvscsi: max_id: 16 Jan 17 12:06:51.299455 kernel: vmw_pvscsi: setting ring_pages to 8 Jan 17 12:06:51.305053 kernel: vmw_pvscsi: enabling reqCallThreshold Jan 17 12:06:51.305071 kernel: vmw_pvscsi: driver-based request coalescing enabled Jan 17 12:06:51.305079 kernel: vmw_pvscsi: using MSI-X Jan 17 12:06:51.311602 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jan 17 12:06:51.315837 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jan 17 12:06:51.320231 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jan 17 12:06:51.323100 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jan 17 12:06:51.326405 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jan 17 12:06:51.343805 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:06:51.343816 kernel: libata version 3.00 loaded. Jan 17 12:06:51.343823 kernel: ata_piix 0000:00:07.1: version 2.13 Jan 17 12:06:51.343899 kernel: scsi host1: ata_piix Jan 17 12:06:51.343968 kernel: scsi host2: ata_piix Jan 17 12:06:51.344027 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jan 17 12:06:51.344097 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jan 17 12:06:51.344105 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jan 17 12:06:51.344112 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 17 12:06:51.344120 kernel: AES CTR mode by8 optimization enabled Jan 17 12:06:51.344334 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:06:51.344423 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:06:51.344616 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:06:51.344719 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:06:51.344784 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:51.344899 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:06:51.351071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:06:51.363493 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:51.368457 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:06:51.374809 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:06:51.509386 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jan 17 12:06:51.514330 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jan 17 12:06:51.517380 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jan 17 12:06:51.526727 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jan 17 12:06:51.531723 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 12:06:51.531795 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jan 17 12:06:51.531855 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jan 17 12:06:51.531914 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jan 17 12:06:51.531973 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:06:51.531982 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 12:06:51.533722 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jan 17 12:06:51.550168 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 12:06:51.550180 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 17 12:06:51.578420 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (492) Jan 17 12:06:51.582784 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jan 17 12:06:51.586465 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (490) Jan 17 12:06:51.586342 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jan 17 12:06:51.589065 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jan 17 12:06:51.591487 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jan 17 12:06:51.591784 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. 
Jan 17 12:06:51.597514 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:06:51.621380 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:06:51.626396 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:06:52.631377 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:06:52.632081 disk-uuid[593]: The operation has completed successfully. Jan 17 12:06:52.660980 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:06:52.661052 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:06:52.669455 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:06:52.671542 sh[613]: Success Jan 17 12:06:52.680414 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 12:06:52.724529 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:06:52.736307 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:06:52.737644 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:06:52.753389 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:06:52.753427 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:06:52.753440 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:06:52.753675 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:06:52.754493 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:06:52.762387 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 12:06:52.763944 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:06:52.771501 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jan 17 12:06:52.773172 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 17 12:06:52.791438 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:06:52.791482 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:06:52.791491 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:06:52.796438 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:06:52.801073 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:06:52.802392 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:06:52.804702 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:06:52.811554 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:06:52.844109 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 17 12:06:52.849570 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 17 12:06:52.885991 ignition[675]: Ignition 2.19.0 Jan 17 12:06:52.886972 ignition[675]: Stage: fetch-offline Jan 17 12:06:52.887003 ignition[675]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:52.887009 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 17 12:06:52.887065 ignition[675]: parsed url from cmdline: "" Jan 17 12:06:52.887067 ignition[675]: no config URL provided Jan 17 12:06:52.887070 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:06:52.887074 ignition[675]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:06:52.887574 ignition[675]: config successfully fetched Jan 17 12:06:52.887591 ignition[675]: parsing config with SHA512: bbb66fdfd44918a2d21261758adb7418d75c35ad9e3d50b732a3000a820c8520ab63976c9f3311150810323654e0a8af7e847b6b374d66c3216060d376a95ae4 Jan 17 12:06:52.890263 unknown[675]: fetched base config from "system" Jan 17 12:06:52.890268 unknown[675]: fetched user config from "vmware" Jan 17 12:06:52.890794 ignition[675]: fetch-offline: fetch-offline passed Jan 17 12:06:52.890836 ignition[675]: Ignition finished successfully Jan 17 12:06:52.891564 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:06:52.912203 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:06:52.917478 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:06:52.928799 systemd-networkd[808]: lo: Link UP Jan 17 12:06:52.928805 systemd-networkd[808]: lo: Gained carrier Jan 17 12:06:52.929503 systemd-networkd[808]: Enumeration completed Jan 17 12:06:52.929554 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:06:52.929701 systemd[1]: Reached target network.target - Network. Jan 17 12:06:52.929758 systemd-networkd[808]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. 
Jan 17 12:06:52.933778 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jan 17 12:06:52.933878 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jan 17 12:06:52.929791 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 12:06:52.933289 systemd-networkd[808]: ens192: Link UP Jan 17 12:06:52.933291 systemd-networkd[808]: ens192: Gained carrier Jan 17 12:06:52.935162 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:06:52.941535 ignition[810]: Ignition 2.19.0 Jan 17 12:06:52.941542 ignition[810]: Stage: kargs Jan 17 12:06:52.941694 ignition[810]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:52.941703 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 17 12:06:52.943276 ignition[810]: kargs: kargs passed Jan 17 12:06:52.943303 ignition[810]: Ignition finished successfully Jan 17 12:06:52.944440 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:06:52.952476 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:06:52.959541 ignition[817]: Ignition 2.19.0 Jan 17 12:06:52.959548 ignition[817]: Stage: disks Jan 17 12:06:52.959647 ignition[817]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:52.959653 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 17 12:06:52.960225 ignition[817]: disks: disks passed Jan 17 12:06:52.960254 ignition[817]: Ignition finished successfully Jan 17 12:06:52.961130 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:06:52.961477 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:06:52.961734 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:06:52.961981 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 17 12:06:52.962199 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:06:52.962444 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:06:52.966456 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:06:52.976144 systemd-fsck[825]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 12:06:52.977040 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:06:52.981436 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:06:53.037403 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:06:53.037445 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:06:53.037954 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:06:53.046410 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:06:53.047942 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:06:53.048199 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:06:53.048223 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:06:53.048237 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:06:53.050859 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:06:53.051604 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 17 12:06:53.056375 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (833) Jan 17 12:06:53.059607 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:06:53.059625 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:06:53.059633 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:06:53.064643 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:06:53.064968 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:06:53.080361 initrd-setup-root[857]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:06:53.082679 initrd-setup-root[864]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:06:53.084827 initrd-setup-root[871]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:06:53.086872 initrd-setup-root[878]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:06:53.136720 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:06:53.140450 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:06:53.142899 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:06:53.145371 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:06:53.158514 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:06:53.159020 ignition[946]: INFO : Ignition 2.19.0 Jan 17 12:06:53.159020 ignition[946]: INFO : Stage: mount Jan 17 12:06:53.159020 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:53.160029 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 17 12:06:53.160029 ignition[946]: INFO : mount: mount passed Jan 17 12:06:53.160029 ignition[946]: INFO : Ignition finished successfully Jan 17 12:06:53.160927 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jan 17 12:06:53.165426 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:06:53.750753 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:06:53.755501 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:06:53.764379 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (957)
Jan 17 12:06:53.764410 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:06:53.764419 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:06:53.765969 kernel: BTRFS info (device sda6): using free space tree
Jan 17 12:06:53.770374 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 12:06:53.770771 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:06:53.783557 ignition[974]: INFO : Ignition 2.19.0
Jan 17 12:06:53.784417 ignition[974]: INFO : Stage: files
Jan 17 12:06:53.784417 ignition[974]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:06:53.784417 ignition[974]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 17 12:06:53.784727 ignition[974]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 12:06:53.785238 ignition[974]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 12:06:53.785238 ignition[974]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 12:06:53.787288 ignition[974]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 12:06:53.787431 ignition[974]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 12:06:53.787562 unknown[974]: wrote ssh authorized keys file for user: core
Jan 17 12:06:53.787760 ignition[974]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 12:06:53.789057 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:06:53.789272 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 17 12:06:53.827376 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 12:06:53.902534 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:06:53.902534 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:06:53.904234 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:06:53.904234 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 17 12:06:53.904234 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 17 12:06:53.904234 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 17 12:06:53.904234 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 17 12:06:54.396387 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 12:06:54.574999 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 17 12:06:54.575265 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jan 17 12:06:54.575517 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jan 17 12:06:54.575517 ignition[974]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 17 12:06:54.786642 ignition[974]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 17 12:06:54.789826 ignition[974]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 17 12:06:54.789826 ignition[974]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 17 12:06:54.789826 ignition[974]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 12:06:54.789826 ignition[974]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 12:06:54.789826 ignition[974]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:06:54.790934 ignition[974]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:06:54.790934 ignition[974]: INFO : files: files passed
Jan 17 12:06:54.790934 ignition[974]: INFO : Ignition finished successfully
Jan 17 12:06:54.790998 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 12:06:54.795479 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 12:06:54.797031 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 12:06:54.797590 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 12:06:54.797644 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 12:06:54.803790 initrd-setup-root-after-ignition[1005]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:06:54.803790 initrd-setup-root-after-ignition[1005]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:06:54.805038 initrd-setup-root-after-ignition[1009]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:06:54.805685 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:06:54.806212 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 12:06:54.806890 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 12:06:54.826702 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 12:06:54.826762 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 12:06:54.827059 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 12:06:54.827205 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 12:06:54.827446 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 12:06:54.827988 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 12:06:54.850546 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:06:54.856495 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 12:06:54.862965 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:06:54.863156 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:06:54.863333 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 12:06:54.863552 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 12:06:54.863630 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:06:54.863966 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 12:06:54.864182 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 12:06:54.864484 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 12:06:54.864612 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:06:54.864829 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 12:06:54.865012 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 12:06:54.865215 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:06:54.865617 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 12:06:54.865816 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 12:06:54.866002 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 12:06:54.866182 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 12:06:54.866255 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:06:54.866517 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:06:54.866769 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:06:54.866942 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 12:06:54.866988 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:06:54.867124 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 12:06:54.867184 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:06:54.867446 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 12:06:54.867509 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:06:54.867748 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 12:06:54.867906 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 12:06:54.870387 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:06:54.870558 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 12:06:54.870762 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 12:06:54.870944 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 12:06:54.871015 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:06:54.871228 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 12:06:54.871274 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:06:54.871524 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 12:06:54.871589 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:06:54.871866 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 12:06:54.871924 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 12:06:54.878507 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 12:06:54.880497 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 12:06:54.880603 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 12:06:54.880714 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:06:54.880915 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 12:06:54.880996 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:06:54.883528 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 12:06:54.884194 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 12:06:54.888043 ignition[1029]: INFO : Ignition 2.19.0
Jan 17 12:06:54.888976 ignition[1029]: INFO : Stage: umount
Jan 17 12:06:54.888976 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:06:54.888976 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 17 12:06:54.889855 ignition[1029]: INFO : umount: umount passed
Jan 17 12:06:54.890015 ignition[1029]: INFO : Ignition finished successfully
Jan 17 12:06:54.890975 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 12:06:54.891034 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 12:06:54.891498 systemd[1]: Stopped target network.target - Network.
Jan 17 12:06:54.891723 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 12:06:54.891751 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 12:06:54.892038 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 12:06:54.892062 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 12:06:54.892306 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 12:06:54.892327 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 12:06:54.892706 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 12:06:54.892728 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 12:06:54.893058 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 12:06:54.893574 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 12:06:54.896427 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 12:06:54.898063 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 12:06:54.898137 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 12:06:54.898697 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 12:06:54.898727 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:06:54.901538 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 12:06:54.901670 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 12:06:54.901698 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:06:54.902222 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jan 17 12:06:54.902245 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jan 17 12:06:54.902405 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:06:54.902614 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 12:06:54.902664 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 12:06:54.905602 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 12:06:54.905650 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:06:54.905864 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 12:06:54.905888 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:06:54.906002 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 12:06:54.906023 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:06:54.914717 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 12:06:54.914791 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:06:54.915192 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 12:06:54.915224 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:06:54.915356 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 12:06:54.915385 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:06:54.915492 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 12:06:54.915523 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:06:54.915698 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 12:06:54.915719 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:06:54.915856 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:06:54.915876 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:06:54.917581 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 12:06:54.917823 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 12:06:54.917852 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:06:54.917967 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 12:06:54.917989 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:06:54.918099 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 12:06:54.918120 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:06:54.918226 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:06:54.918246 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:06:54.918534 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 12:06:54.918581 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 12:06:54.922192 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 12:06:54.922245 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 12:06:55.007913 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 12:06:55.007978 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 12:06:55.008419 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 12:06:55.008553 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 12:06:55.008583 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 12:06:55.013452 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 12:06:55.022313 systemd[1]: Switching root.
Jan 17 12:06:55.047491 systemd-journald[216]: Journal stopped
Jan 17 12:06:50.728686 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:06:50.728703 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:06:50.728710 kernel: Disabled fast string operations
Jan 17 12:06:50.728714 kernel: BIOS-provided physical RAM map:
Jan 17 12:06:50.728718 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jan 17 12:06:50.728722 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jan 17 12:06:50.728728 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jan 17 12:06:50.728732 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jan 17 12:06:50.728737 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jan 17 12:06:50.728741 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jan 17 12:06:50.728745 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jan 17 12:06:50.728749 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jan 17 12:06:50.728753 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jan 17 12:06:50.728757 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 17 12:06:50.728764 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jan 17 12:06:50.728769 kernel: NX (Execute Disable) protection: active
Jan 17 12:06:50.728774 kernel: APIC: Static calls initialized
Jan 17 12:06:50.728778 kernel: SMBIOS 2.7 present.
Jan 17 12:06:50.728784 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Jan 17 12:06:50.728788 kernel: vmware: hypercall mode: 0x00
Jan 17 12:06:50.728793 kernel: Hypervisor detected: VMware
Jan 17 12:06:50.728798 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Jan 17 12:06:50.728804 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Jan 17 12:06:50.728809 kernel: vmware: using clock offset of 3861732559 ns
Jan 17 12:06:50.728813 kernel: tsc: Detected 3408.000 MHz processor
Jan 17 12:06:50.728819 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:06:50.728824 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:06:50.728829 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Jan 17 12:06:50.728834 kernel: total RAM covered: 3072M
Jan 17 12:06:50.728838 kernel: Found optimal setting for mtrr clean up
Jan 17 12:06:50.728844 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Jan 17 12:06:50.728850 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Jan 17 12:06:50.728854 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:06:50.728859 kernel: Using GB pages for direct mapping
Jan 17 12:06:50.728864 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:06:50.728869 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Jan 17 12:06:50.728874 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Jan 17 12:06:50.728879 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Jan 17 12:06:50.728884 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Jan 17 12:06:50.728889 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jan 17 12:06:50.728896 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jan 17 12:06:50.728901 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Jan 17 12:06:50.728906 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Jan 17 12:06:50.728912 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Jan 17 12:06:50.728917 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Jan 17 12:06:50.728923 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Jan 17 12:06:50.728928 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Jan 17 12:06:50.728933 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Jan 17 12:06:50.728938 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Jan 17 12:06:50.728943 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jan 17 12:06:50.728948 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jan 17 12:06:50.728953 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Jan 17 12:06:50.728958 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Jan 17 12:06:50.728964 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Jan 17 12:06:50.728969 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Jan 17 12:06:50.728975 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Jan 17 12:06:50.728980 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Jan 17 12:06:50.728985 kernel: system APIC only can use physical flat
Jan 17 12:06:50.728990 kernel: APIC: Switched APIC routing to: physical flat
Jan 17 12:06:50.728995 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 12:06:50.729000 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 17 12:06:50.729005 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 17 12:06:50.729010 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 17 12:06:50.729015 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 17 12:06:50.729021 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 17 12:06:50.729026 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 17 12:06:50.729031 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 17 12:06:50.729037 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Jan 17 12:06:50.729041 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Jan 17 12:06:50.729046 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Jan 17 12:06:50.729052 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Jan 17 12:06:50.729056 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Jan 17 12:06:50.729061 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Jan 17 12:06:50.729066 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Jan 17 12:06:50.729071 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Jan 17 12:06:50.729077 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Jan 17 12:06:50.729082 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Jan 17 12:06:50.729087 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Jan 17 12:06:50.729092 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Jan 17 12:06:50.729097 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Jan 17 12:06:50.729102 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Jan 17 12:06:50.729107 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Jan 17 12:06:50.729112 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Jan 17 12:06:50.729117 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Jan 17 12:06:50.729122 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Jan 17 12:06:50.729128 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Jan 17 12:06:50.729133 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Jan 17 12:06:50.729138 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Jan 17 12:06:50.729143 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Jan 17 12:06:50.729148 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Jan 17 12:06:50.729153 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Jan 17 12:06:50.729158 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Jan 17 12:06:50.729163 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Jan 17 12:06:50.729168 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Jan 17 12:06:50.729173 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Jan 17 12:06:50.729179 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Jan 17 12:06:50.729184 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Jan 17 12:06:50.729189 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Jan 17 12:06:50.729194 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Jan 17 12:06:50.729199 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Jan 17 12:06:50.729204 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Jan 17 12:06:50.729209 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Jan 17 12:06:50.729214 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Jan 17 12:06:50.729219 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Jan 17 12:06:50.729224 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Jan 17 12:06:50.729230 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Jan 17 12:06:50.729236 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Jan 17 12:06:50.729240 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Jan 17 12:06:50.729245 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Jan 17 12:06:50.729251 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Jan 17 12:06:50.729256 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Jan 17 12:06:50.729260 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Jan 17 12:06:50.729265 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Jan 17 12:06:50.729271 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Jan 17 12:06:50.729275 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Jan 17 12:06:50.729282 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Jan 17 12:06:50.729287 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Jan 17 12:06:50.729292 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Jan 17 12:06:50.729301 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Jan 17 12:06:50.729307 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Jan 17 12:06:50.729313 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Jan 17 12:06:50.729318 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Jan 17 12:06:50.729323 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Jan 17 12:06:50.729329 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Jan 17 12:06:50.729335 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Jan 17 12:06:50.729341 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Jan 17 12:06:50.729346 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Jan 17 12:06:50.729351 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Jan 17 12:06:50.729357 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Jan 17 12:06:50.729369 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Jan 17 12:06:50.729376 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Jan 17 12:06:50.729382 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Jan 17 12:06:50.729387 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Jan 17 12:06:50.729392 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Jan 17 12:06:50.729400 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Jan 17 12:06:50.729405 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Jan 17 12:06:50.729410 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Jan 17 12:06:50.729416 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Jan 17 12:06:50.729421 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Jan 17 12:06:50.729426 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Jan 17 12:06:50.729432 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Jan 17 12:06:50.729437 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Jan 17 12:06:50.729442 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Jan 17 12:06:50.729448 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Jan 17 12:06:50.729454 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Jan 17 12:06:50.729460 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Jan 17 12:06:50.729465 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Jan 17 12:06:50.729470 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Jan 17 12:06:50.729476 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Jan 17 12:06:50.729481 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Jan 17 12:06:50.729486 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Jan 17 12:06:50.729492 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Jan 17 12:06:50.729497 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Jan 17 12:06:50.729502 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Jan 17 12:06:50.729507 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Jan 17 12:06:50.729514 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Jan 17 12:06:50.729519 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Jan 17 12:06:50.729525 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Jan 17 12:06:50.729530 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Jan 17 12:06:50.729535 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Jan 17 12:06:50.729541 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Jan 17 12:06:50.729546 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Jan 17 12:06:50.729552 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Jan 17 12:06:50.729557 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Jan 17 12:06:50.729562 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Jan 17 12:06:50.729569 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Jan 17 12:06:50.729574 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Jan 17 12:06:50.729579 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Jan 17 12:06:50.729584 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Jan 17 12:06:50.729590 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Jan 17 12:06:50.729595 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Jan 17 12:06:50.729601 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Jan 17 12:06:50.729606 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Jan 17 12:06:50.729611 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Jan 17 12:06:50.729617 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Jan 17 12:06:50.729623 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Jan 17 12:06:50.729629 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Jan 17 12:06:50.729634 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Jan 17 12:06:50.729640 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Jan 17 12:06:50.729645 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Jan 17 12:06:50.729651 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Jan 17 12:06:50.729656 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Jan 17 12:06:50.729661 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Jan 17 12:06:50.729667 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Jan 17 12:06:50.729672 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Jan 17 12:06:50.729679 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Jan 17 12:06:50.729684 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Jan 17 12:06:50.729689 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 12:06:50.729695 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 17 12:06:50.729701 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jan 17 12:06:50.729706 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Jan 17 12:06:50.729712 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Jan 17 12:06:50.729717 kernel: Zone ranges:
Jan 17 12:06:50.729723 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:06:50.729728 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Jan 17 12:06:50.729735 kernel: Normal empty
Jan 17 12:06:50.729741 kernel: Movable zone start for each node
Jan 17 12:06:50.729746 kernel: Early memory node ranges
Jan 17 12:06:50.729751 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Jan 17 12:06:50.729757 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Jan 17 12:06:50.729762 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Jan 17 12:06:50.729768 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jan 17 12:06:50.729773 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:06:50.729779 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jan 17 12:06:50.729785 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jan 17 12:06:50.729791 kernel: ACPI: PM-Timer IO Port: 0x1008
Jan 17 12:06:50.729796 kernel: system APIC only can use physical flat
Jan 17 12:06:50.729802 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jan 17 12:06:50.729807 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 17 12:06:50.729813 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 17 12:06:50.729818 kernel: ACPI: LAPIC_NMI
(acpi_id[0x03] high edge lint[0x1]) Jan 17 12:06:50.729824 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 17 12:06:50.729829 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 17 12:06:50.729834 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 17 12:06:50.729841 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 17 12:06:50.729846 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 17 12:06:50.729851 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 17 12:06:50.729857 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 17 12:06:50.729862 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 17 12:06:50.729868 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 17 12:06:50.729873 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 17 12:06:50.729878 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 17 12:06:50.729884 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 17 12:06:50.729890 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 17 12:06:50.729896 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jan 17 12:06:50.729901 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jan 17 12:06:50.729907 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jan 17 12:06:50.729912 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jan 17 12:06:50.729918 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jan 17 12:06:50.729923 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jan 17 12:06:50.729928 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jan 17 12:06:50.729934 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jan 17 12:06:50.729939 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jan 17 12:06:50.729946 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jan 17 12:06:50.729951 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x1b] high edge lint[0x1]) Jan 17 12:06:50.729956 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jan 17 12:06:50.729962 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jan 17 12:06:50.729967 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jan 17 12:06:50.729973 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jan 17 12:06:50.729978 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jan 17 12:06:50.729983 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jan 17 12:06:50.729989 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jan 17 12:06:50.729994 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jan 17 12:06:50.730001 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jan 17 12:06:50.730006 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jan 17 12:06:50.730011 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jan 17 12:06:50.730017 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jan 17 12:06:50.730022 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jan 17 12:06:50.730027 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jan 17 12:06:50.730033 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jan 17 12:06:50.730038 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jan 17 12:06:50.730043 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jan 17 12:06:50.730049 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jan 17 12:06:50.730055 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jan 17 12:06:50.730060 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jan 17 12:06:50.730066 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jan 17 12:06:50.730071 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jan 17 12:06:50.730077 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jan 17 12:06:50.730082 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x33] high edge lint[0x1]) Jan 17 12:06:50.730088 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jan 17 12:06:50.730093 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jan 17 12:06:50.730098 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jan 17 12:06:50.730105 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jan 17 12:06:50.730111 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jan 17 12:06:50.730116 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jan 17 12:06:50.730121 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jan 17 12:06:50.730127 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jan 17 12:06:50.730132 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jan 17 12:06:50.730137 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jan 17 12:06:50.730143 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jan 17 12:06:50.730148 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jan 17 12:06:50.730153 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jan 17 12:06:50.730160 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jan 17 12:06:50.730165 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jan 17 12:06:50.730171 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jan 17 12:06:50.730176 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jan 17 12:06:50.730182 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jan 17 12:06:50.730187 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jan 17 12:06:50.730192 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jan 17 12:06:50.730198 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jan 17 12:06:50.730203 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jan 17 12:06:50.730209 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jan 17 12:06:50.730215 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x4b] high edge lint[0x1]) Jan 17 12:06:50.730220 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jan 17 12:06:50.730225 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jan 17 12:06:50.730231 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jan 17 12:06:50.730236 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jan 17 12:06:50.730241 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jan 17 12:06:50.730247 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jan 17 12:06:50.730252 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jan 17 12:06:50.730257 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jan 17 12:06:50.730264 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jan 17 12:06:50.730269 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jan 17 12:06:50.730275 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jan 17 12:06:50.730280 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jan 17 12:06:50.730286 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jan 17 12:06:50.730291 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jan 17 12:06:50.730296 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jan 17 12:06:50.730302 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jan 17 12:06:50.730307 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jan 17 12:06:50.730313 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jan 17 12:06:50.730319 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jan 17 12:06:50.730325 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jan 17 12:06:50.730330 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jan 17 12:06:50.730336 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jan 17 12:06:50.730341 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jan 17 12:06:50.730347 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x63] high edge lint[0x1]) Jan 17 12:06:50.730352 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jan 17 12:06:50.730358 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jan 17 12:06:50.730371 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jan 17 12:06:50.730379 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jan 17 12:06:50.730384 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jan 17 12:06:50.730390 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jan 17 12:06:50.730395 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jan 17 12:06:50.730401 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jan 17 12:06:50.730406 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jan 17 12:06:50.730411 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jan 17 12:06:50.730417 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jan 17 12:06:50.730422 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jan 17 12:06:50.730428 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jan 17 12:06:50.730434 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jan 17 12:06:50.730440 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jan 17 12:06:50.730445 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jan 17 12:06:50.730451 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jan 17 12:06:50.730456 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jan 17 12:06:50.730461 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jan 17 12:06:50.730467 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jan 17 12:06:50.730472 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jan 17 12:06:50.730477 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jan 17 12:06:50.730483 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jan 17 12:06:50.730490 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x7b] high edge lint[0x1]) Jan 17 12:06:50.730495 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jan 17 12:06:50.730500 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jan 17 12:06:50.730506 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jan 17 12:06:50.730517 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jan 17 12:06:50.730523 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:06:50.730529 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jan 17 12:06:50.730534 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:06:50.730540 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jan 17 12:06:50.730547 kernel: TSC deadline timer available Jan 17 12:06:50.730553 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jan 17 12:06:50.730558 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jan 17 12:06:50.730564 kernel: Booting paravirtualized kernel on VMware hypervisor Jan 17 12:06:50.730569 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:06:50.730575 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jan 17 12:06:50.730580 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 17 12:06:50.730586 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 17 12:06:50.730591 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jan 17 12:06:50.730598 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jan 17 12:06:50.730603 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jan 17 12:06:50.730609 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jan 17 12:06:50.730614 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jan 17 12:06:50.730629 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jan 17 12:06:50.730635 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jan 17 
12:06:50.730641 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jan 17 12:06:50.730648 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jan 17 12:06:50.730654 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jan 17 12:06:50.730661 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jan 17 12:06:50.730666 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jan 17 12:06:50.730672 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jan 17 12:06:50.730678 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jan 17 12:06:50.730684 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jan 17 12:06:50.730689 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jan 17 12:06:50.730696 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:06:50.730702 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jan 17 12:06:50.730708 kernel: random: crng init done Jan 17 12:06:50.730714 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jan 17 12:06:50.730720 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jan 17 12:06:50.730726 kernel: printk: log_buf_len min size: 262144 bytes Jan 17 12:06:50.730732 kernel: printk: log_buf_len: 1048576 bytes Jan 17 12:06:50.730737 kernel: printk: early log buf free: 239648(91%) Jan 17 12:06:50.730743 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:06:50.730749 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 12:06:50.730755 kernel: Fallback order for Node 0: 0 Jan 17 12:06:50.730763 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jan 17 12:06:50.730768 kernel: Policy zone: DMA32 Jan 17 12:06:50.730774 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:06:50.730780 kernel: Memory: 1936348K/2096628K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 160020K reserved, 0K cma-reserved) Jan 17 12:06:50.730787 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jan 17 12:06:50.730794 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:06:50.730800 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:06:50.730806 kernel: Dynamic Preempt: voluntary Jan 17 12:06:50.730812 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:06:50.730818 kernel: rcu: RCU event tracing is enabled. Jan 17 12:06:50.730824 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jan 17 12:06:50.730830 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:06:50.730836 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:06:50.730841 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:06:50.730847 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 17 12:06:50.730854 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jan 17 12:06:50.730860 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jan 17 12:06:50.730866 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jan 17 12:06:50.730872 kernel: Console: colour VGA+ 80x25 Jan 17 12:06:50.730878 kernel: printk: console [tty0] enabled Jan 17 12:06:50.730883 kernel: printk: console [ttyS0] enabled Jan 17 12:06:50.730889 kernel: ACPI: Core revision 20230628 Jan 17 12:06:50.730895 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jan 17 12:06:50.730901 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:06:50.730908 kernel: x2apic enabled Jan 17 12:06:50.730914 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:06:50.730920 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 12:06:50.730927 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 17 12:06:50.730933 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jan 17 12:06:50.730938 kernel: Disabled fast string operations Jan 17 12:06:50.730944 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 17 12:06:50.730950 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 17 12:06:50.730956 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:06:50.730964 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 17 12:06:50.730969 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 17 12:06:50.730975 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 17 12:06:50.730981 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:06:50.730987 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 17 12:06:50.730993 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 17 12:06:50.730999 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:06:50.731005 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:06:50.731011 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:06:50.731018 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 17 12:06:50.731024 kernel: GDS: Unknown: Dependent on hypervisor status Jan 17 12:06:50.731030 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:06:50.731036 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:06:50.731042 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:06:50.731048 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:06:50.731055 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jan 17 12:06:50.731061 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:06:50.731067 kernel: pid_max: default: 131072 minimum: 1024 Jan 17 12:06:50.731075 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:06:50.731081 kernel: landlock: Up and running. Jan 17 12:06:50.731087 kernel: SELinux: Initializing. Jan 17 12:06:50.731092 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:06:50.731099 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:06:50.731104 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 17 12:06:50.731111 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 17 12:06:50.731117 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 17 12:06:50.731124 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 17 12:06:50.731131 kernel: Performance Events: Skylake events, core PMU driver. Jan 17 12:06:50.731136 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jan 17 12:06:50.731142 kernel: core: CPUID marked event: 'instructions' unavailable Jan 17 12:06:50.731148 kernel: core: CPUID marked event: 'bus cycles' unavailable Jan 17 12:06:50.731154 kernel: core: CPUID marked event: 'cache references' unavailable Jan 17 12:06:50.731160 kernel: core: CPUID marked event: 'cache misses' unavailable Jan 17 12:06:50.731165 kernel: core: CPUID marked event: 'branch instructions' unavailable Jan 17 12:06:50.731171 kernel: core: CPUID marked event: 'branch misses' unavailable Jan 17 12:06:50.731178 kernel: ... version: 1 Jan 17 12:06:50.731184 kernel: ... bit width: 48 Jan 17 12:06:50.731190 kernel: ... generic registers: 4 Jan 17 12:06:50.731196 kernel: ... value mask: 0000ffffffffffff Jan 17 12:06:50.731202 kernel: ... 
max period: 000000007fffffff Jan 17 12:06:50.731207 kernel: ... fixed-purpose events: 0 Jan 17 12:06:50.731213 kernel: ... event mask: 000000000000000f Jan 17 12:06:50.731219 kernel: signal: max sigframe size: 1776 Jan 17 12:06:50.731225 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:06:50.731232 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:06:50.731238 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 12:06:50.731244 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:06:50.731250 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:06:50.731256 kernel: .... node #0, CPUs: #1 Jan 17 12:06:50.731261 kernel: Disabled fast string operations Jan 17 12:06:50.731267 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jan 17 12:06:50.731273 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 17 12:06:50.731279 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:06:50.731284 kernel: smpboot: Max logical packages: 128 Jan 17 12:06:50.731292 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jan 17 12:06:50.731297 kernel: devtmpfs: initialized Jan 17 12:06:50.731303 kernel: x86/mm: Memory block size: 128MB Jan 17 12:06:50.731309 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jan 17 12:06:50.731315 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:06:50.731321 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jan 17 12:06:50.731327 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:06:50.731333 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:06:50.731339 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:06:50.731346 kernel: audit: type=2000 audit(1737115609.066:1): state=initialized audit_enabled=0 res=1 Jan 17 12:06:50.731352 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:06:50.731358 
kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:06:50.731374 kernel: cpuidle: using governor menu Jan 17 12:06:50.731381 kernel: Simple Boot Flag at 0x36 set to 0x80 Jan 17 12:06:50.731387 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:06:50.731393 kernel: dca service started, version 1.12.1 Jan 17 12:06:50.731399 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jan 17 12:06:50.731405 kernel: PCI: Using configuration type 1 for base access Jan 17 12:06:50.731413 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 17 12:06:50.731419 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:06:50.731426 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:06:50.731432 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:06:50.731438 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:06:50.731443 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:06:50.731449 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:06:50.731455 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:06:50.731461 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:06:50.731468 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:06:50.731474 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 17 12:06:50.731480 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:06:50.731486 kernel: ACPI: Interpreter enabled Jan 17 12:06:50.731492 kernel: ACPI: PM: (supports S0 S1 S5) Jan 17 12:06:50.731498 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:06:50.731504 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:06:50.731510 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:06:50.731516 kernel: ACPI: Enabled 4 
GPEs in block 00 to 0F Jan 17 12:06:50.731523 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jan 17 12:06:50.731609 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:06:50.731665 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jan 17 12:06:50.731714 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 17 12:06:50.731723 kernel: PCI host bridge to bus 0000:00 Jan 17 12:06:50.731776 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:06:50.731823 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jan 17 12:06:50.731867 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 12:06:50.731910 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:06:50.731953 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jan 17 12:06:50.731995 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jan 17 12:06:50.732052 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jan 17 12:06:50.732105 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jan 17 12:06:50.732161 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jan 17 12:06:50.732215 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jan 17 12:06:50.732265 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jan 17 12:06:50.732313 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 17 12:06:50.732553 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 17 12:06:50.732807 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 17 12:06:50.732863 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 17 12:06:50.732918 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jan 17 12:06:50.732969 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed 
by PIIX4 ACPI Jan 17 12:06:50.733018 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jan 17 12:06:50.733070 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jan 17 12:06:50.733120 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jan 17 12:06:50.733170 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jan 17 12:06:50.733222 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jan 17 12:06:50.733271 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jan 17 12:06:50.733319 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jan 17 12:06:50.733423 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jan 17 12:06:50.733477 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jan 17 12:06:50.735412 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:06:50.735481 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jan 17 12:06:50.735557 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.735612 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.735667 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.735718 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.735772 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.735823 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.735880 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.735930 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.735984 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736034 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736088 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736138 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot 
D3cold Jan 17 12:06:50.736195 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736245 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736301 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736351 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736444 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736496 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736552 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736602 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736654 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736703 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736754 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736806 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736857 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.736907 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.736960 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737009 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737061 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737110 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737166 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737215 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737270 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737320 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737383 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737436 kernel: pci 0000:00:17.1: 
PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737491 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737544 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737597 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737647 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737699 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737748 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737804 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737854 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.737907 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.737957 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.738009 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.738059 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.738117 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.738167 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.738220 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.738270 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.738322 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.740533 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.740603 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.740662 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.740718 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.740769 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.740822 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jan 17 
12:06:50.740872 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.740924 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.740977 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.741030 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jan 17 12:06:50.741079 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jan 17 12:06:50.741130 kernel: pci_bus 0000:01: extended config space not accessible Jan 17 12:06:50.741181 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 17 12:06:50.741234 kernel: pci_bus 0000:02: extended config space not accessible Jan 17 12:06:50.741245 kernel: acpiphp: Slot [32] registered Jan 17 12:06:50.741252 kernel: acpiphp: Slot [33] registered Jan 17 12:06:50.741258 kernel: acpiphp: Slot [34] registered Jan 17 12:06:50.741264 kernel: acpiphp: Slot [35] registered Jan 17 12:06:50.741270 kernel: acpiphp: Slot [36] registered Jan 17 12:06:50.741276 kernel: acpiphp: Slot [37] registered Jan 17 12:06:50.741282 kernel: acpiphp: Slot [38] registered Jan 17 12:06:50.741288 kernel: acpiphp: Slot [39] registered Jan 17 12:06:50.741293 kernel: acpiphp: Slot [40] registered Jan 17 12:06:50.741301 kernel: acpiphp: Slot [41] registered Jan 17 12:06:50.741306 kernel: acpiphp: Slot [42] registered Jan 17 12:06:50.741312 kernel: acpiphp: Slot [43] registered Jan 17 12:06:50.741318 kernel: acpiphp: Slot [44] registered Jan 17 12:06:50.741324 kernel: acpiphp: Slot [45] registered Jan 17 12:06:50.741329 kernel: acpiphp: Slot [46] registered Jan 17 12:06:50.741335 kernel: acpiphp: Slot [47] registered Jan 17 12:06:50.741341 kernel: acpiphp: Slot [48] registered Jan 17 12:06:50.741347 kernel: acpiphp: Slot [49] registered Jan 17 12:06:50.741353 kernel: acpiphp: Slot [50] registered Jan 17 12:06:50.741360 kernel: acpiphp: Slot [51] registered Jan 17 12:06:50.741374 kernel: acpiphp: Slot [52] registered Jan 17 12:06:50.741381 kernel: acpiphp: Slot [53] registered 
Jan 17 12:06:50.741387 kernel: acpiphp: Slot [54] registered Jan 17 12:06:50.741393 kernel: acpiphp: Slot [55] registered Jan 17 12:06:50.741399 kernel: acpiphp: Slot [56] registered Jan 17 12:06:50.741404 kernel: acpiphp: Slot [57] registered Jan 17 12:06:50.741410 kernel: acpiphp: Slot [58] registered Jan 17 12:06:50.741416 kernel: acpiphp: Slot [59] registered Jan 17 12:06:50.741424 kernel: acpiphp: Slot [60] registered Jan 17 12:06:50.741430 kernel: acpiphp: Slot [61] registered Jan 17 12:06:50.741436 kernel: acpiphp: Slot [62] registered Jan 17 12:06:50.741442 kernel: acpiphp: Slot [63] registered Jan 17 12:06:50.741513 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jan 17 12:06:50.741568 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 17 12:06:50.741617 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 17 12:06:50.741666 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 17 12:06:50.741715 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jan 17 12:06:50.741767 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jan 17 12:06:50.741815 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jan 17 12:06:50.741863 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jan 17 12:06:50.741912 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jan 17 12:06:50.741967 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jan 17 12:06:50.742018 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jan 17 12:06:50.742068 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jan 17 12:06:50.742120 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 17 12:06:50.742170 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 17 
12:06:50.742242 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Jan 17 12:06:50.742306 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 17 12:06:50.742357 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 17 12:06:50.743085 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 17 12:06:50.743142 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 17 12:06:50.743196 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 17 12:06:50.743246 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 17 12:06:50.743296 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 17 12:06:50.743347 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 17 12:06:50.743406 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 17 12:06:50.743456 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 17 12:06:50.743505 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 17 12:06:50.743555 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 17 12:06:50.743607 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 17 12:06:50.743656 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 17 12:06:50.743705 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 17 12:06:50.743754 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 17 12:06:50.747370 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 17 12:06:50.747448 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 17 12:06:50.747503 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 17 12:06:50.747557 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 17 12:06:50.747609 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 17 12:06:50.747659 kernel: pci 0000:00:15.6: bridge window [mem 
0xfbd00000-0xfbdfffff] Jan 17 12:06:50.747708 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 17 12:06:50.747758 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 17 12:06:50.747810 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 17 12:06:50.747858 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 17 12:06:50.747913 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jan 17 12:06:50.747965 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jan 17 12:06:50.748016 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jan 17 12:06:50.748065 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jan 17 12:06:50.748115 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jan 17 12:06:50.748165 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 17 12:06:50.748218 kernel: pci 0000:0b:00.0: supports D1 D2 Jan 17 12:06:50.748267 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 17 12:06:50.748316 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 17 12:06:50.749394 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 17 12:06:50.749450 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 17 12:06:50.749499 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 17 12:06:50.749548 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 17 12:06:50.749601 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 17 12:06:50.749649 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 17 12:06:50.749701 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 17 12:06:50.749751 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 17 12:06:50.749800 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 17 12:06:50.749849 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 17 12:06:50.749898 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 17 12:06:50.749949 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 17 12:06:50.750000 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 17 12:06:50.750048 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 17 12:06:50.750099 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 17 12:06:50.750147 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 17 12:06:50.750195 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 17 12:06:50.750246 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 17 12:06:50.750295 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 17 12:06:50.750343 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 17 12:06:50.751414 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 17 12:06:50.751467 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 17 12:06:50.751521 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 17 12:06:50.751573 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 17 12:06:50.751622 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 17 12:06:50.751671 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 17 12:06:50.751722 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 17 12:06:50.751770 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 17 12:06:50.751823 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 17 12:06:50.751871 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 17 12:06:50.751921 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 17 12:06:50.751970 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 17 12:06:50.752018 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 17 12:06:50.752066 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 17 12:06:50.752117 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 17 12:06:50.752168 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 17 12:06:50.752217 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 17 12:06:50.752266 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 17 12:06:50.752318 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 17 12:06:50.754438 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 17 12:06:50.754496 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 17 12:06:50.754553 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 17 12:06:50.754603 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 17 12:06:50.754655 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 17 12:06:50.754708 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 17 12:06:50.754758 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 17 12:06:50.754807 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 17 12:06:50.754857 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 17 12:06:50.754907 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 17 12:06:50.754956 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 17 12:06:50.755007 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 17 12:06:50.755059 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 17 12:06:50.755108 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 17 12:06:50.755160 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 17 12:06:50.755209 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 17 12:06:50.755257 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 17 12:06:50.755306 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 17 12:06:50.755358 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 17 12:06:50.755422 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 17 12:06:50.755475 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 17 12:06:50.755534 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 17 12:06:50.755589 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 17 12:06:50.755639 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 17 12:06:50.755687 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 17 12:06:50.755738 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 17 12:06:50.755787 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 17 12:06:50.755836 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 17 12:06:50.755889 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 17 
12:06:50.755938 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 17 12:06:50.755987 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 17 12:06:50.756038 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 17 12:06:50.756087 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 17 12:06:50.756137 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 17 12:06:50.756187 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 17 12:06:50.756236 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 17 12:06:50.756288 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 17 12:06:50.756338 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 17 12:06:50.757869 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 17 12:06:50.757999 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 17 12:06:50.758010 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jan 17 12:06:50.758017 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jan 17 12:06:50.758023 kernel: ACPI: PCI: Interrupt link LNKB disabled Jan 17 12:06:50.758029 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:06:50.758037 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jan 17 12:06:50.758043 kernel: iommu: Default domain type: Translated Jan 17 12:06:50.758049 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:06:50.758055 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:06:50.758061 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:06:50.758067 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jan 17 12:06:50.758073 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jan 17 12:06:50.758132 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jan 17 12:06:50.758182 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Jan 17 12:06:50.758234 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 12:06:50.758243 kernel: vgaarb: loaded Jan 17 12:06:50.758249 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jan 17 12:06:50.758256 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jan 17 12:06:50.758262 kernel: clocksource: Switched to clocksource tsc-early Jan 17 12:06:50.758268 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:06:50.758274 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:06:50.758280 kernel: pnp: PnP ACPI init Jan 17 12:06:50.758332 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jan 17 12:06:50.758387 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jan 17 12:06:50.758433 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jan 17 12:06:50.758481 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jan 17 12:06:50.758529 kernel: pnp 00:06: [dma 2] Jan 17 12:06:50.758579 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jan 17 12:06:50.758624 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jan 17 12:06:50.758672 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jan 17 12:06:50.758680 kernel: pnp: PnP ACPI: found 8 devices Jan 17 12:06:50.758686 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:06:50.758693 kernel: NET: Registered PF_INET protocol family Jan 17 12:06:50.758699 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:06:50.758705 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 17 12:06:50.758710 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:06:50.758716 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:06:50.758722 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 12:06:50.758730 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 17 12:06:50.758736 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:06:50.758742 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:06:50.758747 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:06:50.758753 kernel: NET: Registered PF_XDP protocol family Jan 17 12:06:50.758804 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jan 17 12:06:50.758856 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 17 12:06:50.758909 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 17 12:06:50.758960 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 17 12:06:50.759009 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 17 12:06:50.759058 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 17 12:06:50.759108 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jan 17 12:06:50.759157 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 17 12:06:50.759209 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 17 12:06:50.759285 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 17 12:06:50.761242 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 17 12:06:50.761303 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 17 12:06:50.761354 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 17 
12:06:50.761442 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 17 12:06:50.761496 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 17 12:06:50.761546 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 17 12:06:50.761596 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 17 12:06:50.761646 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 17 12:06:50.761696 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 17 12:06:50.761745 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 17 12:06:50.761796 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 17 12:06:50.761845 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 17 12:06:50.761894 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jan 17 12:06:50.761944 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jan 17 12:06:50.761993 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jan 17 12:06:50.762042 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762095 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762143 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762193 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762241 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762290 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762339 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762400 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 
17 12:06:50.762452 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762505 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762554 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762603 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762651 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762701 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762750 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762798 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762847 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762898 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.762947 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.762996 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763045 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763094 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763143 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763192 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763241 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763293 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763342 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763416 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763468 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763521 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763571 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763619 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763668 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763715 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763768 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763816 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763864 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.763913 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.763962 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764011 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764060 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764109 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764160 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764210 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764258 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764307 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764355 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764414 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764463 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764511 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764560 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764612 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764661 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Jan 17 12:06:50.764710 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764758 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764807 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764856 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.764904 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.764953 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765002 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765053 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765101 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765150 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765198 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765247 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765296 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765344 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765400 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765449 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765498 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765559 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765609 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765657 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765707 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765756 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765805 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765854 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.765902 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.765950 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.766002 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.766051 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.766099 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.766148 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 17 12:06:50.766197 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 17 12:06:50.766247 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 17 12:06:50.766298 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jan 17 12:06:50.766346 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 17 12:06:50.766403 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 17 12:06:50.766452 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 17 12:06:50.766510 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jan 17 12:06:50.766560 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 17 12:06:50.766609 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 17 12:06:50.766658 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 17 12:06:50.766708 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jan 17 12:06:50.766759 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 17 12:06:50.766808 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 17 12:06:50.766857 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 17 12:06:50.766909 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 17 
12:06:50.766959 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 17 12:06:50.767008 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 17 12:06:50.767057 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 17 12:06:50.767106 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 17 12:06:50.767154 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 17 12:06:50.767203 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 17 12:06:50.767251 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 17 12:06:50.767300 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 17 12:06:50.767351 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 17 12:06:50.767414 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 17 12:06:50.767469 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 17 12:06:50.767521 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 17 12:06:50.767570 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 17 12:06:50.767619 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 17 12:06:50.767671 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 17 12:06:50.767720 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 17 12:06:50.767769 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 17 12:06:50.767819 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 17 12:06:50.767868 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 17 12:06:50.767921 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jan 17 12:06:50.767971 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 17 12:06:50.768020 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 17 12:06:50.768070 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Jan 17 12:06:50.768121 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jan 17 12:06:50.768172 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 17 12:06:50.768221 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 17 12:06:50.768270 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 17 12:06:50.768319 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 17 12:06:50.768505 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 17 12:06:50.768563 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 17 12:06:50.768612 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 17 12:06:50.768660 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 17 12:06:50.768708 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 17 12:06:50.768759 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 17 12:06:50.768808 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 17 12:06:50.768856 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 17 12:06:50.768905 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 17 12:06:50.768953 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 17 12:06:50.769002 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 17 12:06:50.769050 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 17 12:06:50.769098 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 17 12:06:50.769147 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 17 12:06:50.769197 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 17 12:06:50.769245 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 17 12:06:50.769294 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 17 12:06:50.769343 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Jan 17 12:06:50.769399 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 17 12:06:50.769449 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 17 12:06:50.769498 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 17 12:06:50.769546 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 17 12:06:50.769594 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 17 12:06:50.769643 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 17 12:06:50.769694 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 17 12:06:50.769743 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 17 12:06:50.769791 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 17 12:06:50.769841 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 17 12:06:50.769889 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 17 12:06:50.769937 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 17 12:06:50.769986 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 17 12:06:50.770034 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 17 12:06:50.770083 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 17 12:06:50.770133 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 17 12:06:50.770182 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 17 12:06:50.770230 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 17 12:06:50.770278 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 17 12:06:50.770327 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 17 12:06:50.770410 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 17 12:06:50.770462 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 17 
12:06:50.770511 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 17 12:06:50.770559 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 17 12:06:50.770607 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 17 12:06:50.770660 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 17 12:06:50.770710 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 17 12:06:50.770759 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 17 12:06:50.770808 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 17 12:06:50.770857 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 17 12:06:50.770906 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 17 12:06:50.770954 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 17 12:06:50.771004 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 17 12:06:50.771054 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 17 12:06:50.771105 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 17 12:06:50.771153 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 17 12:06:50.771202 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 17 12:06:50.771250 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 17 12:06:50.771298 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 17 12:06:50.771347 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 17 12:06:50.771403 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 17 12:06:50.771452 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 17 12:06:50.771500 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 17 12:06:50.771553 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 17 12:06:50.771604 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Jan 17 12:06:50.771653 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 17 12:06:50.771701 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 17 12:06:50.771750 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 17 12:06:50.771799 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 17 12:06:50.771847 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 17 12:06:50.771895 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 17 12:06:50.771943 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 17 12:06:50.771991 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 17 12:06:50.772042 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 17 12:06:50.772091 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jan 17 12:06:50.772136 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 17 12:06:50.772180 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 17 12:06:50.772223 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jan 17 12:06:50.772266 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jan 17 12:06:50.772314 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jan 17 12:06:50.772359 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jan 17 12:06:50.772414 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 17 12:06:50.772459 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jan 17 12:06:50.772503 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 17 12:06:50.772548 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 17 12:06:50.772592 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jan 17 12:06:50.772636 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jan 17 12:06:50.772686 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jan 17 12:06:50.772734 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jan 17 12:06:50.772778 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jan 17 12:06:50.772827 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jan 17 12:06:50.772871 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jan 17 12:06:50.772915 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jan 17 12:06:50.772963 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jan 17 12:06:50.773008 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jan 17 12:06:50.773054 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jan 17 12:06:50.773102 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jan 17 12:06:50.773146 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jan 17 12:06:50.773194 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jan 17 12:06:50.773239 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 17 12:06:50.773287 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jan 17 12:06:50.773335 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jan 17 12:06:50.773435 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jan 17 12:06:50.773483 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jan 17 12:06:50.773538 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jan 17 12:06:50.773593 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jan 17 12:06:50.773644 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jan 17 12:06:50.773692 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jan 17 12:06:50.773738 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jan 17 12:06:50.773786 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Jan 17 12:06:50.773832 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jan 17 12:06:50.773877 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jan 17 12:06:50.773925 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jan 17 12:06:50.773972 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jan 17 12:06:50.774021 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jan 17 12:06:50.774070 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jan 17 12:06:50.774117 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 17 12:06:50.774165 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jan 17 12:06:50.774211 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 17 12:06:50.774261 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jan 17 12:06:50.774309 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jan 17 12:06:50.774359 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jan 17 12:06:50.774451 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jan 17 12:06:50.774503 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jan 17 12:06:50.774549 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 17 12:06:50.774597 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jan 17 12:06:50.774645 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jan 17 12:06:50.774690 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 17 12:06:50.774738 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jan 17 12:06:50.774784 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jan 17 12:06:50.774828 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jan 17 12:06:50.774876 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Jan 17 12:06:50.774921 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jan 17 12:06:50.774969 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jan 17 12:06:50.775017 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jan 17 12:06:50.775062 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 17 12:06:50.775109 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jan 17 12:06:50.775154 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 17 12:06:50.775202 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jan 17 12:06:50.775250 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jan 17 12:06:50.775299 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jan 17 12:06:50.775344 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jan 17 12:06:50.776442 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jan 17 12:06:50.776500 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 17 12:06:50.776557 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 17 12:06:50.776607 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jan 17 12:06:50.776652 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jan 17 12:06:50.776701 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jan 17 12:06:50.776747 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jan 17 12:06:50.776792 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jan 17 12:06:50.776842 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jan 17 12:06:50.776890 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jan 17 12:06:50.776939 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jan 17 12:06:50.776986 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Jan 17 12:06:50.777034 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jan 17 12:06:50.777081 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jan 17 12:06:50.777129 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jan 17 12:06:50.777175 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jan 17 12:06:50.777226 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jan 17 12:06:50.777272 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jan 17 12:06:50.777321 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jan 17 12:06:50.777893 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 17 12:06:50.777959 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 12:06:50.777970 kernel: PCI: CLS 32 bytes, default 64 Jan 17 12:06:50.777979 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 12:06:50.777986 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 17 12:06:50.777993 kernel: clocksource: Switched to clocksource tsc Jan 17 12:06:50.778000 kernel: Initialise system trusted keyrings Jan 17 12:06:50.778006 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 12:06:50.778012 kernel: Key type asymmetric registered Jan 17 12:06:50.778018 kernel: Asymmetric key parser 'x509' registered Jan 17 12:06:50.778025 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:06:50.778031 kernel: io scheduler mq-deadline registered Jan 17 12:06:50.778039 kernel: io scheduler kyber registered Jan 17 12:06:50.778045 kernel: io scheduler bfq registered Jan 17 12:06:50.778099 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jan 17 12:06:50.778152 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778204 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jan 17 12:06:50.778253 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778304 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jan 17 12:06:50.778354 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778421 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jan 17 12:06:50.778472 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778527 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jan 17 12:06:50.778578 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778628 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jan 17 12:06:50.778677 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778731 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jan 17 12:06:50.778780 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778830 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jan 17 12:06:50.778919 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.778970 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jan 17 12:06:50.779023 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.779074 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jan 17 12:06:50.779123 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.779173 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jan 17 12:06:50.779222 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.779271 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jan 17 12:06:50.779322 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.779724 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jan 17 12:06:50.779784 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.779838 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jan 17 12:06:50.779890 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.780255 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jan 17 12:06:50.780318 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.780383 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jan 17 12:06:50.780440 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.780492 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jan 17 12:06:50.780543 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.780595 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jan 17 12:06:50.780648 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.780699 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jan 17 12:06:50.780749 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.780799 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jan 17 12:06:50.780849 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.780899 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jan 17 12:06:50.780952 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781003 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jan 17 12:06:50.781054 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781104 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jan 17 12:06:50.781154 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781204 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jan 17 12:06:50.781256 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781306 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jan 17 12:06:50.781356 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781423 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jan 17 12:06:50.781473 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781528 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jan 17 12:06:50.781582 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781633 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jan 17 12:06:50.781683 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781733 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jan 17 12:06:50.781784 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781836 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jan 17 12:06:50.781885 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.781935 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jan 17 12:06:50.782004 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.782056 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jan 17 12:06:50.782106 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 17 12:06:50.782118 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Jan 17 12:06:50.782125 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:06:50.782131 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:06:50.782138 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jan 17 12:06:50.782144 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:06:50.782150 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:06:50.782205 kernel: rtc_cmos 00:01: registered as rtc0 Jan 17 12:06:50.782267 kernel: rtc_cmos 00:01: setting system clock to 2025-01-17T12:06:50 UTC (1737115610) Jan 17 12:06:50.782313 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jan 17 12:06:50.782322 kernel: intel_pstate: CPU model not supported Jan 17 12:06:50.782329 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:06:50.782335 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:06:50.782341 kernel: Segment Routing with IPv6 Jan 17 12:06:50.782347 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:06:50.782354 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:06:50.782360 kernel: Key type dns_resolver registered Jan 17 12:06:50.782430 kernel: IPI shorthand broadcast: enabled Jan 17 12:06:50.782437 kernel: sched_clock: Marking stable (893255897, 222048307)->(1173911812, -58607608) Jan 17 12:06:50.782443 kernel: registered taskstats version 1 Jan 17 12:06:50.782450 kernel: Loading compiled-in X.509 certificates Jan 17 12:06:50.782456 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:06:50.782462 kernel: Key type .fscrypt registered Jan 17 12:06:50.782468 kernel: Key type fscrypt-provisioning registered Jan 17 12:06:50.782475 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 12:06:50.782481 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:06:50.782489 kernel: ima: No architecture policies found Jan 17 12:06:50.782495 kernel: clk: Disabling unused clocks Jan 17 12:06:50.782502 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:06:50.782509 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:06:50.782520 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:06:50.782529 kernel: Run /init as init process Jan 17 12:06:50.782536 kernel: with arguments: Jan 17 12:06:50.782542 kernel: /init Jan 17 12:06:50.782548 kernel: with environment: Jan 17 12:06:50.782556 kernel: HOME=/ Jan 17 12:06:50.782562 kernel: TERM=linux Jan 17 12:06:50.782569 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:06:50.782577 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:06:50.782585 systemd[1]: Detected virtualization vmware. Jan 17 12:06:50.782592 systemd[1]: Detected architecture x86-64. Jan 17 12:06:50.782598 systemd[1]: Running in initrd. Jan 17 12:06:50.782604 systemd[1]: No hostname configured, using default hostname. Jan 17 12:06:50.782612 systemd[1]: Hostname set to . Jan 17 12:06:50.782619 systemd[1]: Initializing machine ID from random generator. Jan 17 12:06:50.782626 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:06:50.782632 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:06:50.782638 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 17 12:06:50.782645 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:06:50.782652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:06:50.782659 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:06:50.782666 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:06:50.782673 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:06:50.782680 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:06:50.782687 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:06:50.782693 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:06:50.782700 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:06:50.782708 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:06:50.782714 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:06:50.782721 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:06:50.782727 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:06:50.782734 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:06:50.782740 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:06:50.782747 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:06:50.782753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:06:50.782760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:06:50.782768 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 17 12:06:50.782774 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:06:50.782780 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:06:50.782787 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:06:50.782794 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:06:50.782800 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:06:50.782807 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:06:50.782813 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:06:50.782819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:06:50.782827 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:06:50.782847 systemd-journald[216]: Collecting audit messages is disabled. Jan 17 12:06:50.782863 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:06:50.782871 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:06:50.782878 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:06:50.782885 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:06:50.782892 kernel: Bridge firewalling registered Jan 17 12:06:50.782898 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:06:50.782906 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:06:50.782913 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:50.782920 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:06:50.782927 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 17 12:06:50.782933 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:06:50.782940 systemd-journald[216]: Journal started Jan 17 12:06:50.782956 systemd-journald[216]: Runtime Journal (/run/log/journal/7d84556015e14ab08d20cf4d3e5cd876) is 4.8M, max 38.6M, 33.8M free. Jan 17 12:06:50.741494 systemd-modules-load[217]: Inserted module 'overlay' Jan 17 12:06:50.761259 systemd-modules-load[217]: Inserted module 'br_netfilter' Jan 17 12:06:50.785046 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:06:50.789456 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:06:50.790268 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:06:50.790476 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:06:50.791869 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:06:50.792194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:06:50.797725 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:06:50.799468 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:06:50.803689 dracut-cmdline[245]: dracut-dracut-053 Jan 17 12:06:50.806034 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:06:50.818853 systemd-resolved[250]: Positive Trust Anchors: Jan 17 12:06:50.818863 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:06:50.818885 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:06:50.820554 systemd-resolved[250]: Defaulting to hostname 'linux'. Jan 17 12:06:50.821171 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:06:50.821319 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:06:50.849377 kernel: SCSI subsystem initialized Jan 17 12:06:50.855377 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:06:50.862378 kernel: iscsi: registered transport (tcp) Jan 17 12:06:50.876387 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:06:50.876433 kernel: QLogic iSCSI HBA Driver Jan 17 12:06:50.895849 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:06:50.901477 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:06:50.917338 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:06:50.917398 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:06:50.917408 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:06:50.948412 kernel: raid6: avx2x4 gen() 52183 MB/s Jan 17 12:06:50.965408 kernel: raid6: avx2x2 gen() 52488 MB/s Jan 17 12:06:50.982611 kernel: raid6: avx2x1 gen() 46198 MB/s Jan 17 12:06:50.982649 kernel: raid6: using algorithm avx2x2 gen() 52488 MB/s Jan 17 12:06:51.000649 kernel: raid6: .... xor() 31108 MB/s, rmw enabled Jan 17 12:06:51.000703 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:06:51.014433 kernel: xor: automatically using best checksumming function avx Jan 17 12:06:51.112389 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:06:51.117545 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:06:51.122483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:06:51.129682 systemd-udevd[434]: Using default interface naming scheme 'v255'. Jan 17 12:06:51.132131 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:06:51.137524 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:06:51.142866 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation Jan 17 12:06:51.158594 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:06:51.164440 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:06:51.231811 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:06:51.235466 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:06:51.245350 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:06:51.246161 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 17 12:06:51.246286 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:06:51.247521 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:06:51.251459 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:06:51.258153 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:06:51.296441 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jan 17 12:06:51.299423 kernel: vmw_pvscsi: using 64bit dma Jan 17 12:06:51.299446 kernel: vmw_pvscsi: max_id: 16 Jan 17 12:06:51.299455 kernel: vmw_pvscsi: setting ring_pages to 8 Jan 17 12:06:51.305053 kernel: vmw_pvscsi: enabling reqCallThreshold Jan 17 12:06:51.305071 kernel: vmw_pvscsi: driver-based request coalescing enabled Jan 17 12:06:51.305079 kernel: vmw_pvscsi: using MSI-X Jan 17 12:06:51.311602 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jan 17 12:06:51.315837 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jan 17 12:06:51.320231 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jan 17 12:06:51.323100 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jan 17 12:06:51.326405 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jan 17 12:06:51.343805 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:06:51.343816 kernel: libata version 3.00 loaded. Jan 17 12:06:51.343823 kernel: ata_piix 0000:00:07.1: version 2.13 Jan 17 12:06:51.343899 kernel: scsi host1: ata_piix Jan 17 12:06:51.343968 kernel: scsi host2: ata_piix Jan 17 12:06:51.344027 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jan 17 12:06:51.344097 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jan 17 12:06:51.344105 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jan 17 12:06:51.344112 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 17 12:06:51.344120 kernel: AES CTR mode by8 optimization enabled Jan 17 12:06:51.344334 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:06:51.344423 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:06:51.344616 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:06:51.344719 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:06:51.344784 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:51.344899 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:06:51.351071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:06:51.363493 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:51.368457 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:06:51.374809 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:06:51.509386 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jan 17 12:06:51.514330 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jan 17 12:06:51.517380 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jan 17 12:06:51.526727 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jan 17 12:06:51.531723 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 12:06:51.531795 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jan 17 12:06:51.531855 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jan 17 12:06:51.531914 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jan 17 12:06:51.531973 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:06:51.531982 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 12:06:51.533722 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jan 17 12:06:51.550168 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 12:06:51.550180 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 17 12:06:51.578420 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (492) Jan 17 12:06:51.582784 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jan 17 12:06:51.586465 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (490) Jan 17 12:06:51.586342 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jan 17 12:06:51.589065 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jan 17 12:06:51.591487 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jan 17 12:06:51.591784 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. 
Jan 17 12:06:51.597514 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:06:51.621380 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:06:51.626396 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:06:52.631377 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:06:52.632081 disk-uuid[593]: The operation has completed successfully. Jan 17 12:06:52.660980 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:06:52.661052 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:06:52.669455 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:06:52.671542 sh[613]: Success Jan 17 12:06:52.680414 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 12:06:52.724529 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:06:52.736307 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:06:52.737644 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:06:52.753389 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:06:52.753427 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:06:52.753440 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:06:52.753675 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:06:52.754493 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:06:52.762387 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 12:06:52.763944 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:06:52.771501 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jan 17 12:06:52.773172 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 17 12:06:52.791438 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:06:52.791482 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:06:52.791491 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:06:52.796438 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:06:52.801073 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:06:52.802392 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:06:52.804702 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:06:52.811554 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:06:52.844109 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 17 12:06:52.849570 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 17 12:06:52.885991 ignition[675]: Ignition 2.19.0 Jan 17 12:06:52.886972 ignition[675]: Stage: fetch-offline Jan 17 12:06:52.887003 ignition[675]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:52.887009 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 17 12:06:52.887065 ignition[675]: parsed url from cmdline: "" Jan 17 12:06:52.887067 ignition[675]: no config URL provided Jan 17 12:06:52.887070 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:06:52.887074 ignition[675]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:06:52.887574 ignition[675]: config successfully fetched Jan 17 12:06:52.887591 ignition[675]: parsing config with SHA512: bbb66fdfd44918a2d21261758adb7418d75c35ad9e3d50b732a3000a820c8520ab63976c9f3311150810323654e0a8af7e847b6b374d66c3216060d376a95ae4 Jan 17 12:06:52.890263 unknown[675]: fetched base config from "system" Jan 17 12:06:52.890268 unknown[675]: fetched user config from "vmware" Jan 17 12:06:52.890794 ignition[675]: fetch-offline: fetch-offline passed Jan 17 12:06:52.890836 ignition[675]: Ignition finished successfully Jan 17 12:06:52.891564 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:06:52.912203 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:06:52.917478 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:06:52.928799 systemd-networkd[808]: lo: Link UP Jan 17 12:06:52.928805 systemd-networkd[808]: lo: Gained carrier Jan 17 12:06:52.929503 systemd-networkd[808]: Enumeration completed Jan 17 12:06:52.929554 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:06:52.929701 systemd[1]: Reached target network.target - Network. Jan 17 12:06:52.929758 systemd-networkd[808]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. 
Jan 17 12:06:52.933778 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jan 17 12:06:52.933878 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jan 17 12:06:52.929791 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 12:06:52.933289 systemd-networkd[808]: ens192: Link UP Jan 17 12:06:52.933291 systemd-networkd[808]: ens192: Gained carrier Jan 17 12:06:52.935162 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:06:52.941535 ignition[810]: Ignition 2.19.0 Jan 17 12:06:52.941542 ignition[810]: Stage: kargs Jan 17 12:06:52.941694 ignition[810]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:52.941703 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 17 12:06:52.943276 ignition[810]: kargs: kargs passed Jan 17 12:06:52.943303 ignition[810]: Ignition finished successfully Jan 17 12:06:52.944440 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:06:52.952476 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:06:52.959541 ignition[817]: Ignition 2.19.0 Jan 17 12:06:52.959548 ignition[817]: Stage: disks Jan 17 12:06:52.959647 ignition[817]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:52.959653 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 17 12:06:52.960225 ignition[817]: disks: disks passed Jan 17 12:06:52.960254 ignition[817]: Ignition finished successfully Jan 17 12:06:52.961130 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:06:52.961477 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:06:52.961734 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:06:52.961981 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 17 12:06:52.962199 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:06:52.962444 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:06:52.966456 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:06:52.976144 systemd-fsck[825]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 12:06:52.977040 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:06:52.981436 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:06:53.037403 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:06:53.037445 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:06:53.037954 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:06:53.046410 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:06:53.047942 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:06:53.048199 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:06:53.048223 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:06:53.048237 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:06:53.050859 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:06:53.051604 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 17 12:06:53.056375 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (833) Jan 17 12:06:53.059607 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:06:53.059625 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:06:53.059633 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:06:53.064643 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:06:53.064968 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:06:53.080361 initrd-setup-root[857]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:06:53.082679 initrd-setup-root[864]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:06:53.084827 initrd-setup-root[871]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:06:53.086872 initrd-setup-root[878]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:06:53.136720 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:06:53.140450 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:06:53.142899 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:06:53.145371 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:06:53.158514 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:06:53.159020 ignition[946]: INFO : Ignition 2.19.0 Jan 17 12:06:53.159020 ignition[946]: INFO : Stage: mount Jan 17 12:06:53.159020 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:53.160029 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 17 12:06:53.160029 ignition[946]: INFO : mount: mount passed Jan 17 12:06:53.160029 ignition[946]: INFO : Ignition finished successfully Jan 17 12:06:53.160927 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jan 17 12:06:53.165426 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:06:53.750753 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:06:53.755501 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:06:53.764379 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (957) Jan 17 12:06:53.764410 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:06:53.764419 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:06:53.765969 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:06:53.770374 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:06:53.770771 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:06:53.783557 ignition[974]: INFO : Ignition 2.19.0 Jan 17 12:06:53.784417 ignition[974]: INFO : Stage: files Jan 17 12:06:53.784417 ignition[974]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:53.784417 ignition[974]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 17 12:06:53.784727 ignition[974]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:06:53.785238 ignition[974]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:06:53.785238 ignition[974]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:06:53.787288 ignition[974]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:06:53.787431 ignition[974]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:06:53.787562 unknown[974]: wrote ssh authorized keys file for user: core Jan 17 12:06:53.787760 ignition[974]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:06:53.789057 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:06:53.789272 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:06:53.827376 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:06:53.902534 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:06:53.902534 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:06:53.902955 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:06:53.904234 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:06:53.904234 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:06:53.904234 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:06:53.904234 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:06:53.904234 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 17 12:06:54.396387 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 12:06:54.574999 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:06:54.575265 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 17 12:06:54.575517 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 17 12:06:54.575517 ignition[974]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 17 12:06:54.582085 ignition[974]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 12:06:54.786642 ignition[974]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:06:54.789826 ignition[974]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:06:54.789826 ignition[974]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 12:06:54.789826 ignition[974]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:06:54.789826 ignition[974]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:06:54.789826 ignition[974]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:06:54.790934 ignition[974]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:06:54.790934 ignition[974]: INFO : files: files passed Jan 17 12:06:54.790934 ignition[974]: INFO : Ignition finished successfully Jan 17 12:06:54.790998 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 12:06:54.795479 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:06:54.797031 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:06:54.797590 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:06:54.797644 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:06:54.803790 initrd-setup-root-after-ignition[1005]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:06:54.803790 initrd-setup-root-after-ignition[1005]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:06:54.805038 initrd-setup-root-after-ignition[1009]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:06:54.805685 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:06:54.806212 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:06:54.806890 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:06:54.826702 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:06:54.826762 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:06:54.827059 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:06:54.827205 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:06:54.827446 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:06:54.827988 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:06:54.850546 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:06:54.856495 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jan 17 12:06:54.862965 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:06:54.863156 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:06:54.863333 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:06:54.863552 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:06:54.863630 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:06:54.863966 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:06:54.864182 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:06:54.864484 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:06:54.864612 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:06:54.864829 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:06:54.865012 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:06:54.865215 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:06:54.865617 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:06:54.865816 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:06:54.866002 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:06:54.866182 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:06:54.866255 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:06:54.866517 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:06:54.866769 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:06:54.866942 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:06:54.866988 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 17 12:06:54.867124 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 12:06:54.867184 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:06:54.867446 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 12:06:54.867509 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:06:54.867748 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 12:06:54.867906 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 12:06:54.870387 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:06:54.870558 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 12:06:54.870762 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 12:06:54.870944 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 12:06:54.871015 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:06:54.871228 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 12:06:54.871274 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:06:54.871524 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 12:06:54.871589 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:06:54.871866 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 12:06:54.871924 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 12:06:54.878507 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 12:06:54.880497 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 12:06:54.880603 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 12:06:54.880714 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:06:54.880915 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 12:06:54.880996 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:06:54.883528 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 12:06:54.884194 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 12:06:54.888043 ignition[1029]: INFO : Ignition 2.19.0
Jan 17 12:06:54.888976 ignition[1029]: INFO : Stage: umount
Jan 17 12:06:54.888976 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:06:54.888976 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jan 17 12:06:54.889855 ignition[1029]: INFO : umount: umount passed
Jan 17 12:06:54.890015 ignition[1029]: INFO : Ignition finished successfully
Jan 17 12:06:54.890975 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 12:06:54.891034 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 12:06:54.891498 systemd[1]: Stopped target network.target - Network.
Jan 17 12:06:54.891723 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 12:06:54.891751 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 12:06:54.892038 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 12:06:54.892062 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 12:06:54.892306 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 12:06:54.892327 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 12:06:54.892706 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 12:06:54.892728 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 12:06:54.893058 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 12:06:54.893574 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 12:06:54.896427 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 12:06:54.898063 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 12:06:54.898137 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 12:06:54.898697 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 12:06:54.898727 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:06:54.901538 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 12:06:54.901670 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 12:06:54.901698 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:06:54.902222 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jan 17 12:06:54.902245 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jan 17 12:06:54.902405 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:06:54.902614 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 12:06:54.902664 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 12:06:54.905602 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 12:06:54.905650 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:06:54.905864 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 12:06:54.905888 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:06:54.906002 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 12:06:54.906023 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:06:54.914717 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 12:06:54.914791 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:06:54.915192 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 12:06:54.915224 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:06:54.915356 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 12:06:54.915385 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:06:54.915492 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 12:06:54.915523 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:06:54.915698 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 12:06:54.915719 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:06:54.915856 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:06:54.915876 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:06:54.917581 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 12:06:54.917823 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 12:06:54.917852 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:06:54.917967 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 12:06:54.917989 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:06:54.918099 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 12:06:54.918120 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:06:54.918226 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:06:54.918246 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:06:54.918534 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 12:06:54.918581 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 12:06:54.922192 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 12:06:54.922245 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 12:06:55.007913 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 12:06:55.007978 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 12:06:55.008419 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 12:06:55.008553 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 12:06:55.008583 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 12:06:55.013452 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 12:06:55.022313 systemd[1]: Switching root.
Jan 17 12:06:55.047491 systemd-journald[216]: Journal stopped
Jan 17 12:06:56.331160 systemd-journald[216]: Received SIGTERM from PID 1 (systemd).
Jan 17 12:06:56.331183 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 12:06:56.331191 kernel: SELinux: policy capability open_perms=1
Jan 17 12:06:56.331196 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 12:06:56.331202 kernel: SELinux: policy capability always_check_network=0
Jan 17 12:06:56.331207 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 12:06:56.331215 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 12:06:56.331221 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 12:06:56.331226 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 12:06:56.331232 systemd[1]: Successfully loaded SELinux policy in 31.494ms.
Jan 17 12:06:56.331238 kernel: audit: type=1403 audit(1737115615.861:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 12:06:56.331244 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.346ms.
Jan 17 12:06:56.331251 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:06:56.331259 systemd[1]: Detected virtualization vmware.
Jan 17 12:06:56.331265 systemd[1]: Detected architecture x86-64.
Jan 17 12:06:56.331272 systemd[1]: Detected first boot.
Jan 17 12:06:56.331278 systemd[1]: Initializing machine ID from random generator.
Jan 17 12:06:56.331286 zram_generator::config[1073]: No configuration found.
Jan 17 12:06:56.331293 systemd[1]: Populated /etc with preset unit settings.
Jan 17 12:06:56.331300 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 17 12:06:56.331307 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Jan 17 12:06:56.331313 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 12:06:56.331320 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 12:06:56.331326 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 12:06:56.331334 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 12:06:56.331341 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 12:06:56.331348 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 12:06:56.331355 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 12:06:56.331984 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 12:06:56.331999 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 12:06:56.332007 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 12:06:56.332015 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 12:06:56.332022 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:06:56.332029 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:06:56.332036 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 12:06:56.332042 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 12:06:56.332049 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 12:06:56.332055 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:06:56.332062 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 12:06:56.332070 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:06:56.336320 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 12:06:56.336338 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 12:06:56.336346 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:06:56.336353 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 12:06:56.336360 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:06:56.336373 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:06:56.336388 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:06:56.336397 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:06:56.336404 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 12:06:56.336411 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 12:06:56.336418 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:06:56.336425 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:06:56.336433 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:06:56.336440 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 12:06:56.336447 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 12:06:56.336454 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 12:06:56.336461 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 12:06:56.336469 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:06:56.336476 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 12:06:56.336483 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 12:06:56.336490 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 12:06:56.336498 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 12:06:56.336505 systemd[1]: Reached target machines.target - Containers.
Jan 17 12:06:56.336512 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 12:06:56.336519 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Jan 17 12:06:56.336526 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:06:56.336533 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 12:06:56.336540 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:06:56.336547 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:06:56.336554 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:06:56.336561 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 12:06:56.336568 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:06:56.336575 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 12:06:56.336582 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 12:06:56.336589 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 12:06:56.336596 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 12:06:56.336603 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 12:06:56.336611 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:06:56.336618 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:06:56.336625 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 12:06:56.336632 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 12:06:56.336639 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:06:56.336661 systemd-journald[1167]: Collecting audit messages is disabled.
Jan 17 12:06:56.336679 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 12:06:56.336686 systemd[1]: Stopped verity-setup.service.
Jan 17 12:06:56.336693 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:06:56.336701 systemd-journald[1167]: Journal started
Jan 17 12:06:56.336716 systemd-journald[1167]: Runtime Journal (/run/log/journal/d1c32cb3ea4340f1b0341d9c38fb4436) is 4.8M, max 38.6M, 33.8M free.
Jan 17 12:06:56.340036 kernel: loop: module loaded
Jan 17 12:06:56.340069 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 12:06:56.193154 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 12:06:56.206558 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 17 12:06:56.206747 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 12:06:56.340585 jq[1140]: true
Jan 17 12:06:56.343517 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:06:56.342150 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 12:06:56.342288 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 12:06:56.344519 kernel: fuse: init (API version 7.39)
Jan 17 12:06:56.342846 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 12:06:56.342982 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 12:06:56.343118 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 12:06:56.343323 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 12:06:56.351720 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:06:56.352024 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 12:06:56.352100 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 12:06:56.354549 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:06:56.354641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:06:56.354867 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:06:56.354945 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:06:56.355163 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 12:06:56.355239 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 12:06:56.355475 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:06:56.355556 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:06:56.355777 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:06:56.355994 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 12:06:56.356220 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 12:06:56.370836 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 12:06:56.371905 jq[1182]: true
Jan 17 12:06:56.374397 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 12:06:56.376511 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 12:06:56.377230 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 12:06:56.377251 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:06:56.380014 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 12:06:56.381372 kernel: ACPI: bus type drm_connector registered
Jan 17 12:06:56.384509 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 12:06:56.386553 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 12:06:56.386708 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:06:56.390045 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 12:06:56.391543 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 12:06:56.391673 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:06:56.393523 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 12:06:56.393652 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:06:56.394552 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:06:56.400477 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 12:06:56.401505 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:06:56.403524 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:06:56.403645 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:06:56.403925 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 12:06:56.404073 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 12:06:56.418414 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 12:06:56.440353 systemd-journald[1167]: Time spent on flushing to /var/log/journal/d1c32cb3ea4340f1b0341d9c38fb4436 is 44.756ms for 1834 entries.
Jan 17 12:06:56.440353 systemd-journald[1167]: System Journal (/var/log/journal/d1c32cb3ea4340f1b0341d9c38fb4436) is 8.0M, max 584.8M, 576.8M free.
Jan 17 12:06:56.492513 systemd-journald[1167]: Received client request to flush runtime journal.
Jan 17 12:06:56.492543 kernel: loop0: detected capacity change from 0 to 142488
Jan 17 12:06:56.454626 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 12:06:56.455061 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 12:06:56.465028 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 12:06:56.470614 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:06:56.495660 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 12:06:56.498871 ignition[1197]: Ignition 2.19.0
Jan 17 12:06:56.499174 ignition[1197]: deleting config from guestinfo properties
Jan 17 12:06:56.515667 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 12:06:56.513613 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Jan 17 12:06:56.509660 ignition[1197]: Successfully deleted config
Jan 17 12:06:56.526049 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 12:06:56.528699 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 12:06:56.533456 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Jan 17 12:06:56.533467 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Jan 17 12:06:56.545229 kernel: loop1: detected capacity change from 0 to 140768
Jan 17 12:06:56.544277 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:06:56.550498 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 12:06:56.562895 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:06:56.573635 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 12:06:56.586942 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 12:06:56.587625 udevadm[1236]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 12:06:56.591392 kernel: loop2: detected capacity change from 0 to 210664
Jan 17 12:06:56.595595 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:06:56.607570 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jan 17 12:06:56.607581 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jan 17 12:06:56.611659 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:06:56.633566 kernel: loop3: detected capacity change from 0 to 2976
Jan 17 12:06:56.678996 kernel: loop4: detected capacity change from 0 to 142488
Jan 17 12:06:56.706527 kernel: loop5: detected capacity change from 0 to 140768
Jan 17 12:06:56.733382 kernel: loop6: detected capacity change from 0 to 210664
Jan 17 12:06:56.766386 kernel: loop7: detected capacity change from 0 to 2976
Jan 17 12:06:56.790778 (sd-merge)[1244]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Jan 17 12:06:56.792510 (sd-merge)[1244]: Merged extensions into '/usr'.
Jan 17 12:06:56.796693 systemd[1]: Reloading requested from client PID 1209 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 12:06:56.796781 systemd[1]: Reloading...
Jan 17 12:06:56.845766 zram_generator::config[1268]: No configuration found.
Jan 17 12:06:56.907054 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 17 12:06:56.924535 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:06:56.951864 systemd[1]: Reloading finished in 154 ms.
Jan 17 12:06:56.977392 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 12:06:56.983508 systemd[1]: Starting ensure-sysext.service...
Jan 17 12:06:56.985443 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:06:56.995698 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 12:06:56.995900 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 12:06:56.996397 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 12:06:56.996567 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
Jan 17 12:06:56.996608 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
Jan 17 12:06:57.006423 systemd[1]: Reloading requested from client PID 1327 ('systemctl') (unit ensure-sysext.service)...
Jan 17 12:06:57.006433 systemd[1]: Reloading...
Jan 17 12:06:57.006746 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:06:57.006749 systemd-tmpfiles[1328]: Skipping /boot
Jan 17 12:06:57.019121 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:06:57.019127 systemd-tmpfiles[1328]: Skipping /boot
Jan 17 12:06:57.071382 zram_generator::config[1353]: No configuration found.
Jan 17 12:06:57.206181 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jan 17 12:06:57.221914 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:06:57.250457 systemd[1]: Reloading finished in 241 ms.
Jan 17 12:06:57.262596 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:06:57.265811 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:06:57.281506 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 12:06:57.282843 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 12:06:57.285395 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:06:57.288222 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 12:06:57.290668 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:06:57.292051 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:06:57.293516 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:06:57.295502 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:06:57.296480 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:06:57.296558 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:06:57.297065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:06:57.297999 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:06:57.300764 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:06:57.300856 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:06:57.301526 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:06:57.304175 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:06:57.307501 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:06:57.310507 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:06:57.310671 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:06:57.310744 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:06:57.311153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:06:57.311708 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:06:57.312227 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:06:57.312710 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:06:57.320937 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:06:57.326831 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:06:57.327703 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:06:57.330525 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:06:57.331493 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:06:57.333084 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:06:57.333541 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:06:57.335404 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:06:57.335828 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:06:57.336975 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:06:57.338670 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:06:57.338759 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:06:57.340941 systemd[1]: Finished ensure-sysext.service. Jan 17 12:06:57.343069 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:06:57.349572 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:06:57.351079 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:06:57.351242 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:06:57.357150 augenrules[1452]: No rules Jan 17 12:06:57.357719 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:06:57.359097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:06:57.359454 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:06:57.361161 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 17 12:06:57.368742 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:06:57.375025 ldconfig[1204]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:06:57.377824 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:06:57.385171 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:06:57.387402 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:06:57.387595 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:06:57.395525 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:06:57.407710 systemd-udevd[1462]: Using default interface naming scheme 'v255'. Jan 17 12:06:57.413931 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:06:57.446630 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:06:57.451446 systemd-resolved[1418]: Positive Trust Anchors: Jan 17 12:06:57.451658 systemd-resolved[1418]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:06:57.451996 systemd-resolved[1418]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:06:57.456235 systemd-resolved[1418]: Defaulting to hostname 'linux'. 
Jan 17 12:06:57.457505 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:06:57.457981 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:06:57.458493 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:06:57.461541 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:06:57.461732 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:06:57.488780 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:06:57.489075 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:06:57.492690 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:06:57.517131 systemd-networkd[1473]: lo: Link UP Jan 17 12:06:57.517141 systemd-networkd[1473]: lo: Gained carrier Jan 17 12:06:57.518723 systemd-networkd[1473]: Enumeration completed Jan 17 12:06:57.518791 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:06:57.519027 systemd[1]: Reached target network.target - Network. Jan 17 12:06:57.520700 systemd-networkd[1473]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jan 17 12:06:57.523815 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jan 17 12:06:57.524025 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jan 17 12:06:57.524504 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:06:57.525133 systemd-networkd[1473]: ens192: Link UP Jan 17 12:06:57.525244 systemd-networkd[1473]: ens192: Gained carrier Jan 17 12:06:57.529866 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection. 
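In the block above, systemd-networkd configures `ens192` from `/etc/systemd/network/00-vmware.network`. The file's contents are not shown in the log; a typical `.network` unit of that shape, sketched here with assumed DHCP settings rather than the actual Flatcar file, would be:

```ini
# /etc/systemd/network/00-vmware.network  (illustrative; actual contents not in this log)
[Match]
Name=ens192

[Network]
DHCP=yes
```

Matching could equally be done on `Driver=vmxnet3` rather than the interface name; the log only shows which file was selected, not its match criteria.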
Jan 17 12:06:57.539378 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 12:06:57.548411 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:06:57.554376 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1478) Jan 17 12:06:57.600045 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jan 17 12:06:57.603504 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:06:57.620108 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:06:57.634444 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jan 17 12:06:57.643554 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jan 17 12:06:57.648482 kernel: Guest personality initialized and is active Jan 17 12:06:57.651375 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 17 12:06:57.651405 kernel: Initialized host personality Jan 17 12:06:57.653379 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:06:57.667966 (udev-worker)[1487]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jan 17 12:06:57.672380 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:06:57.674381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:06:57.688686 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:06:57.693514 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:06:57.718198 lvm[1515]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:06:57.739406 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Jan 17 12:06:57.739677 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:06:57.743681 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:06:57.746190 lvm[1517]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:06:57.770698 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:57.771028 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:06:57.771914 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:06:57.772105 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:06:57.772253 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:06:57.772506 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:06:57.772674 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:06:57.772794 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:06:57.772922 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:06:57.772947 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:06:57.773048 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:06:57.773711 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:06:57.775022 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:06:57.778629 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:06:57.779192 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:06:57.779349 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 17 12:06:57.779458 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:06:57.779574 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:06:57.779594 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:06:57.780512 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:06:57.781542 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:06:57.784534 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:06:57.786469 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:06:57.788448 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:06:57.791224 jq[1526]: false Jan 17 12:06:57.790484 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:06:57.791600 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:06:57.793647 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:06:57.796011 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:06:57.799511 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:06:57.799853 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:06:57.800316 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:06:57.801524 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 17 12:06:57.804448 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:06:57.806475 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jan 17 12:06:57.814703 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:06:57.814833 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:06:57.817591 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:06:57.817723 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:06:57.823918 update_engine[1533]: I20250117 12:06:57.823869 1533 main.cc:92] Flatcar Update Engine starting Jan 17 12:06:57.827672 dbus-daemon[1525]: [system] SELinux support is enabled Jan 17 12:06:57.828176 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:06:57.829127 update_engine[1533]: I20250117 12:06:57.829099 1533 update_check_scheduler.cc:74] Next update check in 8m28s Jan 17 12:06:57.830656 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:06:57.830677 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:06:57.832422 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:06:57.833339 jq[1534]: true Jan 17 12:06:57.832437 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 17 12:06:57.849795 (ntainerd)[1550]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:06:57.853194 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Jan 17 12:06:57.853942 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:06:57.854104 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:06:57.857209 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:06:57.861777 extend-filesystems[1527]: Found loop4 Jan 17 12:06:57.861777 extend-filesystems[1527]: Found loop5 Jan 17 12:06:57.861777 extend-filesystems[1527]: Found loop6 Jan 17 12:06:57.861777 extend-filesystems[1527]: Found loop7 Jan 17 12:06:57.861777 extend-filesystems[1527]: Found sda Jan 17 12:06:57.861777 extend-filesystems[1527]: Found sda1 Jan 17 12:06:57.861777 extend-filesystems[1527]: Found sda2 Jan 17 12:06:57.861777 extend-filesystems[1527]: Found sda3 Jan 17 12:06:57.861777 extend-filesystems[1527]: Found usr Jan 17 12:06:57.861777 extend-filesystems[1527]: Found sda4 Jan 17 12:06:57.861777 extend-filesystems[1527]: Found sda6 Jan 17 12:06:57.861777 extend-filesystems[1527]: Found sda7 Jan 17 12:06:57.861777 extend-filesystems[1527]: Found sda9 Jan 17 12:06:57.861777 extend-filesystems[1527]: Checking size of /dev/sda9 Jan 17 12:06:57.866428 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Jan 17 12:06:57.871539 jq[1554]: true Jan 17 12:06:57.876545 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:06:57.880146 tar[1537]: linux-amd64/helm Jan 17 12:06:57.891866 extend-filesystems[1527]: Old size kept for /dev/sda9 Jan 17 12:06:57.892057 extend-filesystems[1527]: Found sr0 Jan 17 12:06:57.893106 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:06:57.893217 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 17 12:06:57.894516 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Jan 17 12:06:57.916873 systemd-logind[1532]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:06:57.919413 systemd-logind[1532]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:06:57.923465 systemd-logind[1532]: New seat seat0. Jan 17 12:06:57.925051 unknown[1555]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jan 17 12:06:57.927087 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:06:57.928542 unknown[1555]: Core dump limit set to -1 Jan 17 12:06:57.943130 kernel: NET: Registered PF_VSOCK protocol family Jan 17 12:06:57.968788 bash[1587]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:06:57.966830 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:06:57.968658 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 12:06:57.996814 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1484) Jan 17 12:06:58.050150 locksmithd[1562]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:06:58.214084 sshd_keygen[1553]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:06:58.227804 containerd[1550]: time="2025-01-17T12:06:58.227760641Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:06:58.247841 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:06:58.256097 containerd[1550]: time="2025-01-17T12:06:58.255795226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:58.255933 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 17 12:06:58.258042 containerd[1550]: time="2025-01-17T12:06:58.257971579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:06:58.258042 containerd[1550]: time="2025-01-17T12:06:58.257997236Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:06:58.258042 containerd[1550]: time="2025-01-17T12:06:58.258008157Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:06:58.258103 containerd[1550]: time="2025-01-17T12:06:58.258088749Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:06:58.258103 containerd[1550]: time="2025-01-17T12:06:58.258099077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:58.258273 containerd[1550]: time="2025-01-17T12:06:58.258132329Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:06:58.258273 containerd[1550]: time="2025-01-17T12:06:58.258146590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:58.258273 containerd[1550]: time="2025-01-17T12:06:58.258243153Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:06:58.258273 containerd[1550]: time="2025-01-17T12:06:58.258252013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:58.258273 containerd[1550]: time="2025-01-17T12:06:58.258259279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:06:58.258273 containerd[1550]: time="2025-01-17T12:06:58.258264833Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:58.258357 containerd[1550]: time="2025-01-17T12:06:58.258307389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:58.259538 containerd[1550]: time="2025-01-17T12:06:58.259444534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:58.259538 containerd[1550]: time="2025-01-17T12:06:58.259512832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:06:58.259538 containerd[1550]: time="2025-01-17T12:06:58.259522216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:06:58.259586 containerd[1550]: time="2025-01-17T12:06:58.259564649Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 17 12:06:58.259600 containerd[1550]: time="2025-01-17T12:06:58.259590903Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:06:58.262489 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:06:58.262850 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:06:58.264511 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:06:58.270701 containerd[1550]: time="2025-01-17T12:06:58.270679033Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:06:58.270803 containerd[1550]: time="2025-01-17T12:06:58.270792192Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:06:58.270921 containerd[1550]: time="2025-01-17T12:06:58.270911061Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:06:58.270969 containerd[1550]: time="2025-01-17T12:06:58.270959235Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:06:58.271017 containerd[1550]: time="2025-01-17T12:06:58.271006826Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:06:58.271203 containerd[1550]: time="2025-01-17T12:06:58.271190117Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:06:58.271525 containerd[1550]: time="2025-01-17T12:06:58.271514266Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:06:58.271626 containerd[1550]: time="2025-01-17T12:06:58.271617386Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 17 12:06:58.271674 containerd[1550]: time="2025-01-17T12:06:58.271662780Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:06:58.271756 containerd[1550]: time="2025-01-17T12:06:58.271744904Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:06:58.271808 containerd[1550]: time="2025-01-17T12:06:58.271798895Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:06:58.271873 containerd[1550]: time="2025-01-17T12:06:58.271845749Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:06:58.271955 containerd[1550]: time="2025-01-17T12:06:58.271944411Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:06:58.272008 containerd[1550]: time="2025-01-17T12:06:58.271998182Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:06:58.272052 containerd[1550]: time="2025-01-17T12:06:58.272042285Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:06:58.272133 containerd[1550]: time="2025-01-17T12:06:58.272094993Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:06:58.272174 containerd[1550]: time="2025-01-17T12:06:58.272166454Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:06:58.272213 containerd[1550]: time="2025-01-17T12:06:58.272203601Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 17 12:06:58.272262 containerd[1550]: time="2025-01-17T12:06:58.272254527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.272301 containerd[1550]: time="2025-01-17T12:06:58.272289344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.272393 containerd[1550]: time="2025-01-17T12:06:58.272355136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.272440 containerd[1550]: time="2025-01-17T12:06:58.272432117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.272473 containerd[1550]: time="2025-01-17T12:06:58.272466372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.272545 containerd[1550]: time="2025-01-17T12:06:58.272533867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.272590 containerd[1550]: time="2025-01-17T12:06:58.272580388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.272636 containerd[1550]: time="2025-01-17T12:06:58.272626621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.272706 containerd[1550]: time="2025-01-17T12:06:58.272697676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.272751 containerd[1550]: time="2025-01-17T12:06:58.272743080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.272790 containerd[1550]: time="2025-01-17T12:06:58.272780199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 17 12:06:58.272827 containerd[1550]: time="2025-01-17T12:06:58.272820195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.272982 containerd[1550]: time="2025-01-17T12:06:58.272894369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.272982 containerd[1550]: time="2025-01-17T12:06:58.272911401Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:06:58.272982 containerd[1550]: time="2025-01-17T12:06:58.272927080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.272982 containerd[1550]: time="2025-01-17T12:06:58.272936496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.272982 containerd[1550]: time="2025-01-17T12:06:58.272942380Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:06:58.273166 containerd[1550]: time="2025-01-17T12:06:58.273113209Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:06:58.273166 containerd[1550]: time="2025-01-17T12:06:58.273135380Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:06:58.273166 containerd[1550]: time="2025-01-17T12:06:58.273148655Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:06:58.273328 containerd[1550]: time="2025-01-17T12:06:58.273315680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:06:58.273522 containerd[1550]: time="2025-01-17T12:06:58.273372681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.273522 containerd[1550]: time="2025-01-17T12:06:58.273397213Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:06:58.273522 containerd[1550]: time="2025-01-17T12:06:58.273407421Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:06:58.273522 containerd[1550]: time="2025-01-17T12:06:58.273414706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:06:58.273763 containerd[1550]: time="2025-01-17T12:06:58.273690797Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:06:58.273986 containerd[1550]: time="2025-01-17T12:06:58.273873011Z" level=info msg="Connect containerd service" Jan 17 12:06:58.273986 containerd[1550]: time="2025-01-17T12:06:58.273898817Z" level=info msg="using legacy CRI server" Jan 17 12:06:58.273986 containerd[1550]: time="2025-01-17T12:06:58.273904942Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:06:58.274199 containerd[1550]: time="2025-01-17T12:06:58.273959012Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:06:58.274551 containerd[1550]: 
time="2025-01-17T12:06:58.274534539Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:06:58.274710 containerd[1550]: time="2025-01-17T12:06:58.274682567Z" level=info msg="Start subscribing containerd event" Jan 17 12:06:58.274848 containerd[1550]: time="2025-01-17T12:06:58.274837933Z" level=info msg="Start recovering state" Jan 17 12:06:58.274922 containerd[1550]: time="2025-01-17T12:06:58.274913409Z" level=info msg="Start event monitor" Jan 17 12:06:58.275066 containerd[1550]: time="2025-01-17T12:06:58.275057938Z" level=info msg="Start snapshots syncer" Jan 17 12:06:58.275131 containerd[1550]: time="2025-01-17T12:06:58.275092703Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:06:58.275131 containerd[1550]: time="2025-01-17T12:06:58.275099407Z" level=info msg="Start streaming server" Jan 17 12:06:58.275282 containerd[1550]: time="2025-01-17T12:06:58.275039435Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:06:58.275282 containerd[1550]: time="2025-01-17T12:06:58.275243057Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:06:58.275839 containerd[1550]: time="2025-01-17T12:06:58.275593548Z" level=info msg="containerd successfully booted in 0.049338s" Jan 17 12:06:58.275689 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:06:58.287261 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:06:58.293623 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:06:58.296641 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:06:58.298221 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 17 12:06:58.380130 tar[1537]: linux-amd64/LICENSE Jan 17 12:06:58.380130 tar[1537]: linux-amd64/README.md Jan 17 12:06:58.387416 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:06:59.022481 systemd-networkd[1473]: ens192: Gained IPv6LL Jan 17 12:06:59.022855 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection. Jan 17 12:06:59.023806 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:06:59.024741 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:06:59.029670 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jan 17 12:06:59.035747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:06:59.039194 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:06:59.073733 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:06:59.074845 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 12:06:59.075171 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Jan 17 12:06:59.075837 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:07:00.677637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:07:00.677974 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:07:00.678533 systemd[1]: Startup finished in 976ms (kernel) + 5.234s (initrd) + 4.847s (userspace) = 11.058s. 
Jan 17 12:07:00.683072 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:07:00.843192 login[1669]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 12:07:00.844784 login[1670]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 12:07:00.853543 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:07:00.858575 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:07:00.860487 systemd-logind[1532]: New session 1 of user core. Jan 17 12:07:00.863413 systemd-logind[1532]: New session 2 of user core. Jan 17 12:07:00.868516 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:07:00.875619 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:07:00.877835 (systemd)[1711]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:07:00.949759 systemd[1711]: Queued start job for default target default.target. Jan 17 12:07:00.957162 systemd[1711]: Created slice app.slice - User Application Slice. Jan 17 12:07:00.957182 systemd[1711]: Reached target paths.target - Paths. Jan 17 12:07:00.957191 systemd[1711]: Reached target timers.target - Timers. Jan 17 12:07:00.957902 systemd[1711]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:07:00.964944 systemd[1711]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:07:00.964977 systemd[1711]: Reached target sockets.target - Sockets. Jan 17 12:07:00.964985 systemd[1711]: Reached target basic.target - Basic System. Jan 17 12:07:00.965010 systemd[1711]: Reached target default.target - Main User Target. Jan 17 12:07:00.965027 systemd[1711]: Startup finished in 82ms. Jan 17 12:07:00.965043 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jan 17 12:07:00.966044 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:07:00.966606 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:07:02.366318 kubelet[1704]: E0117 12:07:02.366270 1704 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:07:02.368175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:07:02.368268 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:07:12.576968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:07:12.588589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:07:12.919774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:07:12.922330 (kubelet)[1754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:07:12.956330 kubelet[1754]: E0117 12:07:12.956297 1754 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:07:12.958939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:07:12.959025 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:07:23.076819 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:07:23.082514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 12:07:23.318286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:07:23.321627 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:07:23.347003 kubelet[1770]: E0117 12:07:23.346934 1770 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:07:23.348172 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:07:23.348251 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:08:42.248409 systemd-timesyncd[1451]: Contacted time server 66.85.78.80:123 (2.flatcar.pool.ntp.org). Jan 17 12:08:42.248451 systemd-timesyncd[1451]: Initial clock synchronization to Fri 2025-01-17 12:08:42.248261 UTC. Jan 17 12:08:42.248493 systemd-resolved[1418]: Clock change detected. Flushing caches. Jan 17 12:08:46.506681 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 12:08:46.518447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:08:46.849952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:08:46.853541 (kubelet)[1785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:08:46.898589 kubelet[1785]: E0117 12:08:46.898545 1785 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:08:46.899989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:08:46.900092 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:08:51.092565 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:08:51.094039 systemd[1]: Started sshd@0-139.178.70.108:22-147.75.109.163:56992.service - OpenSSH per-connection server daemon (147.75.109.163:56992). Jan 17 12:08:51.129789 sshd[1795]: Accepted publickey for core from 147.75.109.163 port 56992 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:08:51.130582 sshd[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:51.132901 systemd-logind[1532]: New session 3 of user core. Jan 17 12:08:51.139387 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:08:51.194532 systemd[1]: Started sshd@1-139.178.70.108:22-147.75.109.163:57000.service - OpenSSH per-connection server daemon (147.75.109.163:57000). Jan 17 12:08:51.227191 sshd[1800]: Accepted publickey for core from 147.75.109.163 port 57000 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:08:51.228201 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:51.231245 systemd-logind[1532]: New session 4 of user core. 
Jan 17 12:08:51.238384 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:08:51.286401 sshd[1800]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:51.294679 systemd[1]: sshd@1-139.178.70.108:22-147.75.109.163:57000.service: Deactivated successfully. Jan 17 12:08:51.295485 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:08:51.296207 systemd-logind[1532]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:08:51.296874 systemd[1]: Started sshd@2-139.178.70.108:22-147.75.109.163:57016.service - OpenSSH per-connection server daemon (147.75.109.163:57016). Jan 17 12:08:51.298473 systemd-logind[1532]: Removed session 4. Jan 17 12:08:51.333748 sshd[1807]: Accepted publickey for core from 147.75.109.163 port 57016 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:08:51.334560 sshd[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:51.336861 systemd-logind[1532]: New session 5 of user core. Jan 17 12:08:51.344400 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:08:51.391130 sshd[1807]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:51.403733 systemd[1]: sshd@2-139.178.70.108:22-147.75.109.163:57016.service: Deactivated successfully. Jan 17 12:08:51.404809 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:08:51.405552 systemd-logind[1532]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:08:51.410481 systemd[1]: Started sshd@3-139.178.70.108:22-147.75.109.163:57032.service - OpenSSH per-connection server daemon (147.75.109.163:57032). Jan 17 12:08:51.411434 systemd-logind[1532]: Removed session 5. 
Jan 17 12:08:51.438950 sshd[1814]: Accepted publickey for core from 147.75.109.163 port 57032 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:08:51.439766 sshd[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:51.442009 systemd-logind[1532]: New session 6 of user core. Jan 17 12:08:51.456408 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:08:51.504570 sshd[1814]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:51.517806 systemd[1]: sshd@3-139.178.70.108:22-147.75.109.163:57032.service: Deactivated successfully. Jan 17 12:08:51.518644 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:08:51.519137 systemd-logind[1532]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:08:51.527521 systemd[1]: Started sshd@4-139.178.70.108:22-147.75.109.163:57036.service - OpenSSH per-connection server daemon (147.75.109.163:57036). Jan 17 12:08:51.528570 systemd-logind[1532]: Removed session 6. Jan 17 12:08:51.555822 sshd[1821]: Accepted publickey for core from 147.75.109.163 port 57036 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:08:51.556587 sshd[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:51.558921 systemd-logind[1532]: New session 7 of user core. Jan 17 12:08:51.567391 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:08:51.622758 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:08:51.623128 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:08:51.631511 sudo[1824]: pam_unix(sudo:session): session closed for user root Jan 17 12:08:51.632509 sshd[1821]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:51.640642 systemd[1]: sshd@4-139.178.70.108:22-147.75.109.163:57036.service: Deactivated successfully. 
Jan 17 12:08:51.641451 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:08:51.642231 systemd-logind[1532]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:08:51.642953 systemd[1]: Started sshd@5-139.178.70.108:22-147.75.109.163:57052.service - OpenSSH per-connection server daemon (147.75.109.163:57052). Jan 17 12:08:51.645480 systemd-logind[1532]: Removed session 7. Jan 17 12:08:51.673984 sshd[1829]: Accepted publickey for core from 147.75.109.163 port 57052 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:08:51.674796 sshd[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:51.678157 systemd-logind[1532]: New session 8 of user core. Jan 17 12:08:51.684394 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:08:51.731750 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:08:51.732084 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:08:51.733848 sudo[1833]: pam_unix(sudo:session): session closed for user root Jan 17 12:08:51.736595 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:08:51.736745 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:08:51.745433 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:08:51.746356 auditctl[1836]: No rules Jan 17 12:08:51.746521 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:08:51.746645 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:08:51.748119 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:08:51.763241 augenrules[1854]: No rules Jan 17 12:08:51.763916 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 17 12:08:51.764621 sudo[1832]: pam_unix(sudo:session): session closed for user root Jan 17 12:08:51.766177 sshd[1829]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:51.769689 systemd[1]: sshd@5-139.178.70.108:22-147.75.109.163:57052.service: Deactivated successfully. Jan 17 12:08:51.770430 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:08:51.771337 systemd-logind[1532]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:08:51.772001 systemd[1]: Started sshd@6-139.178.70.108:22-147.75.109.163:57058.service - OpenSSH per-connection server daemon (147.75.109.163:57058). Jan 17 12:08:51.773431 systemd-logind[1532]: Removed session 8. Jan 17 12:08:51.802864 sshd[1862]: Accepted publickey for core from 147.75.109.163 port 57058 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:08:51.803668 sshd[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:08:51.806234 systemd-logind[1532]: New session 9 of user core. Jan 17 12:08:51.817396 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:08:51.864995 sudo[1865]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:08:51.865155 sudo[1865]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:08:52.133531 (dockerd)[1880]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:08:52.133543 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:08:52.405101 dockerd[1880]: time="2025-01-17T12:08:52.404781943Z" level=info msg="Starting up" Jan 17 12:08:52.480940 dockerd[1880]: time="2025-01-17T12:08:52.480807829Z" level=info msg="Loading containers: start." 
Jan 17 12:08:52.543300 kernel: Initializing XFRM netlink socket Jan 17 12:08:52.597394 systemd-networkd[1473]: docker0: Link UP Jan 17 12:08:52.607012 dockerd[1880]: time="2025-01-17T12:08:52.606962041Z" level=info msg="Loading containers: done." Jan 17 12:08:52.616238 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1056175703-merged.mount: Deactivated successfully. Jan 17 12:08:52.625627 dockerd[1880]: time="2025-01-17T12:08:52.625602769Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:08:52.625682 dockerd[1880]: time="2025-01-17T12:08:52.625669773Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:08:52.625740 dockerd[1880]: time="2025-01-17T12:08:52.625727632Z" level=info msg="Daemon has completed initialization" Jan 17 12:08:52.669325 dockerd[1880]: time="2025-01-17T12:08:52.669013969Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:08:52.669480 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:08:54.004922 containerd[1550]: time="2025-01-17T12:08:54.004895100Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 17 12:08:54.627898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2693168941.mount: Deactivated successfully. Jan 17 12:08:55.553347 update_engine[1533]: I20250117 12:08:55.553304 1533 update_attempter.cc:509] Updating boot flags... 
Jan 17 12:08:55.579732 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2085) Jan 17 12:08:55.697367 containerd[1550]: time="2025-01-17T12:08:55.697334924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:55.697938 containerd[1550]: time="2025-01-17T12:08:55.697912088Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 17 12:08:55.698518 containerd[1550]: time="2025-01-17T12:08:55.698127451Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:55.699635 containerd[1550]: time="2025-01-17T12:08:55.699613888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:55.700294 containerd[1550]: time="2025-01-17T12:08:55.700205844Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.695288547s" Jan 17 12:08:55.700294 containerd[1550]: time="2025-01-17T12:08:55.700226386Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 17 12:08:55.712679 containerd[1550]: time="2025-01-17T12:08:55.712655939Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 17 12:08:57.006575 systemd[1]: kubelet.service: Scheduled 
restart job, restart counter is at 4. Jan 17 12:08:57.012443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:08:57.079887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:08:57.082429 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:08:57.095989 containerd[1550]: time="2025-01-17T12:08:57.095955425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:57.101067 containerd[1550]: time="2025-01-17T12:08:57.101035407Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 17 12:08:57.114143 kubelet[2108]: E0117 12:08:57.114113 2108 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:08:57.115098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:08:57.115177 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 12:08:57.323403 containerd[1550]: time="2025-01-17T12:08:57.323025460Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:57.331444 containerd[1550]: time="2025-01-17T12:08:57.331399113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:57.332425 containerd[1550]: time="2025-01-17T12:08:57.332225376Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.619545019s" Jan 17 12:08:57.332425 containerd[1550]: time="2025-01-17T12:08:57.332249128Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 17 12:08:57.346569 containerd[1550]: time="2025-01-17T12:08:57.346540262Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 17 12:08:58.635880 containerd[1550]: time="2025-01-17T12:08:58.635837116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:58.643149 containerd[1550]: time="2025-01-17T12:08:58.643115611Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 17 12:08:58.657943 containerd[1550]: time="2025-01-17T12:08:58.657907783Z" level=info msg="ImageCreate event 
name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:58.669997 containerd[1550]: time="2025-01-17T12:08:58.669957162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:58.670896 containerd[1550]: time="2025-01-17T12:08:58.670632001Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.32406685s" Jan 17 12:08:58.670896 containerd[1550]: time="2025-01-17T12:08:58.670654873Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 17 12:08:58.687400 containerd[1550]: time="2025-01-17T12:08:58.687345981Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 17 12:08:59.752240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1949863726.mount: Deactivated successfully. 
Jan 17 12:09:00.261047 containerd[1550]: time="2025-01-17T12:09:00.260970099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:00.261494 containerd[1550]: time="2025-01-17T12:09:00.261460962Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 17 12:09:00.261999 containerd[1550]: time="2025-01-17T12:09:00.261968565Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:00.263213 containerd[1550]: time="2025-01-17T12:09:00.263179446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:00.263820 containerd[1550]: time="2025-01-17T12:09:00.263716406Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.576205984s" Jan 17 12:09:00.263820 containerd[1550]: time="2025-01-17T12:09:00.263740667Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 17 12:09:00.279606 containerd[1550]: time="2025-01-17T12:09:00.279577096Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:09:00.757261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3134047066.mount: Deactivated successfully. 
Jan 17 12:09:01.369705 containerd[1550]: time="2025-01-17T12:09:01.368743455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:01.369705 containerd[1550]: time="2025-01-17T12:09:01.369650167Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 17 12:09:01.380320 containerd[1550]: time="2025-01-17T12:09:01.380291479Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:01.388355 containerd[1550]: time="2025-01-17T12:09:01.388321675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:01.389067 containerd[1550]: time="2025-01-17T12:09:01.389024867Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.109400846s" Jan 17 12:09:01.389067 containerd[1550]: time="2025-01-17T12:09:01.389054023Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:09:01.404586 containerd[1550]: time="2025-01-17T12:09:01.404562723Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:09:01.918714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount684183597.mount: Deactivated successfully. 
Jan 17 12:09:01.920382 containerd[1550]: time="2025-01-17T12:09:01.920304638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:01.920757 containerd[1550]: time="2025-01-17T12:09:01.920733662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 17 12:09:01.921323 containerd[1550]: time="2025-01-17T12:09:01.920781835Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:01.921982 containerd[1550]: time="2025-01-17T12:09:01.921959577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:01.922666 containerd[1550]: time="2025-01-17T12:09:01.922429462Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 517.714189ms" Jan 17 12:09:01.922666 containerd[1550]: time="2025-01-17T12:09:01.922446133Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:09:01.934303 containerd[1550]: time="2025-01-17T12:09:01.934268690Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 17 12:09:02.389767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount873428064.mount: Deactivated successfully. 
Jan 17 12:09:04.623359 containerd[1550]: time="2025-01-17T12:09:04.623328239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:04.624317 containerd[1550]: time="2025-01-17T12:09:04.624297809Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 17 12:09:04.624775 containerd[1550]: time="2025-01-17T12:09:04.624753286Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:04.626399 containerd[1550]: time="2025-01-17T12:09:04.626381686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:04.627157 containerd[1550]: time="2025-01-17T12:09:04.627046742Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.692749431s" Jan 17 12:09:04.627157 containerd[1550]: time="2025-01-17T12:09:04.627064694Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 17 12:09:06.950969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:06.963692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:06.977941 systemd[1]: Reloading requested from client PID 2305 ('systemctl') (unit session-9.scope)... Jan 17 12:09:06.978041 systemd[1]: Reloading... 
Jan 17 12:09:07.046306 zram_generator::config[2343]: No configuration found. Jan 17 12:09:07.104055 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 17 12:09:07.118915 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:09:07.161686 systemd[1]: Reloading finished in 183 ms. Jan 17 12:09:07.200516 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:09:07.200573 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:09:07.200697 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:07.206457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:07.636408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:07.639181 (kubelet)[2410]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:09:07.664436 kubelet[2410]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:09:07.664436 kubelet[2410]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:09:07.664436 kubelet[2410]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 12:09:07.672995 kubelet[2410]: I0117 12:09:07.672965 2410 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:09:07.953561 kubelet[2410]: I0117 12:09:07.953506 2410 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:09:07.953561 kubelet[2410]: I0117 12:09:07.953524 2410 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:09:07.953791 kubelet[2410]: I0117 12:09:07.953673 2410 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:09:07.970402 kubelet[2410]: I0117 12:09:07.970384 2410 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:09:07.971680 kubelet[2410]: E0117 12:09:07.971632 2410 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:07.983583 kubelet[2410]: I0117 12:09:07.983542 2410 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:09:07.983691 kubelet[2410]: I0117 12:09:07.983672 2410 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:09:07.984912 kubelet[2410]: I0117 12:09:07.983693 2410 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:09:07.984984 kubelet[2410]: I0117 12:09:07.984923 2410 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 
12:09:07.984984 kubelet[2410]: I0117 12:09:07.984929 2410 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:09:07.985718 kubelet[2410]: I0117 12:09:07.985706 2410 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:09:07.986397 kubelet[2410]: I0117 12:09:07.986387 2410 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:09:07.986423 kubelet[2410]: I0117 12:09:07.986399 2410 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:09:07.986952 kubelet[2410]: I0117 12:09:07.986939 2410 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:09:07.986977 kubelet[2410]: I0117 12:09:07.986958 2410 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:09:07.989387 kubelet[2410]: W0117 12:09:07.989210 2410 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:07.989387 kubelet[2410]: E0117 12:09:07.989242 2410 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:07.989594 kubelet[2410]: W0117 12:09:07.989577 2410 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:07.989637 kubelet[2410]: E0117 12:09:07.989632 2410 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
139.178.70.108:6443: connect: connection refused Jan 17 12:09:07.989714 kubelet[2410]: I0117 12:09:07.989707 2410 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:09:07.990819 kubelet[2410]: I0117 12:09:07.990810 2410 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:09:07.991296 kubelet[2410]: W0117 12:09:07.990874 2410 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:09:07.991296 kubelet[2410]: I0117 12:09:07.991268 2410 server.go:1264] "Started kubelet" Jan 17 12:09:07.998886 kubelet[2410]: I0117 12:09:07.998876 2410 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:09:08.003509 kubelet[2410]: E0117 12:09:08.003448 2410 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.108:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b7991b4c16f8a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 12:09:07.991252874 +0000 UTC m=+0.349860030,LastTimestamp:2025-01-17 12:09:07.991252874 +0000 UTC m=+0.349860030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 12:09:08.004263 kubelet[2410]: I0117 12:09:08.004254 2410 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:09:08.005240 kubelet[2410]: I0117 12:09:08.005223 2410 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:09:08.005822 kubelet[2410]: I0117 
12:09:08.005808 2410 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:09:08.006403 kubelet[2410]: I0117 12:09:08.006212 2410 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:09:08.006403 kubelet[2410]: I0117 12:09:08.006336 2410 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:09:08.006747 kubelet[2410]: I0117 12:09:08.006555 2410 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:09:08.006747 kubelet[2410]: I0117 12:09:08.006588 2410 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:09:08.007479 kubelet[2410]: W0117 12:09:08.007093 2410 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:08.007479 kubelet[2410]: E0117 12:09:08.007119 2410 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:08.007479 kubelet[2410]: E0117 12:09:08.007149 2410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="200ms" Jan 17 12:09:08.007479 kubelet[2410]: I0117 12:09:08.007297 2410 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:09:08.007479 kubelet[2410]: I0117 12:09:08.007333 2410 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial 
unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:09:08.009130 kubelet[2410]: I0117 12:09:08.009122 2410 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:09:08.013644 kubelet[2410]: I0117 12:09:08.013618 2410 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:09:08.014763 kubelet[2410]: I0117 12:09:08.014748 2410 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:09:08.014800 kubelet[2410]: I0117 12:09:08.014764 2410 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:09:08.014800 kubelet[2410]: I0117 12:09:08.014777 2410 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:09:08.014842 kubelet[2410]: E0117 12:09:08.014801 2410 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:09:08.018761 kubelet[2410]: W0117 12:09:08.018731 2410 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:08.018811 kubelet[2410]: E0117 12:09:08.018771 2410 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:08.019096 kubelet[2410]: E0117 12:09:08.019080 2410 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:09:08.034099 kubelet[2410]: I0117 12:09:08.034084 2410 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:09:08.034099 kubelet[2410]: I0117 12:09:08.034092 2410 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:09:08.034178 kubelet[2410]: I0117 12:09:08.034106 2410 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:09:08.035400 kubelet[2410]: I0117 12:09:08.035387 2410 policy_none.go:49] "None policy: Start" Jan 17 12:09:08.035697 kubelet[2410]: I0117 12:09:08.035686 2410 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:09:08.035726 kubelet[2410]: I0117 12:09:08.035699 2410 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:09:08.039467 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:09:08.050653 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:09:08.052479 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 17 12:09:08.055706 kubelet[2410]: I0117 12:09:08.055692 2410 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:09:08.055954 kubelet[2410]: I0117 12:09:08.055932 2410 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:09:08.056000 kubelet[2410]: I0117 12:09:08.055991 2410 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:09:08.056893 kubelet[2410]: E0117 12:09:08.056877 2410 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 12:09:08.105308 kubelet[2410]: I0117 12:09:08.105268 2410 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:08.105564 kubelet[2410]: E0117 12:09:08.105544 2410 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jan 17 12:09:08.115837 kubelet[2410]: I0117 12:09:08.115777 2410 topology_manager.go:215] "Topology Admit Handler" podUID="694aeae7855852dc955709a7398cd61d" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:09:08.116436 kubelet[2410]: I0117 12:09:08.116422 2410 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:09:08.117621 kubelet[2410]: I0117 12:09:08.117350 2410 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:09:08.121801 systemd[1]: Created slice kubepods-burstable-pod694aeae7855852dc955709a7398cd61d.slice - libcontainer container kubepods-burstable-pod694aeae7855852dc955709a7398cd61d.slice. 
Jan 17 12:09:08.143017 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. Jan 17 12:09:08.146943 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. Jan 17 12:09:08.207626 kubelet[2410]: E0117 12:09:08.207541 2410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="400ms" Jan 17 12:09:08.207901 kubelet[2410]: I0117 12:09:08.207760 2410 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/694aeae7855852dc955709a7398cd61d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"694aeae7855852dc955709a7398cd61d\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:08.207901 kubelet[2410]: I0117 12:09:08.207781 2410 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:08.207901 kubelet[2410]: I0117 12:09:08.207796 2410 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:08.207901 kubelet[2410]: I0117 12:09:08.207808 
2410 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:08.207901 kubelet[2410]: I0117 12:09:08.207821 2410 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:09:08.208027 kubelet[2410]: I0117 12:09:08.207832 2410 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/694aeae7855852dc955709a7398cd61d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"694aeae7855852dc955709a7398cd61d\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:08.208027 kubelet[2410]: I0117 12:09:08.207844 2410 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/694aeae7855852dc955709a7398cd61d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"694aeae7855852dc955709a7398cd61d\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:08.208027 kubelet[2410]: I0117 12:09:08.207856 2410 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:08.208027 kubelet[2410]: I0117 12:09:08.207870 2410 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:08.307233 kubelet[2410]: I0117 12:09:08.307160 2410 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:08.307501 kubelet[2410]: E0117 12:09:08.307482 2410 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jan 17 12:09:08.442111 containerd[1550]: time="2025-01-17T12:09:08.442071529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:694aeae7855852dc955709a7398cd61d,Namespace:kube-system,Attempt:0,}" Jan 17 12:09:08.452050 containerd[1550]: time="2025-01-17T12:09:08.451804792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 17 12:09:08.452050 containerd[1550]: time="2025-01-17T12:09:08.452044299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 17 12:09:08.608565 kubelet[2410]: E0117 12:09:08.608528 2410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="800ms" Jan 17 12:09:08.710784 kubelet[2410]: I0117 12:09:08.710720 2410 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:08.711246 kubelet[2410]: E0117 12:09:08.710926 2410 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jan 17 12:09:08.828942 kubelet[2410]: W0117 12:09:08.828871 2410 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:08.828942 kubelet[2410]: E0117 12:09:08.828914 2410 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:09.016639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount663743282.mount: Deactivated successfully. Jan 17 12:09:09.084038 containerd[1550]: time="2025-01-17T12:09:09.083379245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:09:09.088004 containerd[1550]: time="2025-01-17T12:09:09.087972692Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:09:09.092728 kubelet[2410]: W0117 12:09:09.092671 2410 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:09.092728 kubelet[2410]: E0117 12:09:09.092715 2410 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:09.095940 containerd[1550]: time="2025-01-17T12:09:09.095904342Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:09:09.102869 containerd[1550]: time="2025-01-17T12:09:09.102849019Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:09:09.106443 containerd[1550]: time="2025-01-17T12:09:09.106418804Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:09:09.108809 containerd[1550]: time="2025-01-17T12:09:09.108790818Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:09:09.113843 containerd[1550]: time="2025-01-17T12:09:09.113818722Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:09:09.118938 containerd[1550]: time="2025-01-17T12:09:09.118910918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:09:09.119583 containerd[1550]: time="2025-01-17T12:09:09.119422484Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"311286\" in 667.574277ms" Jan 17 12:09:09.120668 containerd[1550]: time="2025-01-17T12:09:09.120650590Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 678.501252ms" Jan 17 12:09:09.126965 containerd[1550]: time="2025-01-17T12:09:09.126916014Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 674.840769ms" Jan 17 12:09:09.260395 kubelet[2410]: W0117 12:09:09.260371 2410 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:09.261136 kubelet[2410]: E0117 12:09:09.261102 2410 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:09.274044 containerd[1550]: time="2025-01-17T12:09:09.273906711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:09:09.275096 containerd[1550]: time="2025-01-17T12:09:09.275069460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:09:09.275327 containerd[1550]: time="2025-01-17T12:09:09.275237807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:09:09.275327 containerd[1550]: time="2025-01-17T12:09:09.275271680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:09:09.275327 containerd[1550]: time="2025-01-17T12:09:09.275082872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:09.275327 containerd[1550]: time="2025-01-17T12:09:09.275242564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:09.275477 containerd[1550]: time="2025-01-17T12:09:09.275333379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:09.275477 containerd[1550]: time="2025-01-17T12:09:09.275378104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:09.276953 containerd[1550]: time="2025-01-17T12:09:09.276787445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:09:09.276953 containerd[1550]: time="2025-01-17T12:09:09.276810590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:09:09.276953 containerd[1550]: time="2025-01-17T12:09:09.276828469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:09.276953 containerd[1550]: time="2025-01-17T12:09:09.276885567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:09.308380 systemd[1]: Started cri-containerd-92363d6c952a59d7ca9d1e74c81117081563c0804a7ee55f69d32f92ea0ba8cb.scope - libcontainer container 92363d6c952a59d7ca9d1e74c81117081563c0804a7ee55f69d32f92ea0ba8cb. Jan 17 12:09:09.309697 systemd[1]: Started cri-containerd-db75b2b46441cc14a838fa9bc3324e4f911e623f4d5ac7773a8914f5e16c5c03.scope - libcontainer container db75b2b46441cc14a838fa9bc3324e4f911e623f4d5ac7773a8914f5e16c5c03. Jan 17 12:09:09.313745 systemd[1]: Started cri-containerd-d68be7a81cb8a22663b4f70e0c34bfa75f8ac3872b7fe945a3bbf940af49626e.scope - libcontainer container d68be7a81cb8a22663b4f70e0c34bfa75f8ac3872b7fe945a3bbf940af49626e. Jan 17 12:09:09.358791 containerd[1550]: time="2025-01-17T12:09:09.358731445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:694aeae7855852dc955709a7398cd61d,Namespace:kube-system,Attempt:0,} returns sandbox id \"92363d6c952a59d7ca9d1e74c81117081563c0804a7ee55f69d32f92ea0ba8cb\"" Jan 17 12:09:09.371999 containerd[1550]: time="2025-01-17T12:09:09.371975913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"db75b2b46441cc14a838fa9bc3324e4f911e623f4d5ac7773a8914f5e16c5c03\"" Jan 17 12:09:09.382755 containerd[1550]: time="2025-01-17T12:09:09.382737741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d68be7a81cb8a22663b4f70e0c34bfa75f8ac3872b7fe945a3bbf940af49626e\"" Jan 17 12:09:09.397319 containerd[1550]: time="2025-01-17T12:09:09.397294011Z" level=info 
msg="CreateContainer within sandbox \"d68be7a81cb8a22663b4f70e0c34bfa75f8ac3872b7fe945a3bbf940af49626e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:09:09.397451 containerd[1550]: time="2025-01-17T12:09:09.397418591Z" level=info msg="CreateContainer within sandbox \"92363d6c952a59d7ca9d1e74c81117081563c0804a7ee55f69d32f92ea0ba8cb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:09:09.409003 kubelet[2410]: E0117 12:09:09.408980 2410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="1.6s" Jan 17 12:09:09.409158 containerd[1550]: time="2025-01-17T12:09:09.409119152Z" level=info msg="CreateContainer within sandbox \"db75b2b46441cc14a838fa9bc3324e4f911e623f4d5ac7773a8914f5e16c5c03\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:09:09.474386 kubelet[2410]: W0117 12:09:09.474341 2410 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:09.474516 kubelet[2410]: E0117 12:09:09.474502 2410 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 12:09:09.512678 kubelet[2410]: I0117 12:09:09.512664 2410 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:09.512973 kubelet[2410]: E0117 12:09:09.512956 2410 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jan 17 12:09:09.727508 containerd[1550]: time="2025-01-17T12:09:09.727429716Z" level=info msg="CreateContainer within sandbox \"db75b2b46441cc14a838fa9bc3324e4f911e623f4d5ac7773a8914f5e16c5c03\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b14a45b2c6fb133aec573ae0c9a0da852963a67f3cffdb9274e40081b99f4a54\"" Jan 17 12:09:09.728878 containerd[1550]: time="2025-01-17T12:09:09.728842419Z" level=info msg="StartContainer for \"b14a45b2c6fb133aec573ae0c9a0da852963a67f3cffdb9274e40081b99f4a54\"" Jan 17 12:09:09.740589 containerd[1550]: time="2025-01-17T12:09:09.740564571Z" level=info msg="CreateContainer within sandbox \"d68be7a81cb8a22663b4f70e0c34bfa75f8ac3872b7fe945a3bbf940af49626e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a4b48531ecc3eb325b05481d8e0cd21e064a125d2875d3802a56adcb9f4dbae1\"" Jan 17 12:09:09.741031 containerd[1550]: time="2025-01-17T12:09:09.740955638Z" level=info msg="StartContainer for \"a4b48531ecc3eb325b05481d8e0cd21e064a125d2875d3802a56adcb9f4dbae1\"" Jan 17 12:09:09.750396 systemd[1]: Started cri-containerd-b14a45b2c6fb133aec573ae0c9a0da852963a67f3cffdb9274e40081b99f4a54.scope - libcontainer container b14a45b2c6fb133aec573ae0c9a0da852963a67f3cffdb9274e40081b99f4a54. 
Jan 17 12:09:09.755566 containerd[1550]: time="2025-01-17T12:09:09.755538547Z" level=info msg="CreateContainer within sandbox \"92363d6c952a59d7ca9d1e74c81117081563c0804a7ee55f69d32f92ea0ba8cb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d2651987b87c5c2975f4bbdd439a87cb267d6d09698a3fdd052a54fd6933e1ba\"" Jan 17 12:09:09.756985 containerd[1550]: time="2025-01-17T12:09:09.756065728Z" level=info msg="StartContainer for \"d2651987b87c5c2975f4bbdd439a87cb267d6d09698a3fdd052a54fd6933e1ba\"" Jan 17 12:09:09.761390 systemd[1]: Started cri-containerd-a4b48531ecc3eb325b05481d8e0cd21e064a125d2875d3802a56adcb9f4dbae1.scope - libcontainer container a4b48531ecc3eb325b05481d8e0cd21e064a125d2875d3802a56adcb9f4dbae1. Jan 17 12:09:09.784401 systemd[1]: Started cri-containerd-d2651987b87c5c2975f4bbdd439a87cb267d6d09698a3fdd052a54fd6933e1ba.scope - libcontainer container d2651987b87c5c2975f4bbdd439a87cb267d6d09698a3fdd052a54fd6933e1ba. Jan 17 12:09:09.807022 containerd[1550]: time="2025-01-17T12:09:09.806993085Z" level=info msg="StartContainer for \"b14a45b2c6fb133aec573ae0c9a0da852963a67f3cffdb9274e40081b99f4a54\" returns successfully" Jan 17 12:09:09.807447 containerd[1550]: time="2025-01-17T12:09:09.807104433Z" level=info msg="StartContainer for \"a4b48531ecc3eb325b05481d8e0cd21e064a125d2875d3802a56adcb9f4dbae1\" returns successfully" Jan 17 12:09:09.831609 containerd[1550]: time="2025-01-17T12:09:09.831532531Z" level=info msg="StartContainer for \"d2651987b87c5c2975f4bbdd439a87cb267d6d09698a3fdd052a54fd6933e1ba\" returns successfully" Jan 17 12:09:10.015159 kubelet[2410]: E0117 12:09:10.015133 2410 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.108:6443: connect: connection refused Jan 17 
12:09:11.045057 kubelet[2410]: E0117 12:09:11.045027 2410 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 12:09:11.115456 kubelet[2410]: I0117 12:09:11.115337 2410 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:11.126130 kubelet[2410]: I0117 12:09:11.126106 2410 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:09:11.131679 kubelet[2410]: E0117 12:09:11.131657 2410 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:09:11.232640 kubelet[2410]: E0117 12:09:11.232606 2410 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:09:11.333303 kubelet[2410]: E0117 12:09:11.333200 2410 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:09:11.433904 kubelet[2410]: E0117 12:09:11.433866 2410 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:09:11.534629 kubelet[2410]: E0117 12:09:11.534600 2410 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:09:11.993431 kubelet[2410]: I0117 12:09:11.993406 2410 apiserver.go:52] "Watching apiserver" Jan 17 12:09:12.007069 kubelet[2410]: I0117 12:09:12.007034 2410 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:09:12.659369 systemd[1]: Reloading requested from client PID 2686 ('systemctl') (unit session-9.scope)... Jan 17 12:09:12.659380 systemd[1]: Reloading... Jan 17 12:09:12.712318 zram_generator::config[2723]: No configuration found. 
Jan 17 12:09:12.772649 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 17 12:09:12.787396 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:09:12.835991 systemd[1]: Reloading finished in 176 ms. Jan 17 12:09:12.861389 kubelet[2410]: E0117 12:09:12.861236 2410 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.181b7991b4c16f8a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 12:09:07.991252874 +0000 UTC m=+0.349860030,LastTimestamp:2025-01-17 12:09:07.991252874 +0000 UTC m=+0.349860030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 12:09:12.861475 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:12.874926 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:09:12.875060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:12.879490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:13.084004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:09:13.090651 (kubelet)[2790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:09:13.157019 kubelet[2790]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:09:13.157019 kubelet[2790]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:09:13.157019 kubelet[2790]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:09:13.157242 kubelet[2790]: I0117 12:09:13.157059 2790 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:09:13.159918 kubelet[2790]: I0117 12:09:13.159892 2790 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:09:13.159918 kubelet[2790]: I0117 12:09:13.159906 2790 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:09:13.162298 kubelet[2790]: I0117 12:09:13.160838 2790 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:09:13.162298 kubelet[2790]: I0117 12:09:13.161603 2790 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:09:13.162600 kubelet[2790]: I0117 12:09:13.162593 2790 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:09:13.175786 kubelet[2790]: I0117 12:09:13.175769 2790 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:09:13.191216 kubelet[2790]: I0117 12:09:13.191189 2790 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:09:13.191504 kubelet[2790]: I0117 12:09:13.191216 2790 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:09:13.191562 kubelet[2790]: I0117 12:09:13.191514 2790 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 
12:09:13.191562 kubelet[2790]: I0117 12:09:13.191523 2790 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:09:13.191562 kubelet[2790]: I0117 12:09:13.191550 2790 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:09:13.191619 kubelet[2790]: I0117 12:09:13.191609 2790 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:09:13.191619 kubelet[2790]: I0117 12:09:13.191616 2790 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:09:13.191650 kubelet[2790]: I0117 12:09:13.191628 2790 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:09:13.191650 kubelet[2790]: I0117 12:09:13.191640 2790 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:09:13.199756 kubelet[2790]: I0117 12:09:13.199740 2790 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:09:13.199926 kubelet[2790]: I0117 12:09:13.199919 2790 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:09:13.213787 kubelet[2790]: I0117 12:09:13.212897 2790 server.go:1264] "Started kubelet" Jan 17 12:09:13.216055 kubelet[2790]: I0117 12:09:13.214905 2790 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:09:13.219901 kubelet[2790]: I0117 12:09:13.219798 2790 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:09:13.220482 kubelet[2790]: I0117 12:09:13.220469 2790 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:09:13.221765 kubelet[2790]: I0117 12:09:13.221736 2790 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:09:13.221853 kubelet[2790]: I0117 12:09:13.221843 2790 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:09:13.222258 kubelet[2790]: I0117 12:09:13.222249 2790 
volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:09:13.222319 kubelet[2790]: I0117 12:09:13.222310 2790 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:09:13.222381 kubelet[2790]: I0117 12:09:13.222372 2790 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:09:13.225353 kubelet[2790]: I0117 12:09:13.224953 2790 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:09:13.225353 kubelet[2790]: I0117 12:09:13.225011 2790 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:09:13.226322 kubelet[2790]: I0117 12:09:13.225650 2790 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:09:13.228393 kubelet[2790]: I0117 12:09:13.228378 2790 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:09:13.228963 kubelet[2790]: I0117 12:09:13.228948 2790 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:09:13.228963 kubelet[2790]: I0117 12:09:13.228965 2790 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:09:13.229009 kubelet[2790]: I0117 12:09:13.228975 2790 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:09:13.229009 kubelet[2790]: E0117 12:09:13.228995 2790 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:09:13.259890 kubelet[2790]: I0117 12:09:13.259859 2790 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:09:13.260169 kubelet[2790]: I0117 12:09:13.259986 2790 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:09:13.260169 kubelet[2790]: I0117 12:09:13.259999 2790 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:09:13.260169 kubelet[2790]: I0117 12:09:13.260085 2790 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:09:13.260169 kubelet[2790]: I0117 12:09:13.260091 2790 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:09:13.260169 kubelet[2790]: I0117 12:09:13.260102 2790 policy_none.go:49] "None policy: Start" Jan 17 12:09:13.260617 kubelet[2790]: I0117 12:09:13.260461 2790 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:09:13.260617 kubelet[2790]: I0117 12:09:13.260472 2790 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:09:13.260694 kubelet[2790]: I0117 12:09:13.260606 2790 state_mem.go:75] "Updated machine memory state" Jan 17 12:09:13.263050 kubelet[2790]: I0117 12:09:13.263004 2790 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:09:13.263118 kubelet[2790]: I0117 12:09:13.263097 2790 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:09:13.263764 kubelet[2790]: I0117 12:09:13.263690 2790 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:09:13.324391 kubelet[2790]: I0117 12:09:13.324287 2790 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:13.328188 kubelet[2790]: I0117 12:09:13.328143 2790 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 17 12:09:13.328746 kubelet[2790]: I0117 12:09:13.328677 2790 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:09:13.329529 kubelet[2790]: I0117 12:09:13.329348 2790 topology_manager.go:215] "Topology Admit Handler" podUID="694aeae7855852dc955709a7398cd61d" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:09:13.329529 kubelet[2790]: I0117 12:09:13.329414 2790 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:09:13.329529 kubelet[2790]: I0117 12:09:13.329454 2790 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:09:13.424369 kubelet[2790]: I0117 12:09:13.424194 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:13.424369 kubelet[2790]: I0117 12:09:13.424235 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/694aeae7855852dc955709a7398cd61d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"694aeae7855852dc955709a7398cd61d\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:13.424369 kubelet[2790]: I0117 
12:09:13.424252 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:13.424369 kubelet[2790]: I0117 12:09:13.424265 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:13.424369 kubelet[2790]: I0117 12:09:13.424277 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:13.424555 kubelet[2790]: I0117 12:09:13.424329 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/694aeae7855852dc955709a7398cd61d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"694aeae7855852dc955709a7398cd61d\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:13.424555 kubelet[2790]: I0117 12:09:13.424357 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/694aeae7855852dc955709a7398cd61d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"694aeae7855852dc955709a7398cd61d\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:13.424555 kubelet[2790]: 
I0117 12:09:13.424375 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:13.424555 kubelet[2790]: I0117 12:09:13.424393 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:09:14.192767 kubelet[2790]: I0117 12:09:14.192739 2790 apiserver.go:52] "Watching apiserver" Jan 17 12:09:14.223163 kubelet[2790]: I0117 12:09:14.223141 2790 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:09:14.300888 kubelet[2790]: I0117 12:09:14.300846 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.3008337970000001 podStartE2EDuration="1.300833797s" podCreationTimestamp="2025-01-17 12:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:09:14.290080692 +0000 UTC m=+1.180995605" watchObservedRunningTime="2025-01-17 12:09:14.300833797 +0000 UTC m=+1.191748705" Jan 17 12:09:14.314300 kubelet[2790]: I0117 12:09:14.314222 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3142099919999999 podStartE2EDuration="1.314209992s" podCreationTimestamp="2025-01-17 12:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-01-17 12:09:14.301380538 +0000 UTC m=+1.192295451" watchObservedRunningTime="2025-01-17 12:09:14.314209992 +0000 UTC m=+1.205124906" Jan 17 12:09:14.341890 kubelet[2790]: I0117 12:09:14.341782 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.341768997 podStartE2EDuration="1.341768997s" podCreationTimestamp="2025-01-17 12:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:09:14.315799193 +0000 UTC m=+1.206714106" watchObservedRunningTime="2025-01-17 12:09:14.341768997 +0000 UTC m=+1.232683910" Jan 17 12:09:17.423329 sudo[1865]: pam_unix(sudo:session): session closed for user root Jan 17 12:09:17.425541 sshd[1862]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:17.427111 systemd[1]: sshd@6-139.178.70.108:22-147.75.109.163:57058.service: Deactivated successfully. Jan 17 12:09:17.428538 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:09:17.428679 systemd[1]: session-9.scope: Consumed 3.281s CPU time, 188.8M memory peak, 0B memory swap peak. Jan 17 12:09:17.429412 systemd-logind[1532]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:09:17.429981 systemd-logind[1532]: Removed session 9. Jan 17 12:09:27.598815 kubelet[2790]: I0117 12:09:27.598012 2790 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:09:27.599127 containerd[1550]: time="2025-01-17T12:09:27.598740561Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 17 12:09:27.600855 kubelet[2790]: I0117 12:09:27.600113 2790 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:09:28.475261 kubelet[2790]: I0117 12:09:28.475136 2790 topology_manager.go:215] "Topology Admit Handler" podUID="e51ad567-eb9e-4674-b075-73abba1b07bf" podNamespace="kube-system" podName="kube-proxy-v5zmx" Jan 17 12:09:28.484560 systemd[1]: Created slice kubepods-besteffort-pode51ad567_eb9e_4674_b075_73abba1b07bf.slice - libcontainer container kubepods-besteffort-pode51ad567_eb9e_4674_b075_73abba1b07bf.slice. Jan 17 12:09:28.524509 kubelet[2790]: I0117 12:09:28.524403 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e51ad567-eb9e-4674-b075-73abba1b07bf-kube-proxy\") pod \"kube-proxy-v5zmx\" (UID: \"e51ad567-eb9e-4674-b075-73abba1b07bf\") " pod="kube-system/kube-proxy-v5zmx" Jan 17 12:09:28.524509 kubelet[2790]: I0117 12:09:28.524441 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e51ad567-eb9e-4674-b075-73abba1b07bf-lib-modules\") pod \"kube-proxy-v5zmx\" (UID: \"e51ad567-eb9e-4674-b075-73abba1b07bf\") " pod="kube-system/kube-proxy-v5zmx" Jan 17 12:09:28.524509 kubelet[2790]: I0117 12:09:28.524455 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2668r\" (UniqueName: \"kubernetes.io/projected/e51ad567-eb9e-4674-b075-73abba1b07bf-kube-api-access-2668r\") pod \"kube-proxy-v5zmx\" (UID: \"e51ad567-eb9e-4674-b075-73abba1b07bf\") " pod="kube-system/kube-proxy-v5zmx" Jan 17 12:09:28.524509 kubelet[2790]: I0117 12:09:28.524473 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e51ad567-eb9e-4674-b075-73abba1b07bf-xtables-lock\") pod 
\"kube-proxy-v5zmx\" (UID: \"e51ad567-eb9e-4674-b075-73abba1b07bf\") " pod="kube-system/kube-proxy-v5zmx" Jan 17 12:09:28.602523 kubelet[2790]: I0117 12:09:28.602405 2790 topology_manager.go:215] "Topology Admit Handler" podUID="3e141beb-da8e-4a98-9cd2-3fd9e4885861" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-2955j" Jan 17 12:09:28.608333 systemd[1]: Created slice kubepods-besteffort-pod3e141beb_da8e_4a98_9cd2_3fd9e4885861.slice - libcontainer container kubepods-besteffort-pod3e141beb_da8e_4a98_9cd2_3fd9e4885861.slice. Jan 17 12:09:28.625062 kubelet[2790]: I0117 12:09:28.625041 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcjzr\" (UniqueName: \"kubernetes.io/projected/3e141beb-da8e-4a98-9cd2-3fd9e4885861-kube-api-access-qcjzr\") pod \"tigera-operator-7bc55997bb-2955j\" (UID: \"3e141beb-da8e-4a98-9cd2-3fd9e4885861\") " pod="tigera-operator/tigera-operator-7bc55997bb-2955j" Jan 17 12:09:28.625363 kubelet[2790]: I0117 12:09:28.625210 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3e141beb-da8e-4a98-9cd2-3fd9e4885861-var-lib-calico\") pod \"tigera-operator-7bc55997bb-2955j\" (UID: \"3e141beb-da8e-4a98-9cd2-3fd9e4885861\") " pod="tigera-operator/tigera-operator-7bc55997bb-2955j" Jan 17 12:09:28.793550 containerd[1550]: time="2025-01-17T12:09:28.793458721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v5zmx,Uid:e51ad567-eb9e-4674-b075-73abba1b07bf,Namespace:kube-system,Attempt:0,}" Jan 17 12:09:28.910601 containerd[1550]: time="2025-01-17T12:09:28.910326796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-2955j,Uid:3e141beb-da8e-4a98-9cd2-3fd9e4885861,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:09:28.922029 containerd[1550]: time="2025-01-17T12:09:28.921964482Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:09:28.922029 containerd[1550]: time="2025-01-17T12:09:28.922010066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:09:28.922195 containerd[1550]: time="2025-01-17T12:09:28.922018224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:28.922195 containerd[1550]: time="2025-01-17T12:09:28.922066330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:28.939387 systemd[1]: Started cri-containerd-f7ad2d4b1246fab5be786db780144a26b22efad9f0f8132f89f1acea45cc1540.scope - libcontainer container f7ad2d4b1246fab5be786db780144a26b22efad9f0f8132f89f1acea45cc1540. Jan 17 12:09:28.952692 containerd[1550]: time="2025-01-17T12:09:28.952658623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v5zmx,Uid:e51ad567-eb9e-4674-b075-73abba1b07bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7ad2d4b1246fab5be786db780144a26b22efad9f0f8132f89f1acea45cc1540\"" Jan 17 12:09:28.954779 containerd[1550]: time="2025-01-17T12:09:28.954691146Z" level=info msg="CreateContainer within sandbox \"f7ad2d4b1246fab5be786db780144a26b22efad9f0f8132f89f1acea45cc1540\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:09:29.055989 containerd[1550]: time="2025-01-17T12:09:29.055879441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:09:29.055989 containerd[1550]: time="2025-01-17T12:09:29.055913241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:09:29.055989 containerd[1550]: time="2025-01-17T12:09:29.055920447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:29.056276 containerd[1550]: time="2025-01-17T12:09:29.055972771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:29.067388 systemd[1]: Started cri-containerd-4fc375bf84017020063d5de152eec32c4e81beb2bb0489239b5ebd990ab53be2.scope - libcontainer container 4fc375bf84017020063d5de152eec32c4e81beb2bb0489239b5ebd990ab53be2. Jan 17 12:09:29.098114 containerd[1550]: time="2025-01-17T12:09:29.097980551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-2955j,Uid:3e141beb-da8e-4a98-9cd2-3fd9e4885861,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4fc375bf84017020063d5de152eec32c4e81beb2bb0489239b5ebd990ab53be2\"" Jan 17 12:09:29.100084 containerd[1550]: time="2025-01-17T12:09:29.099971411Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:09:29.106053 containerd[1550]: time="2025-01-17T12:09:29.106020455Z" level=info msg="CreateContainer within sandbox \"f7ad2d4b1246fab5be786db780144a26b22efad9f0f8132f89f1acea45cc1540\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"07504380db4300110b0c97bd9d14436e4c96b95b7d6af0b465fda9d27a565df8\"" Jan 17 12:09:29.106693 containerd[1550]: time="2025-01-17T12:09:29.106677785Z" level=info msg="StartContainer for \"07504380db4300110b0c97bd9d14436e4c96b95b7d6af0b465fda9d27a565df8\"" Jan 17 12:09:29.129461 systemd[1]: Started cri-containerd-07504380db4300110b0c97bd9d14436e4c96b95b7d6af0b465fda9d27a565df8.scope - libcontainer container 07504380db4300110b0c97bd9d14436e4c96b95b7d6af0b465fda9d27a565df8. 
Jan 17 12:09:29.155195 containerd[1550]: time="2025-01-17T12:09:29.155103040Z" level=info msg="StartContainer for \"07504380db4300110b0c97bd9d14436e4c96b95b7d6af0b465fda9d27a565df8\" returns successfully" Jan 17 12:09:32.882398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2767266842.mount: Deactivated successfully. Jan 17 12:09:33.209236 containerd[1550]: time="2025-01-17T12:09:33.209151560Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:33.209673 containerd[1550]: time="2025-01-17T12:09:33.209612667Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764297" Jan 17 12:09:33.210046 containerd[1550]: time="2025-01-17T12:09:33.209927028Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:33.211183 containerd[1550]: time="2025-01-17T12:09:33.211168603Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:33.211636 containerd[1550]: time="2025-01-17T12:09:33.211619193Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.111615751s" Jan 17 12:09:33.211668 containerd[1550]: time="2025-01-17T12:09:33.211637796Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 17 12:09:33.228401 containerd[1550]: 
time="2025-01-17T12:09:33.228373537Z" level=info msg="CreateContainer within sandbox \"4fc375bf84017020063d5de152eec32c4e81beb2bb0489239b5ebd990ab53be2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:09:33.243557 containerd[1550]: time="2025-01-17T12:09:33.243489487Z" level=info msg="CreateContainer within sandbox \"4fc375bf84017020063d5de152eec32c4e81beb2bb0489239b5ebd990ab53be2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0fed0ed620dd373ab7cf15f662cd3f6bd0053aaac42f432fee34234e70144632\"" Jan 17 12:09:33.245160 containerd[1550]: time="2025-01-17T12:09:33.244272939Z" level=info msg="StartContainer for \"0fed0ed620dd373ab7cf15f662cd3f6bd0053aaac42f432fee34234e70144632\"" Jan 17 12:09:33.254994 kubelet[2790]: I0117 12:09:33.251168 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v5zmx" podStartSLOduration=5.2511565430000005 podStartE2EDuration="5.251156543s" podCreationTimestamp="2025-01-17 12:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:09:29.275024292 +0000 UTC m=+16.165939200" watchObservedRunningTime="2025-01-17 12:09:33.251156543 +0000 UTC m=+20.142071453" Jan 17 12:09:33.282394 systemd[1]: Started cri-containerd-0fed0ed620dd373ab7cf15f662cd3f6bd0053aaac42f432fee34234e70144632.scope - libcontainer container 0fed0ed620dd373ab7cf15f662cd3f6bd0053aaac42f432fee34234e70144632. 
Jan 17 12:09:33.299786 containerd[1550]: time="2025-01-17T12:09:33.299755510Z" level=info msg="StartContainer for \"0fed0ed620dd373ab7cf15f662cd3f6bd0053aaac42f432fee34234e70144632\" returns successfully" Jan 17 12:09:36.405180 kubelet[2790]: I0117 12:09:36.405144 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-2955j" podStartSLOduration=4.277370108 podStartE2EDuration="8.405131869s" podCreationTimestamp="2025-01-17 12:09:28 +0000 UTC" firstStartedPulling="2025-01-17 12:09:29.099731862 +0000 UTC m=+15.990646769" lastFinishedPulling="2025-01-17 12:09:33.227493615 +0000 UTC m=+20.118408530" observedRunningTime="2025-01-17 12:09:34.290319779 +0000 UTC m=+21.181234692" watchObservedRunningTime="2025-01-17 12:09:36.405131869 +0000 UTC m=+23.296046778" Jan 17 12:09:36.405484 kubelet[2790]: I0117 12:09:36.405225 2790 topology_manager.go:215] "Topology Admit Handler" podUID="a0a39ce9-07ac-4aea-8088-df013e7c715c" podNamespace="calico-system" podName="calico-typha-78b7959db9-r2pkv" Jan 17 12:09:36.432802 systemd[1]: Created slice kubepods-besteffort-poda0a39ce9_07ac_4aea_8088_df013e7c715c.slice - libcontainer container kubepods-besteffort-poda0a39ce9_07ac_4aea_8088_df013e7c715c.slice. 
Jan 17 12:09:36.483261 kubelet[2790]: I0117 12:09:36.483229 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a0a39ce9-07ac-4aea-8088-df013e7c715c-typha-certs\") pod \"calico-typha-78b7959db9-r2pkv\" (UID: \"a0a39ce9-07ac-4aea-8088-df013e7c715c\") " pod="calico-system/calico-typha-78b7959db9-r2pkv" Jan 17 12:09:36.483261 kubelet[2790]: I0117 12:09:36.483256 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb27m\" (UniqueName: \"kubernetes.io/projected/a0a39ce9-07ac-4aea-8088-df013e7c715c-kube-api-access-fb27m\") pod \"calico-typha-78b7959db9-r2pkv\" (UID: \"a0a39ce9-07ac-4aea-8088-df013e7c715c\") " pod="calico-system/calico-typha-78b7959db9-r2pkv" Jan 17 12:09:36.483261 kubelet[2790]: I0117 12:09:36.483270 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0a39ce9-07ac-4aea-8088-df013e7c715c-tigera-ca-bundle\") pod \"calico-typha-78b7959db9-r2pkv\" (UID: \"a0a39ce9-07ac-4aea-8088-df013e7c715c\") " pod="calico-system/calico-typha-78b7959db9-r2pkv" Jan 17 12:09:36.515613 kubelet[2790]: I0117 12:09:36.515579 2790 topology_manager.go:215] "Topology Admit Handler" podUID="99ada950-1331-4101-b65b-7184bd36b67d" podNamespace="calico-system" podName="calico-node-znvlg" Jan 17 12:09:36.519674 systemd[1]: Created slice kubepods-besteffort-pod99ada950_1331_4101_b65b_7184bd36b67d.slice - libcontainer container kubepods-besteffort-pod99ada950_1331_4101_b65b_7184bd36b67d.slice. 
Jan 17 12:09:36.584348 kubelet[2790]: I0117 12:09:36.584097 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n89rj\" (UniqueName: \"kubernetes.io/projected/99ada950-1331-4101-b65b-7184bd36b67d-kube-api-access-n89rj\") pod \"calico-node-znvlg\" (UID: \"99ada950-1331-4101-b65b-7184bd36b67d\") " pod="calico-system/calico-node-znvlg" Jan 17 12:09:36.584348 kubelet[2790]: I0117 12:09:36.584131 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99ada950-1331-4101-b65b-7184bd36b67d-lib-modules\") pod \"calico-node-znvlg\" (UID: \"99ada950-1331-4101-b65b-7184bd36b67d\") " pod="calico-system/calico-node-znvlg" Jan 17 12:09:36.584348 kubelet[2790]: I0117 12:09:36.584145 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99ada950-1331-4101-b65b-7184bd36b67d-tigera-ca-bundle\") pod \"calico-node-znvlg\" (UID: \"99ada950-1331-4101-b65b-7184bd36b67d\") " pod="calico-system/calico-node-znvlg" Jan 17 12:09:36.584348 kubelet[2790]: I0117 12:09:36.584153 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/99ada950-1331-4101-b65b-7184bd36b67d-var-run-calico\") pod \"calico-node-znvlg\" (UID: \"99ada950-1331-4101-b65b-7184bd36b67d\") " pod="calico-system/calico-node-znvlg" Jan 17 12:09:36.584348 kubelet[2790]: I0117 12:09:36.584163 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/99ada950-1331-4101-b65b-7184bd36b67d-cni-bin-dir\") pod \"calico-node-znvlg\" (UID: \"99ada950-1331-4101-b65b-7184bd36b67d\") " pod="calico-system/calico-node-znvlg" Jan 17 12:09:36.584507 kubelet[2790]: I0117 
12:09:36.584175 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99ada950-1331-4101-b65b-7184bd36b67d-xtables-lock\") pod \"calico-node-znvlg\" (UID: \"99ada950-1331-4101-b65b-7184bd36b67d\") " pod="calico-system/calico-node-znvlg" Jan 17 12:09:36.584507 kubelet[2790]: I0117 12:09:36.584195 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/99ada950-1331-4101-b65b-7184bd36b67d-flexvol-driver-host\") pod \"calico-node-znvlg\" (UID: \"99ada950-1331-4101-b65b-7184bd36b67d\") " pod="calico-system/calico-node-znvlg" Jan 17 12:09:36.584507 kubelet[2790]: I0117 12:09:36.584206 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/99ada950-1331-4101-b65b-7184bd36b67d-node-certs\") pod \"calico-node-znvlg\" (UID: \"99ada950-1331-4101-b65b-7184bd36b67d\") " pod="calico-system/calico-node-znvlg" Jan 17 12:09:36.584507 kubelet[2790]: I0117 12:09:36.584214 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/99ada950-1331-4101-b65b-7184bd36b67d-policysync\") pod \"calico-node-znvlg\" (UID: \"99ada950-1331-4101-b65b-7184bd36b67d\") " pod="calico-system/calico-node-znvlg" Jan 17 12:09:36.584507 kubelet[2790]: I0117 12:09:36.584224 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/99ada950-1331-4101-b65b-7184bd36b67d-var-lib-calico\") pod \"calico-node-znvlg\" (UID: \"99ada950-1331-4101-b65b-7184bd36b67d\") " pod="calico-system/calico-node-znvlg" Jan 17 12:09:36.584588 kubelet[2790]: I0117 12:09:36.584233 2790 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/99ada950-1331-4101-b65b-7184bd36b67d-cni-net-dir\") pod \"calico-node-znvlg\" (UID: \"99ada950-1331-4101-b65b-7184bd36b67d\") " pod="calico-system/calico-node-znvlg" Jan 17 12:09:36.584588 kubelet[2790]: I0117 12:09:36.584243 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/99ada950-1331-4101-b65b-7184bd36b67d-cni-log-dir\") pod \"calico-node-znvlg\" (UID: \"99ada950-1331-4101-b65b-7184bd36b67d\") " pod="calico-system/calico-node-znvlg" Jan 17 12:09:36.633765 kubelet[2790]: I0117 12:09:36.633737 2790 topology_manager.go:215] "Topology Admit Handler" podUID="e85f6860-3f73-4ce2-9930-90bfeb23aed3" podNamespace="calico-system" podName="csi-node-driver-w95gg" Jan 17 12:09:36.633923 kubelet[2790]: E0117 12:09:36.633908 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w95gg" podUID="e85f6860-3f73-4ce2-9930-90bfeb23aed3" Jan 17 12:09:36.684977 kubelet[2790]: I0117 12:09:36.684571 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e85f6860-3f73-4ce2-9930-90bfeb23aed3-socket-dir\") pod \"csi-node-driver-w95gg\" (UID: \"e85f6860-3f73-4ce2-9930-90bfeb23aed3\") " pod="calico-system/csi-node-driver-w95gg" Jan 17 12:09:36.687761 kubelet[2790]: I0117 12:09:36.685064 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e85f6860-3f73-4ce2-9930-90bfeb23aed3-kubelet-dir\") pod \"csi-node-driver-w95gg\" (UID: 
\"e85f6860-3f73-4ce2-9930-90bfeb23aed3\") " pod="calico-system/csi-node-driver-w95gg" Jan 17 12:09:36.687761 kubelet[2790]: I0117 12:09:36.687041 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e85f6860-3f73-4ce2-9930-90bfeb23aed3-registration-dir\") pod \"csi-node-driver-w95gg\" (UID: \"e85f6860-3f73-4ce2-9930-90bfeb23aed3\") " pod="calico-system/csi-node-driver-w95gg" Jan 17 12:09:36.687761 kubelet[2790]: I0117 12:09:36.687057 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e85f6860-3f73-4ce2-9930-90bfeb23aed3-varrun\") pod \"csi-node-driver-w95gg\" (UID: \"e85f6860-3f73-4ce2-9930-90bfeb23aed3\") " pod="calico-system/csi-node-driver-w95gg" Jan 17 12:09:36.687761 kubelet[2790]: I0117 12:09:36.687068 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znzjg\" (UniqueName: \"kubernetes.io/projected/e85f6860-3f73-4ce2-9930-90bfeb23aed3-kube-api-access-znzjg\") pod \"csi-node-driver-w95gg\" (UID: \"e85f6860-3f73-4ce2-9930-90bfeb23aed3\") " pod="calico-system/csi-node-driver-w95gg" Jan 17 12:09:36.787022 containerd[1550]: time="2025-01-17T12:09:36.786871597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78b7959db9-r2pkv,Uid:a0a39ce9-07ac-4aea-8088-df013e7c715c,Namespace:calico-system,Attempt:0,}" Jan 17 12:09:36.788186 kubelet[2790]: E0117 12:09:36.788086 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.788186 kubelet[2790]: W0117 12:09:36.788099 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.797115 
kubelet[2790]: E0117 12:09:36.796970 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:36.797335 kubelet[2790]: E0117 12:09:36.797326 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.797448 kubelet[2790]: W0117 12:09:36.797377 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.797448 kubelet[2790]: E0117 12:09:36.797396 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:36.797566 kubelet[2790]: E0117 12:09:36.797558 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.797626 kubelet[2790]: W0117 12:09:36.797597 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.797626 kubelet[2790]: E0117 12:09:36.797605 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:36.797971 kubelet[2790]: E0117 12:09:36.797727 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.797971 kubelet[2790]: W0117 12:09:36.797738 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.797971 kubelet[2790]: E0117 12:09:36.797763 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:36.797971 kubelet[2790]: E0117 12:09:36.797953 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.797971 kubelet[2790]: W0117 12:09:36.797958 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.797971 kubelet[2790]: E0117 12:09:36.797964 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:36.798172 kubelet[2790]: E0117 12:09:36.798161 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.798172 kubelet[2790]: W0117 12:09:36.798169 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.798331 kubelet[2790]: E0117 12:09:36.798176 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:36.798829 kubelet[2790]: E0117 12:09:36.798734 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.798829 kubelet[2790]: W0117 12:09:36.798741 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.798829 kubelet[2790]: E0117 12:09:36.798748 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:36.798959 kubelet[2790]: E0117 12:09:36.798908 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.798959 kubelet[2790]: W0117 12:09:36.798914 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.798959 kubelet[2790]: E0117 12:09:36.798919 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:36.800294 kubelet[2790]: E0117 12:09:36.799159 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.800294 kubelet[2790]: W0117 12:09:36.799166 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.800294 kubelet[2790]: E0117 12:09:36.799172 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:36.800533 kubelet[2790]: E0117 12:09:36.800459 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.800533 kubelet[2790]: W0117 12:09:36.800469 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.800533 kubelet[2790]: E0117 12:09:36.800481 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:36.800772 kubelet[2790]: E0117 12:09:36.800704 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.800772 kubelet[2790]: W0117 12:09:36.800713 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.800772 kubelet[2790]: E0117 12:09:36.800721 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:36.801036 kubelet[2790]: E0117 12:09:36.800928 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.801036 kubelet[2790]: W0117 12:09:36.800935 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.801036 kubelet[2790]: E0117 12:09:36.800941 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:36.801321 kubelet[2790]: E0117 12:09:36.801125 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.801321 kubelet[2790]: W0117 12:09:36.801131 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.801321 kubelet[2790]: E0117 12:09:36.801137 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:36.801454 kubelet[2790]: E0117 12:09:36.801447 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.801502 kubelet[2790]: W0117 12:09:36.801495 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.801612 kubelet[2790]: E0117 12:09:36.801542 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:36.801859 kubelet[2790]: E0117 12:09:36.801804 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.801859 kubelet[2790]: W0117 12:09:36.801817 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.801859 kubelet[2790]: E0117 12:09:36.801825 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:36.802226 kubelet[2790]: E0117 12:09:36.802064 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.802226 kubelet[2790]: W0117 12:09:36.802072 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.802226 kubelet[2790]: E0117 12:09:36.802080 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:36.802547 kubelet[2790]: E0117 12:09:36.802474 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.802547 kubelet[2790]: W0117 12:09:36.802481 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.802547 kubelet[2790]: E0117 12:09:36.802487 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:36.803601 kubelet[2790]: E0117 12:09:36.802821 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.803601 kubelet[2790]: W0117 12:09:36.802828 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.803601 kubelet[2790]: E0117 12:09:36.802837 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:36.803601 kubelet[2790]: E0117 12:09:36.803501 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.803601 kubelet[2790]: W0117 12:09:36.803508 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.803601 kubelet[2790]: E0117 12:09:36.803516 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:36.804398 kubelet[2790]: E0117 12:09:36.804342 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.804398 kubelet[2790]: W0117 12:09:36.804350 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.804398 kubelet[2790]: E0117 12:09:36.804357 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:36.804525 kubelet[2790]: E0117 12:09:36.804519 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.804609 kubelet[2790]: W0117 12:09:36.804558 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.804609 kubelet[2790]: E0117 12:09:36.804566 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:36.804706 kubelet[2790]: E0117 12:09:36.804700 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.804738 kubelet[2790]: W0117 12:09:36.804733 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.804828 kubelet[2790]: E0117 12:09:36.804761 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:36.804916 kubelet[2790]: E0117 12:09:36.804910 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.804951 kubelet[2790]: W0117 12:09:36.804946 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.804983 kubelet[2790]: E0117 12:09:36.804978 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:36.805220 kubelet[2790]: E0117 12:09:36.805085 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.805220 kubelet[2790]: W0117 12:09:36.805091 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.805220 kubelet[2790]: E0117 12:09:36.805096 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:36.807125 kubelet[2790]: E0117 12:09:36.807053 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.807125 kubelet[2790]: W0117 12:09:36.807060 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.807125 kubelet[2790]: E0117 12:09:36.807069 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:36.807330 kubelet[2790]: E0117 12:09:36.807265 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.807330 kubelet[2790]: W0117 12:09:36.807272 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.807395 kubelet[2790]: E0117 12:09:36.807381 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:36.807664 kubelet[2790]: E0117 12:09:36.807639 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.807664 kubelet[2790]: W0117 12:09:36.807645 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.807664 kubelet[2790]: E0117 12:09:36.807651 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:36.824061 containerd[1550]: time="2025-01-17T12:09:36.823819742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-znvlg,Uid:99ada950-1331-4101-b65b-7184bd36b67d,Namespace:calico-system,Attempt:0,}" Jan 17 12:09:36.831342 kubelet[2790]: E0117 12:09:36.831295 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:36.831342 kubelet[2790]: W0117 12:09:36.831307 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:36.831342 kubelet[2790]: E0117 12:09:36.831319 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:36.872136 containerd[1550]: time="2025-01-17T12:09:36.871911409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:09:36.872136 containerd[1550]: time="2025-01-17T12:09:36.871977491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:09:36.872136 containerd[1550]: time="2025-01-17T12:09:36.871988857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:36.872136 containerd[1550]: time="2025-01-17T12:09:36.872080880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:36.889420 systemd[1]: Started cri-containerd-ab5110e809cd7430fd0accfc3ee608446928ad65afb59878b47ae0128920f075.scope - libcontainer container ab5110e809cd7430fd0accfc3ee608446928ad65afb59878b47ae0128920f075. Jan 17 12:09:36.915302 containerd[1550]: time="2025-01-17T12:09:36.914957499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:09:36.916314 containerd[1550]: time="2025-01-17T12:09:36.915401268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:09:36.916314 containerd[1550]: time="2025-01-17T12:09:36.915426875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:36.916314 containerd[1550]: time="2025-01-17T12:09:36.915496028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:36.931396 systemd[1]: Started cri-containerd-c84ed212416e292ef4cd3933cbd5ecd1faf1f0478495710d8cbd4583c3f52843.scope - libcontainer container c84ed212416e292ef4cd3933cbd5ecd1faf1f0478495710d8cbd4583c3f52843. 
Jan 17 12:09:36.940223 containerd[1550]: time="2025-01-17T12:09:36.939989413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78b7959db9-r2pkv,Uid:a0a39ce9-07ac-4aea-8088-df013e7c715c,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab5110e809cd7430fd0accfc3ee608446928ad65afb59878b47ae0128920f075\"" Jan 17 12:09:36.942657 containerd[1550]: time="2025-01-17T12:09:36.942634093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:09:36.952290 containerd[1550]: time="2025-01-17T12:09:36.952218426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-znvlg,Uid:99ada950-1331-4101-b65b-7184bd36b67d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c84ed212416e292ef4cd3933cbd5ecd1faf1f0478495710d8cbd4583c3f52843\"" Jan 17 12:09:38.230278 kubelet[2790]: E0117 12:09:38.230211 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w95gg" podUID="e85f6860-3f73-4ce2-9930-90bfeb23aed3" Jan 17 12:09:38.861121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount384679588.mount: Deactivated successfully. 
Jan 17 12:09:39.372343 containerd[1550]: time="2025-01-17T12:09:39.372093085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:39.378941 containerd[1550]: time="2025-01-17T12:09:39.378890180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 17 12:09:39.385743 containerd[1550]: time="2025-01-17T12:09:39.385693213Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:39.584324 containerd[1550]: time="2025-01-17T12:09:39.583879999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:39.584419 containerd[1550]: time="2025-01-17T12:09:39.584409209Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.641737707s" Jan 17 12:09:39.584445 containerd[1550]: time="2025-01-17T12:09:39.584424832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 17 12:09:39.585668 containerd[1550]: time="2025-01-17T12:09:39.585402079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:09:39.601524 containerd[1550]: time="2025-01-17T12:09:39.601490056Z" level=info msg="CreateContainer within sandbox \"ab5110e809cd7430fd0accfc3ee608446928ad65afb59878b47ae0128920f075\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:09:39.653870 containerd[1550]: time="2025-01-17T12:09:39.653769481Z" level=info msg="CreateContainer within sandbox \"ab5110e809cd7430fd0accfc3ee608446928ad65afb59878b47ae0128920f075\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e922581235e0999d2a8e7488ac03ce34f7fd29d2991d29dba896ce92e2331ed5\"" Jan 17 12:09:39.654512 containerd[1550]: time="2025-01-17T12:09:39.654347126Z" level=info msg="StartContainer for \"e922581235e0999d2a8e7488ac03ce34f7fd29d2991d29dba896ce92e2331ed5\"" Jan 17 12:09:39.677439 systemd[1]: Started cri-containerd-e922581235e0999d2a8e7488ac03ce34f7fd29d2991d29dba896ce92e2331ed5.scope - libcontainer container e922581235e0999d2a8e7488ac03ce34f7fd29d2991d29dba896ce92e2331ed5. Jan 17 12:09:39.719438 containerd[1550]: time="2025-01-17T12:09:39.719388513Z" level=info msg="StartContainer for \"e922581235e0999d2a8e7488ac03ce34f7fd29d2991d29dba896ce92e2331ed5\" returns successfully" Jan 17 12:09:40.230061 kubelet[2790]: E0117 12:09:40.230001 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w95gg" podUID="e85f6860-3f73-4ce2-9930-90bfeb23aed3" Jan 17 12:09:40.389772 kubelet[2790]: E0117 12:09:40.389667 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.389772 kubelet[2790]: W0117 12:09:40.389691 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.389772 kubelet[2790]: E0117 12:09:40.389710 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.390464 kubelet[2790]: E0117 12:09:40.389997 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.390464 kubelet[2790]: W0117 12:09:40.390005 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.390464 kubelet[2790]: E0117 12:09:40.390015 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.390464 kubelet[2790]: E0117 12:09:40.390208 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.390464 kubelet[2790]: W0117 12:09:40.390215 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.390464 kubelet[2790]: E0117 12:09:40.390223 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:40.390909 kubelet[2790]: E0117 12:09:40.390780 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.390909 kubelet[2790]: W0117 12:09:40.390792 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.390909 kubelet[2790]: E0117 12:09:40.390803 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.391297 kubelet[2790]: E0117 12:09:40.390982 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.391297 kubelet[2790]: W0117 12:09:40.390989 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.391297 kubelet[2790]: E0117 12:09:40.390998 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:40.391297 kubelet[2790]: E0117 12:09:40.391233 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.391297 kubelet[2790]: W0117 12:09:40.391240 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.391297 kubelet[2790]: E0117 12:09:40.391249 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.391831 kubelet[2790]: E0117 12:09:40.391650 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.391831 kubelet[2790]: W0117 12:09:40.391660 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.391831 kubelet[2790]: E0117 12:09:40.391666 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:40.392062 kubelet[2790]: E0117 12:09:40.391956 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.392062 kubelet[2790]: W0117 12:09:40.391966 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.392062 kubelet[2790]: E0117 12:09:40.391975 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.392271 kubelet[2790]: E0117 12:09:40.392217 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.392271 kubelet[2790]: W0117 12:09:40.392224 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.392271 kubelet[2790]: E0117 12:09:40.392233 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:40.392573 kubelet[2790]: E0117 12:09:40.392522 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.392573 kubelet[2790]: W0117 12:09:40.392530 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.392573 kubelet[2790]: E0117 12:09:40.392539 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.392854 kubelet[2790]: E0117 12:09:40.392773 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.392854 kubelet[2790]: W0117 12:09:40.392781 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.392854 kubelet[2790]: E0117 12:09:40.392791 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:40.393079 kubelet[2790]: E0117 12:09:40.392994 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.393079 kubelet[2790]: W0117 12:09:40.393001 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.393079 kubelet[2790]: E0117 12:09:40.393009 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.393421 kubelet[2790]: E0117 12:09:40.393321 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.393421 kubelet[2790]: W0117 12:09:40.393331 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.393421 kubelet[2790]: E0117 12:09:40.393339 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:40.393631 kubelet[2790]: E0117 12:09:40.393575 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.393631 kubelet[2790]: W0117 12:09:40.393585 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.393631 kubelet[2790]: E0117 12:09:40.393594 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.393939 kubelet[2790]: E0117 12:09:40.393879 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.393939 kubelet[2790]: W0117 12:09:40.393890 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.393939 kubelet[2790]: E0117 12:09:40.393898 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:40.424259 kubelet[2790]: E0117 12:09:40.424186 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.424259 kubelet[2790]: W0117 12:09:40.424204 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.424259 kubelet[2790]: E0117 12:09:40.424221 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.424463 kubelet[2790]: E0117 12:09:40.424389 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.424463 kubelet[2790]: W0117 12:09:40.424394 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.424463 kubelet[2790]: E0117 12:09:40.424400 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:40.424582 kubelet[2790]: E0117 12:09:40.424516 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.424582 kubelet[2790]: W0117 12:09:40.424521 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.424582 kubelet[2790]: E0117 12:09:40.424528 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.424739 kubelet[2790]: E0117 12:09:40.424674 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.424739 kubelet[2790]: W0117 12:09:40.424682 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.424739 kubelet[2790]: E0117 12:09:40.424690 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:40.424839 kubelet[2790]: E0117 12:09:40.424797 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.424839 kubelet[2790]: W0117 12:09:40.424803 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.424839 kubelet[2790]: E0117 12:09:40.424819 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.424958 kubelet[2790]: E0117 12:09:40.424943 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.424958 kubelet[2790]: W0117 12:09:40.424954 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.425050 kubelet[2790]: E0117 12:09:40.424965 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:40.425107 kubelet[2790]: E0117 12:09:40.425095 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.425135 kubelet[2790]: W0117 12:09:40.425107 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.425135 kubelet[2790]: E0117 12:09:40.425122 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.425540 kubelet[2790]: E0117 12:09:40.425528 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.425540 kubelet[2790]: W0117 12:09:40.425538 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.425686 kubelet[2790]: E0117 12:09:40.425545 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:40.425686 kubelet[2790]: E0117 12:09:40.425662 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.425686 kubelet[2790]: W0117 12:09:40.425666 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.425686 kubelet[2790]: E0117 12:09:40.425675 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.425790 kubelet[2790]: E0117 12:09:40.425778 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.425790 kubelet[2790]: W0117 12:09:40.425783 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.426107 kubelet[2790]: E0117 12:09:40.425854 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:40.426107 kubelet[2790]: E0117 12:09:40.425911 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.426107 kubelet[2790]: W0117 12:09:40.425915 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.426107 kubelet[2790]: E0117 12:09:40.425923 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.426310 kubelet[2790]: E0117 12:09:40.426190 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.426310 kubelet[2790]: W0117 12:09:40.426200 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.426310 kubelet[2790]: E0117 12:09:40.426214 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:40.426591 kubelet[2790]: E0117 12:09:40.426453 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.426591 kubelet[2790]: W0117 12:09:40.426463 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.426591 kubelet[2790]: E0117 12:09:40.426477 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.426670 kubelet[2790]: E0117 12:09:40.426622 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.426670 kubelet[2790]: W0117 12:09:40.426627 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.426670 kubelet[2790]: E0117 12:09:40.426640 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:40.426732 kubelet[2790]: E0117 12:09:40.426724 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.426732 kubelet[2790]: W0117 12:09:40.426729 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.426768 kubelet[2790]: E0117 12:09:40.426737 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.426854 kubelet[2790]: E0117 12:09:40.426842 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.426854 kubelet[2790]: W0117 12:09:40.426850 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.426854 kubelet[2790]: E0117 12:09:40.426858 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:40.427115 kubelet[2790]: E0117 12:09:40.427038 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.427115 kubelet[2790]: W0117 12:09:40.427046 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.427115 kubelet[2790]: E0117 12:09:40.427060 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:40.427226 kubelet[2790]: E0117 12:09:40.427217 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:40.427277 kubelet[2790]: W0117 12:09:40.427256 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:40.427277 kubelet[2790]: E0117 12:09:40.427265 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.267654 containerd[1550]: time="2025-01-17T12:09:41.267557046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:41.272477 containerd[1550]: time="2025-01-17T12:09:41.272439108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 17 12:09:41.277436 containerd[1550]: time="2025-01-17T12:09:41.277403427Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:41.281626 containerd[1550]: time="2025-01-17T12:09:41.281574086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:41.282256 containerd[1550]: time="2025-01-17T12:09:41.281987083Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.696556099s" Jan 17 12:09:41.282256 containerd[1550]: time="2025-01-17T12:09:41.282012276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 17 12:09:41.284212 containerd[1550]: time="2025-01-17T12:09:41.284034952Z" level=info msg="CreateContainer within sandbox \"c84ed212416e292ef4cd3933cbd5ecd1faf1f0478495710d8cbd4583c3f52843\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:09:41.297757 kubelet[2790]: I0117 12:09:41.297715 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:09:41.298310 containerd[1550]: time="2025-01-17T12:09:41.298185713Z" level=info msg="CreateContainer within sandbox \"c84ed212416e292ef4cd3933cbd5ecd1faf1f0478495710d8cbd4583c3f52843\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2a35b25b2b2f098501de8b7b24596d6756c578c2a94d433ff1b45098dac39134\"" Jan 17 12:09:41.298806 containerd[1550]: time="2025-01-17T12:09:41.298783372Z" level=info msg="StartContainer for \"2a35b25b2b2f098501de8b7b24596d6756c578c2a94d433ff1b45098dac39134\"" Jan 17 12:09:41.300018 kubelet[2790]: E0117 12:09:41.299890 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.300018 kubelet[2790]: W0117 12:09:41.299901 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.300018 kubelet[2790]: E0117 12:09:41.299912 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.300363 kubelet[2790]: E0117 12:09:41.300238 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.300363 kubelet[2790]: W0117 12:09:41.300245 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.300363 kubelet[2790]: E0117 12:09:41.300252 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.300611 kubelet[2790]: E0117 12:09:41.300513 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.300611 kubelet[2790]: W0117 12:09:41.300518 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.300611 kubelet[2790]: E0117 12:09:41.300524 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.301298 kubelet[2790]: E0117 12:09:41.301101 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.301298 kubelet[2790]: W0117 12:09:41.301108 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.301298 kubelet[2790]: E0117 12:09:41.301115 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.301477 kubelet[2790]: E0117 12:09:41.301383 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.301477 kubelet[2790]: W0117 12:09:41.301388 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.301477 kubelet[2790]: E0117 12:09:41.301394 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.302036 kubelet[2790]: E0117 12:09:41.301647 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.302036 kubelet[2790]: W0117 12:09:41.301653 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.302036 kubelet[2790]: E0117 12:09:41.301697 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.302036 kubelet[2790]: E0117 12:09:41.301836 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.302036 kubelet[2790]: W0117 12:09:41.301841 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.302036 kubelet[2790]: E0117 12:09:41.301846 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.302036 kubelet[2790]: E0117 12:09:41.301944 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.302036 kubelet[2790]: W0117 12:09:41.301948 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.302036 kubelet[2790]: E0117 12:09:41.301953 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.302269 kubelet[2790]: E0117 12:09:41.302210 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.302269 kubelet[2790]: W0117 12:09:41.302215 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.302269 kubelet[2790]: E0117 12:09:41.302220 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.302462 kubelet[2790]: E0117 12:09:41.302323 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.302462 kubelet[2790]: W0117 12:09:41.302328 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.302462 kubelet[2790]: E0117 12:09:41.302333 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.302618 kubelet[2790]: E0117 12:09:41.302510 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.302618 kubelet[2790]: W0117 12:09:41.302515 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.302618 kubelet[2790]: E0117 12:09:41.302519 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.302770 kubelet[2790]: E0117 12:09:41.302670 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.302770 kubelet[2790]: W0117 12:09:41.302674 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.302770 kubelet[2790]: E0117 12:09:41.302679 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.302897 kubelet[2790]: E0117 12:09:41.302862 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.302897 kubelet[2790]: W0117 12:09:41.302868 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.302897 kubelet[2790]: E0117 12:09:41.302874 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.303149 kubelet[2790]: E0117 12:09:41.303085 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.303149 kubelet[2790]: W0117 12:09:41.303090 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.303149 kubelet[2790]: E0117 12:09:41.303096 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.303262 kubelet[2790]: E0117 12:09:41.303201 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.303262 kubelet[2790]: W0117 12:09:41.303205 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.303262 kubelet[2790]: E0117 12:09:41.303210 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.316462 systemd[1]: run-containerd-runc-k8s.io-2a35b25b2b2f098501de8b7b24596d6756c578c2a94d433ff1b45098dac39134-runc.oY3G73.mount: Deactivated successfully. Jan 17 12:09:41.323411 systemd[1]: Started cri-containerd-2a35b25b2b2f098501de8b7b24596d6756c578c2a94d433ff1b45098dac39134.scope - libcontainer container 2a35b25b2b2f098501de8b7b24596d6756c578c2a94d433ff1b45098dac39134. 
Jan 17 12:09:41.330534 kubelet[2790]: E0117 12:09:41.330439 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.330534 kubelet[2790]: W0117 12:09:41.330452 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.330534 kubelet[2790]: E0117 12:09:41.330464 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.330928 kubelet[2790]: E0117 12:09:41.330756 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.330928 kubelet[2790]: W0117 12:09:41.330763 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.330928 kubelet[2790]: E0117 12:09:41.330904 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.331241 kubelet[2790]: E0117 12:09:41.331230 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.331274 kubelet[2790]: W0117 12:09:41.331241 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.331274 kubelet[2790]: E0117 12:09:41.331253 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.332241 kubelet[2790]: E0117 12:09:41.331676 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.332241 kubelet[2790]: W0117 12:09:41.331683 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.332241 kubelet[2790]: E0117 12:09:41.331692 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.332241 kubelet[2790]: E0117 12:09:41.331783 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.332241 kubelet[2790]: W0117 12:09:41.331787 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.332241 kubelet[2790]: E0117 12:09:41.331792 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.332241 kubelet[2790]: E0117 12:09:41.331904 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.332241 kubelet[2790]: W0117 12:09:41.331908 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.332241 kubelet[2790]: E0117 12:09:41.331915 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.332241 kubelet[2790]: E0117 12:09:41.332204 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.332545 kubelet[2790]: W0117 12:09:41.332209 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.332545 kubelet[2790]: E0117 12:09:41.332213 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.332545 kubelet[2790]: E0117 12:09:41.332336 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.332545 kubelet[2790]: W0117 12:09:41.332340 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.332545 kubelet[2790]: E0117 12:09:41.332348 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.332545 kubelet[2790]: E0117 12:09:41.332463 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.332545 kubelet[2790]: W0117 12:09:41.332467 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.332545 kubelet[2790]: E0117 12:09:41.332472 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.332670 kubelet[2790]: E0117 12:09:41.332559 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.332670 kubelet[2790]: W0117 12:09:41.332574 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.332670 kubelet[2790]: E0117 12:09:41.332579 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.332670 kubelet[2790]: E0117 12:09:41.332661 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.332670 kubelet[2790]: W0117 12:09:41.332666 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.332670 kubelet[2790]: E0117 12:09:41.332670 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.332985 kubelet[2790]: E0117 12:09:41.332766 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.332985 kubelet[2790]: W0117 12:09:41.332772 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.332985 kubelet[2790]: E0117 12:09:41.332777 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.333167 kubelet[2790]: E0117 12:09:41.333157 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.333167 kubelet[2790]: W0117 12:09:41.333165 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.333228 kubelet[2790]: E0117 12:09:41.333217 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.333378 kubelet[2790]: E0117 12:09:41.333365 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.333378 kubelet[2790]: W0117 12:09:41.333375 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.333441 kubelet[2790]: E0117 12:09:41.333382 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.333539 kubelet[2790]: E0117 12:09:41.333527 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.333539 kubelet[2790]: W0117 12:09:41.333536 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.333993 kubelet[2790]: E0117 12:09:41.333541 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.333993 kubelet[2790]: E0117 12:09:41.333813 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.333993 kubelet[2790]: W0117 12:09:41.333818 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.333993 kubelet[2790]: E0117 12:09:41.333823 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.334102 kubelet[2790]: E0117 12:09:41.334040 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.334102 kubelet[2790]: W0117 12:09:41.334046 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.334102 kubelet[2790]: E0117 12:09:41.334052 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:09:41.334366 kubelet[2790]: E0117 12:09:41.334354 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:09:41.334366 kubelet[2790]: W0117 12:09:41.334362 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:09:41.334422 kubelet[2790]: E0117 12:09:41.334368 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:09:41.341592 containerd[1550]: time="2025-01-17T12:09:41.341565311Z" level=info msg="StartContainer for \"2a35b25b2b2f098501de8b7b24596d6756c578c2a94d433ff1b45098dac39134\" returns successfully" Jan 17 12:09:41.351601 systemd[1]: cri-containerd-2a35b25b2b2f098501de8b7b24596d6756c578c2a94d433ff1b45098dac39134.scope: Deactivated successfully. 
Jan 17 12:09:41.677729 containerd[1550]: time="2025-01-17T12:09:41.662567878Z" level=info msg="shim disconnected" id=2a35b25b2b2f098501de8b7b24596d6756c578c2a94d433ff1b45098dac39134 namespace=k8s.io Jan 17 12:09:41.677729 containerd[1550]: time="2025-01-17T12:09:41.677574096Z" level=warning msg="cleaning up after shim disconnected" id=2a35b25b2b2f098501de8b7b24596d6756c578c2a94d433ff1b45098dac39134 namespace=k8s.io Jan 17 12:09:41.677729 containerd[1550]: time="2025-01-17T12:09:41.677584703Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:09:42.229869 kubelet[2790]: E0117 12:09:42.229837 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w95gg" podUID="e85f6860-3f73-4ce2-9930-90bfeb23aed3" Jan 17 12:09:42.289585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a35b25b2b2f098501de8b7b24596d6756c578c2a94d433ff1b45098dac39134-rootfs.mount: Deactivated successfully. 
Jan 17 12:09:42.303267 containerd[1550]: time="2025-01-17T12:09:42.303088645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:09:42.314328 kubelet[2790]: I0117 12:09:42.314149 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-78b7959db9-r2pkv" podStartSLOduration=3.671114755 podStartE2EDuration="6.314135519s" podCreationTimestamp="2025-01-17 12:09:36 +0000 UTC" firstStartedPulling="2025-01-17 12:09:36.942062874 +0000 UTC m=+23.832977778" lastFinishedPulling="2025-01-17 12:09:39.585083638 +0000 UTC m=+26.475998542" observedRunningTime="2025-01-17 12:09:40.30619669 +0000 UTC m=+27.197111603" watchObservedRunningTime="2025-01-17 12:09:42.314135519 +0000 UTC m=+29.205050427" Jan 17 12:09:44.230201 kubelet[2790]: E0117 12:09:44.229924 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w95gg" podUID="e85f6860-3f73-4ce2-9930-90bfeb23aed3" Jan 17 12:09:45.680130 containerd[1550]: time="2025-01-17T12:09:45.680094257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:45.680914 containerd[1550]: time="2025-01-17T12:09:45.680880027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:09:45.681567 containerd[1550]: time="2025-01-17T12:09:45.681550895Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:45.682810 containerd[1550]: time="2025-01-17T12:09:45.682786347Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:45.683694 containerd[1550]: time="2025-01-17T12:09:45.683591350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.38035606s" Jan 17 12:09:45.683694 containerd[1550]: time="2025-01-17T12:09:45.683611111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:09:45.685982 containerd[1550]: time="2025-01-17T12:09:45.685870000Z" level=info msg="CreateContainer within sandbox \"c84ed212416e292ef4cd3933cbd5ecd1faf1f0478495710d8cbd4583c3f52843\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:09:45.699656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227862904.mount: Deactivated successfully. Jan 17 12:09:45.713546 containerd[1550]: time="2025-01-17T12:09:45.713517714Z" level=info msg="CreateContainer within sandbox \"c84ed212416e292ef4cd3933cbd5ecd1faf1f0478495710d8cbd4583c3f52843\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"67d2738a7c9ae8b7833bf13143780cc7c86ac18aa0c0a158d62680697ad3cf57\"" Jan 17 12:09:45.714323 containerd[1550]: time="2025-01-17T12:09:45.714308823Z" level=info msg="StartContainer for \"67d2738a7c9ae8b7833bf13143780cc7c86ac18aa0c0a158d62680697ad3cf57\"" Jan 17 12:09:45.761420 systemd[1]: Started cri-containerd-67d2738a7c9ae8b7833bf13143780cc7c86ac18aa0c0a158d62680697ad3cf57.scope - libcontainer container 67d2738a7c9ae8b7833bf13143780cc7c86ac18aa0c0a158d62680697ad3cf57. 
Jan 17 12:09:45.778506 containerd[1550]: time="2025-01-17T12:09:45.778481469Z" level=info msg="StartContainer for \"67d2738a7c9ae8b7833bf13143780cc7c86ac18aa0c0a158d62680697ad3cf57\" returns successfully" Jan 17 12:09:46.230151 kubelet[2790]: E0117 12:09:46.230116 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w95gg" podUID="e85f6860-3f73-4ce2-9930-90bfeb23aed3" Jan 17 12:09:46.966012 systemd[1]: cri-containerd-67d2738a7c9ae8b7833bf13143780cc7c86ac18aa0c0a158d62680697ad3cf57.scope: Deactivated successfully. Jan 17 12:09:46.991646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67d2738a7c9ae8b7833bf13143780cc7c86ac18aa0c0a158d62680697ad3cf57-rootfs.mount: Deactivated successfully. Jan 17 12:09:46.992939 kubelet[2790]: I0117 12:09:46.991750 2790 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:09:46.998541 containerd[1550]: time="2025-01-17T12:09:46.995133876Z" level=info msg="shim disconnected" id=67d2738a7c9ae8b7833bf13143780cc7c86ac18aa0c0a158d62680697ad3cf57 namespace=k8s.io Jan 17 12:09:46.998991 containerd[1550]: time="2025-01-17T12:09:46.998517382Z" level=warning msg="cleaning up after shim disconnected" id=67d2738a7c9ae8b7833bf13143780cc7c86ac18aa0c0a158d62680697ad3cf57 namespace=k8s.io Jan 17 12:09:46.998991 containerd[1550]: time="2025-01-17T12:09:46.998787486Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:09:47.023792 kubelet[2790]: I0117 12:09:47.023671 2790 topology_manager.go:215] "Topology Admit Handler" podUID="63fda226-0d46-4fc7-a365-e22d4880c1d7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8t4h2" Jan 17 12:09:47.028669 systemd[1]: Created slice kubepods-burstable-pod63fda226_0d46_4fc7_a365_e22d4880c1d7.slice - libcontainer container 
kubepods-burstable-pod63fda226_0d46_4fc7_a365_e22d4880c1d7.slice. Jan 17 12:09:47.034120 kubelet[2790]: I0117 12:09:47.033023 2790 topology_manager.go:215] "Topology Admit Handler" podUID="6c968e21-0471-4da2-80c9-10bfe6dba99c" podNamespace="calico-apiserver" podName="calico-apiserver-67c6c8b9b4-th4kr" Jan 17 12:09:47.037306 kubelet[2790]: I0117 12:09:47.037007 2790 topology_manager.go:215] "Topology Admit Handler" podUID="fff26461-98e7-492f-8649-772ea0be885d" podNamespace="calico-system" podName="calico-kube-controllers-786d6d6cf9-nwzjn" Jan 17 12:09:47.037306 kubelet[2790]: I0117 12:09:47.037124 2790 topology_manager.go:215] "Topology Admit Handler" podUID="e52c0d81-2873-48ab-812d-7481f2bdd74f" podNamespace="calico-apiserver" podName="calico-apiserver-67c6c8b9b4-svk24" Jan 17 12:09:47.037410 kubelet[2790]: I0117 12:09:47.037321 2790 topology_manager.go:215] "Topology Admit Handler" podUID="8a5ae03d-0277-4872-b684-ccf00f39afa3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-974z6" Jan 17 12:09:47.041297 systemd[1]: Created slice kubepods-besteffort-pod6c968e21_0471_4da2_80c9_10bfe6dba99c.slice - libcontainer container kubepods-besteffort-pod6c968e21_0471_4da2_80c9_10bfe6dba99c.slice. Jan 17 12:09:47.046266 systemd[1]: Created slice kubepods-burstable-pod8a5ae03d_0277_4872_b684_ccf00f39afa3.slice - libcontainer container kubepods-burstable-pod8a5ae03d_0277_4872_b684_ccf00f39afa3.slice. Jan 17 12:09:47.050100 systemd[1]: Created slice kubepods-besteffort-podfff26461_98e7_492f_8649_772ea0be885d.slice - libcontainer container kubepods-besteffort-podfff26461_98e7_492f_8649_772ea0be885d.slice. Jan 17 12:09:47.057540 systemd[1]: Created slice kubepods-besteffort-pode52c0d81_2873_48ab_812d_7481f2bdd74f.slice - libcontainer container kubepods-besteffort-pode52c0d81_2873_48ab_812d_7481f2bdd74f.slice. 
Jan 17 12:09:47.066609 kubelet[2790]: I0117 12:09:47.066580 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a5ae03d-0277-4872-b684-ccf00f39afa3-config-volume\") pod \"coredns-7db6d8ff4d-974z6\" (UID: \"8a5ae03d-0277-4872-b684-ccf00f39afa3\") " pod="kube-system/coredns-7db6d8ff4d-974z6" Jan 17 12:09:47.066609 kubelet[2790]: I0117 12:09:47.066605 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvqxr\" (UniqueName: \"kubernetes.io/projected/6c968e21-0471-4da2-80c9-10bfe6dba99c-kube-api-access-nvqxr\") pod \"calico-apiserver-67c6c8b9b4-th4kr\" (UID: \"6c968e21-0471-4da2-80c9-10bfe6dba99c\") " pod="calico-apiserver/calico-apiserver-67c6c8b9b4-th4kr" Jan 17 12:09:47.066714 kubelet[2790]: I0117 12:09:47.066622 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e52c0d81-2873-48ab-812d-7481f2bdd74f-calico-apiserver-certs\") pod \"calico-apiserver-67c6c8b9b4-svk24\" (UID: \"e52c0d81-2873-48ab-812d-7481f2bdd74f\") " pod="calico-apiserver/calico-apiserver-67c6c8b9b4-svk24" Jan 17 12:09:47.066714 kubelet[2790]: I0117 12:09:47.066632 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmgr7\" (UniqueName: \"kubernetes.io/projected/8a5ae03d-0277-4872-b684-ccf00f39afa3-kube-api-access-dmgr7\") pod \"coredns-7db6d8ff4d-974z6\" (UID: \"8a5ae03d-0277-4872-b684-ccf00f39afa3\") " pod="kube-system/coredns-7db6d8ff4d-974z6" Jan 17 12:09:47.066714 kubelet[2790]: I0117 12:09:47.066646 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml66l\" (UniqueName: \"kubernetes.io/projected/63fda226-0d46-4fc7-a365-e22d4880c1d7-kube-api-access-ml66l\") pod 
\"coredns-7db6d8ff4d-8t4h2\" (UID: \"63fda226-0d46-4fc7-a365-e22d4880c1d7\") " pod="kube-system/coredns-7db6d8ff4d-8t4h2" Jan 17 12:09:47.066714 kubelet[2790]: I0117 12:09:47.066656 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc2q2\" (UniqueName: \"kubernetes.io/projected/e52c0d81-2873-48ab-812d-7481f2bdd74f-kube-api-access-rc2q2\") pod \"calico-apiserver-67c6c8b9b4-svk24\" (UID: \"e52c0d81-2873-48ab-812d-7481f2bdd74f\") " pod="calico-apiserver/calico-apiserver-67c6c8b9b4-svk24" Jan 17 12:09:47.066714 kubelet[2790]: I0117 12:09:47.066675 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6c968e21-0471-4da2-80c9-10bfe6dba99c-calico-apiserver-certs\") pod \"calico-apiserver-67c6c8b9b4-th4kr\" (UID: \"6c968e21-0471-4da2-80c9-10bfe6dba99c\") " pod="calico-apiserver/calico-apiserver-67c6c8b9b4-th4kr" Jan 17 12:09:47.067189 kubelet[2790]: I0117 12:09:47.066686 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fff26461-98e7-492f-8649-772ea0be885d-tigera-ca-bundle\") pod \"calico-kube-controllers-786d6d6cf9-nwzjn\" (UID: \"fff26461-98e7-492f-8649-772ea0be885d\") " pod="calico-system/calico-kube-controllers-786d6d6cf9-nwzjn" Jan 17 12:09:47.067189 kubelet[2790]: I0117 12:09:47.066694 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63fda226-0d46-4fc7-a365-e22d4880c1d7-config-volume\") pod \"coredns-7db6d8ff4d-8t4h2\" (UID: \"63fda226-0d46-4fc7-a365-e22d4880c1d7\") " pod="kube-system/coredns-7db6d8ff4d-8t4h2" Jan 17 12:09:47.067189 kubelet[2790]: I0117 12:09:47.066703 2790 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-42z6p\" (UniqueName: \"kubernetes.io/projected/fff26461-98e7-492f-8649-772ea0be885d-kube-api-access-42z6p\") pod \"calico-kube-controllers-786d6d6cf9-nwzjn\" (UID: \"fff26461-98e7-492f-8649-772ea0be885d\") " pod="calico-system/calico-kube-controllers-786d6d6cf9-nwzjn" Jan 17 12:09:47.318103 containerd[1550]: time="2025-01-17T12:09:47.317860190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:09:47.331082 containerd[1550]: time="2025-01-17T12:09:47.331042309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8t4h2,Uid:63fda226-0d46-4fc7-a365-e22d4880c1d7,Namespace:kube-system,Attempt:0,}" Jan 17 12:09:47.345044 containerd[1550]: time="2025-01-17T12:09:47.344926622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c6c8b9b4-th4kr,Uid:6c968e21-0471-4da2-80c9-10bfe6dba99c,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:09:47.379700 containerd[1550]: time="2025-01-17T12:09:47.379668528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c6c8b9b4-svk24,Uid:e52c0d81-2873-48ab-812d-7481f2bdd74f,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:09:47.380090 containerd[1550]: time="2025-01-17T12:09:47.380077431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-974z6,Uid:8a5ae03d-0277-4872-b684-ccf00f39afa3,Namespace:kube-system,Attempt:0,}" Jan 17 12:09:47.380446 containerd[1550]: time="2025-01-17T12:09:47.380370964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-786d6d6cf9-nwzjn,Uid:fff26461-98e7-492f-8649-772ea0be885d,Namespace:calico-system,Attempt:0,}" Jan 17 12:09:47.572610 containerd[1550]: time="2025-01-17T12:09:47.572235189Z" level=error msg="Failed to destroy network for sandbox \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.575297 containerd[1550]: time="2025-01-17T12:09:47.574953718Z" level=error msg="encountered an error cleaning up failed sandbox \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.575297 containerd[1550]: time="2025-01-17T12:09:47.574992079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c6c8b9b4-th4kr,Uid:6c968e21-0471-4da2-80c9-10bfe6dba99c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.575513 kubelet[2790]: E0117 12:09:47.575481 2790 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.576782 kubelet[2790]: E0117 12:09:47.575529 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-67c6c8b9b4-th4kr" Jan 17 12:09:47.576782 kubelet[2790]: E0117 12:09:47.575542 2790 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c6c8b9b4-th4kr" Jan 17 12:09:47.576782 kubelet[2790]: E0117 12:09:47.575576 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67c6c8b9b4-th4kr_calico-apiserver(6c968e21-0471-4da2-80c9-10bfe6dba99c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67c6c8b9b4-th4kr_calico-apiserver(6c968e21-0471-4da2-80c9-10bfe6dba99c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67c6c8b9b4-th4kr" podUID="6c968e21-0471-4da2-80c9-10bfe6dba99c" Jan 17 12:09:47.586400 containerd[1550]: time="2025-01-17T12:09:47.585113017Z" level=error msg="Failed to destroy network for sandbox \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.586400 containerd[1550]: time="2025-01-17T12:09:47.585342138Z" level=error msg="encountered an error cleaning up failed sandbox 
\"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.586400 containerd[1550]: time="2025-01-17T12:09:47.585372802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-786d6d6cf9-nwzjn,Uid:fff26461-98e7-492f-8649-772ea0be885d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.587517 kubelet[2790]: E0117 12:09:47.585509 2790 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.587517 kubelet[2790]: E0117 12:09:47.585543 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-786d6d6cf9-nwzjn" Jan 17 12:09:47.587517 kubelet[2790]: E0117 12:09:47.585556 2790 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-786d6d6cf9-nwzjn" Jan 17 12:09:47.588210 containerd[1550]: time="2025-01-17T12:09:47.587381481Z" level=error msg="Failed to destroy network for sandbox \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.588240 kubelet[2790]: E0117 12:09:47.585593 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-786d6d6cf9-nwzjn_calico-system(fff26461-98e7-492f-8649-772ea0be885d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-786d6d6cf9-nwzjn_calico-system(fff26461-98e7-492f-8649-772ea0be885d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-786d6d6cf9-nwzjn" podUID="fff26461-98e7-492f-8649-772ea0be885d" Jan 17 12:09:47.588569 containerd[1550]: time="2025-01-17T12:09:47.588552861Z" level=error msg="encountered an error cleaning up failed sandbox \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 17 12:09:47.588859 containerd[1550]: time="2025-01-17T12:09:47.588805323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8t4h2,Uid:63fda226-0d46-4fc7-a365-e22d4880c1d7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.588938 kubelet[2790]: E0117 12:09:47.588920 2790 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.589079 kubelet[2790]: E0117 12:09:47.588946 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8t4h2" Jan 17 12:09:47.589108 kubelet[2790]: E0117 12:09:47.589084 2790 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-8t4h2" Jan 17 12:09:47.589132 kubelet[2790]: E0117 12:09:47.589120 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8t4h2_kube-system(63fda226-0d46-4fc7-a365-e22d4880c1d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8t4h2_kube-system(63fda226-0d46-4fc7-a365-e22d4880c1d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8t4h2" podUID="63fda226-0d46-4fc7-a365-e22d4880c1d7" Jan 17 12:09:47.590917 containerd[1550]: time="2025-01-17T12:09:47.590473972Z" level=error msg="Failed to destroy network for sandbox \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.590917 containerd[1550]: time="2025-01-17T12:09:47.590681726Z" level=error msg="encountered an error cleaning up failed sandbox \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.590917 containerd[1550]: time="2025-01-17T12:09:47.590707469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c6c8b9b4-svk24,Uid:e52c0d81-2873-48ab-812d-7481f2bdd74f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.591759 kubelet[2790]: E0117 12:09:47.591234 2790 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.591956 kubelet[2790]: E0117 12:09:47.591272 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c6c8b9b4-svk24" Jan 17 12:09:47.591956 kubelet[2790]: E0117 12:09:47.591876 2790 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c6c8b9b4-svk24" Jan 17 12:09:47.591956 kubelet[2790]: E0117 12:09:47.591903 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67c6c8b9b4-svk24_calico-apiserver(e52c0d81-2873-48ab-812d-7481f2bdd74f)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-67c6c8b9b4-svk24_calico-apiserver(e52c0d81-2873-48ab-812d-7481f2bdd74f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67c6c8b9b4-svk24" podUID="e52c0d81-2873-48ab-812d-7481f2bdd74f" Jan 17 12:09:47.593942 containerd[1550]: time="2025-01-17T12:09:47.593888016Z" level=error msg="Failed to destroy network for sandbox \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.594098 containerd[1550]: time="2025-01-17T12:09:47.594085655Z" level=error msg="encountered an error cleaning up failed sandbox \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.594124 containerd[1550]: time="2025-01-17T12:09:47.594111869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-974z6,Uid:8a5ae03d-0277-4872-b684-ccf00f39afa3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.594390 kubelet[2790]: E0117 12:09:47.594210 2790 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:47.594390 kubelet[2790]: E0117 12:09:47.594233 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-974z6" Jan 17 12:09:47.594390 kubelet[2790]: E0117 12:09:47.594244 2790 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-974z6" Jan 17 12:09:47.594775 kubelet[2790]: E0117 12:09:47.594263 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-974z6_kube-system(8a5ae03d-0277-4872-b684-ccf00f39afa3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-974z6_kube-system(8a5ae03d-0277-4872-b684-ccf00f39afa3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-974z6" podUID="8a5ae03d-0277-4872-b684-ccf00f39afa3" Jan 17 12:09:47.993348 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed-shm.mount: Deactivated successfully. Jan 17 12:09:48.233755 systemd[1]: Created slice kubepods-besteffort-pode85f6860_3f73_4ce2_9930_90bfeb23aed3.slice - libcontainer container kubepods-besteffort-pode85f6860_3f73_4ce2_9930_90bfeb23aed3.slice. Jan 17 12:09:48.241380 containerd[1550]: time="2025-01-17T12:09:48.235541786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w95gg,Uid:e85f6860-3f73-4ce2-9930-90bfeb23aed3,Namespace:calico-system,Attempt:0,}" Jan 17 12:09:48.312506 kubelet[2790]: I0117 12:09:48.312471 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Jan 17 12:09:48.314579 kubelet[2790]: I0117 12:09:48.314550 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Jan 17 12:09:48.376089 containerd[1550]: time="2025-01-17T12:09:48.376060468Z" level=error msg="Failed to destroy network for sandbox \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:48.376560 containerd[1550]: time="2025-01-17T12:09:48.376463726Z" level=error msg="encountered an error cleaning up failed sandbox \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:48.376560 containerd[1550]: time="2025-01-17T12:09:48.376500765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w95gg,Uid:e85f6860-3f73-4ce2-9930-90bfeb23aed3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:48.377740 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76-shm.mount: Deactivated successfully. Jan 17 12:09:48.382530 kubelet[2790]: E0117 12:09:48.382363 2790 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:48.382530 kubelet[2790]: E0117 12:09:48.382398 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w95gg" Jan 17 12:09:48.382530 kubelet[2790]: E0117 12:09:48.382411 2790 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w95gg" Jan 17 12:09:48.383369 kubelet[2790]: E0117 12:09:48.382434 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w95gg_calico-system(e85f6860-3f73-4ce2-9930-90bfeb23aed3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w95gg_calico-system(e85f6860-3f73-4ce2-9930-90bfeb23aed3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w95gg" podUID="e85f6860-3f73-4ce2-9930-90bfeb23aed3" Jan 17 12:09:48.384102 kubelet[2790]: I0117 12:09:48.383923 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Jan 17 12:09:48.385764 kubelet[2790]: I0117 12:09:48.385744 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Jan 17 12:09:48.386494 kubelet[2790]: I0117 12:09:48.386333 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Jan 17 12:09:48.435397 containerd[1550]: time="2025-01-17T12:09:48.434675218Z" level=info msg="StopPodSandbox for \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\"" Jan 17 12:09:48.442036 containerd[1550]: time="2025-01-17T12:09:48.435542610Z" 
level=info msg="StopPodSandbox for \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\"" Jan 17 12:09:48.442036 containerd[1550]: time="2025-01-17T12:09:48.436074635Z" level=info msg="Ensure that sandbox 69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0 in task-service has been cleanup successfully" Jan 17 12:09:48.442036 containerd[1550]: time="2025-01-17T12:09:48.436106921Z" level=info msg="StopPodSandbox for \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\"" Jan 17 12:09:48.442036 containerd[1550]: time="2025-01-17T12:09:48.436195284Z" level=info msg="Ensure that sandbox d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed in task-service has been cleanup successfully" Jan 17 12:09:48.442036 containerd[1550]: time="2025-01-17T12:09:48.440483877Z" level=info msg="StopPodSandbox for \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\"" Jan 17 12:09:48.442036 containerd[1550]: time="2025-01-17T12:09:48.440576719Z" level=info msg="Ensure that sandbox 6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1 in task-service has been cleanup successfully" Jan 17 12:09:48.442036 containerd[1550]: time="2025-01-17T12:09:48.436081297Z" level=info msg="Ensure that sandbox 2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be in task-service has been cleanup successfully" Jan 17 12:09:48.442036 containerd[1550]: time="2025-01-17T12:09:48.441202529Z" level=info msg="StopPodSandbox for \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\"" Jan 17 12:09:48.442036 containerd[1550]: time="2025-01-17T12:09:48.441347121Z" level=info msg="Ensure that sandbox 82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8 in task-service has been cleanup successfully" Jan 17 12:09:48.467466 containerd[1550]: time="2025-01-17T12:09:48.467328540Z" level=error msg="StopPodSandbox for \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\" failed" error="failed 
to destroy network for sandbox \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:48.467635 kubelet[2790]: E0117 12:09:48.467473 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Jan 17 12:09:48.467635 kubelet[2790]: E0117 12:09:48.467515 2790 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1"} Jan 17 12:09:48.467635 kubelet[2790]: E0117 12:09:48.467552 2790 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fff26461-98e7-492f-8649-772ea0be885d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:09:48.467635 kubelet[2790]: E0117 12:09:48.467578 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fff26461-98e7-492f-8649-772ea0be885d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-786d6d6cf9-nwzjn" podUID="fff26461-98e7-492f-8649-772ea0be885d" Jan 17 12:09:48.487546 containerd[1550]: time="2025-01-17T12:09:48.487261584Z" level=error msg="StopPodSandbox for \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\" failed" error="failed to destroy network for sandbox \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:48.487628 kubelet[2790]: E0117 12:09:48.487410 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Jan 17 12:09:48.487628 kubelet[2790]: E0117 12:09:48.487436 2790 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed"} Jan 17 12:09:48.487628 kubelet[2790]: E0117 12:09:48.487457 2790 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63fda226-0d46-4fc7-a365-e22d4880c1d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:09:48.487628 kubelet[2790]: E0117 12:09:48.487470 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63fda226-0d46-4fc7-a365-e22d4880c1d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8t4h2" podUID="63fda226-0d46-4fc7-a365-e22d4880c1d7" Jan 17 12:09:48.489683 containerd[1550]: time="2025-01-17T12:09:48.489652917Z" level=error msg="StopPodSandbox for \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\" failed" error="failed to destroy network for sandbox \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:48.489843 containerd[1550]: time="2025-01-17T12:09:48.489758831Z" level=error msg="StopPodSandbox for \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\" failed" error="failed to destroy network for sandbox \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:48.489872 kubelet[2790]: E0117 12:09:48.489745 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Jan 17 12:09:48.489872 kubelet[2790]: E0117 12:09:48.489764 2790 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0"} Jan 17 12:09:48.489872 kubelet[2790]: E0117 12:09:48.489782 2790 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a5ae03d-0277-4872-b684-ccf00f39afa3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:09:48.489872 kubelet[2790]: E0117 12:09:48.489794 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a5ae03d-0277-4872-b684-ccf00f39afa3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-974z6" podUID="8a5ae03d-0277-4872-b684-ccf00f39afa3" Jan 17 12:09:48.489971 kubelet[2790]: E0117 12:09:48.489827 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Jan 17 12:09:48.489971 kubelet[2790]: E0117 12:09:48.489843 2790 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be"} Jan 17 12:09:48.489971 kubelet[2790]: E0117 12:09:48.489860 2790 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e52c0d81-2873-48ab-812d-7481f2bdd74f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:09:48.489971 kubelet[2790]: E0117 12:09:48.489871 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e52c0d81-2873-48ab-812d-7481f2bdd74f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67c6c8b9b4-svk24" podUID="e52c0d81-2873-48ab-812d-7481f2bdd74f" Jan 17 12:09:48.492611 containerd[1550]: time="2025-01-17T12:09:48.492575629Z" level=error msg="StopPodSandbox for \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\" failed" error="failed to destroy network for sandbox \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:48.492770 kubelet[2790]: E0117 12:09:48.492691 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Jan 17 12:09:48.492770 kubelet[2790]: E0117 12:09:48.492711 2790 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8"} Jan 17 12:09:48.492770 kubelet[2790]: E0117 12:09:48.492726 2790 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6c968e21-0471-4da2-80c9-10bfe6dba99c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:09:48.492770 kubelet[2790]: E0117 12:09:48.492737 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6c968e21-0471-4da2-80c9-10bfe6dba99c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67c6c8b9b4-th4kr" podUID="6c968e21-0471-4da2-80c9-10bfe6dba99c" Jan 17 12:09:49.388568 kubelet[2790]: I0117 12:09:49.388539 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Jan 17 12:09:49.389470 containerd[1550]: time="2025-01-17T12:09:49.389337224Z" level=info msg="StopPodSandbox for \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\"" Jan 17 12:09:49.389722 containerd[1550]: time="2025-01-17T12:09:49.389469698Z" level=info msg="Ensure that sandbox 674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76 in task-service has been cleanup successfully" Jan 17 12:09:49.427666 containerd[1550]: time="2025-01-17T12:09:49.427586894Z" level=error msg="StopPodSandbox for \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\" failed" error="failed to destroy network for sandbox \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:09:49.427868 kubelet[2790]: E0117 12:09:49.427833 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Jan 17 12:09:49.427918 kubelet[2790]: E0117 12:09:49.427876 2790 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76"} Jan 17 12:09:49.427918 kubelet[2790]: E0117 12:09:49.427901 2790 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e85f6860-3f73-4ce2-9930-90bfeb23aed3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:09:49.428005 kubelet[2790]: E0117 12:09:49.427923 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e85f6860-3f73-4ce2-9930-90bfeb23aed3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w95gg" podUID="e85f6860-3f73-4ce2-9930-90bfeb23aed3" Jan 17 12:09:52.831570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1615639649.mount: Deactivated successfully. 
Jan 17 12:09:52.960871 containerd[1550]: time="2025-01-17T12:09:52.960821884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:09:52.977415 containerd[1550]: time="2025-01-17T12:09:52.977387442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:53.000513 containerd[1550]: time="2025-01-17T12:09:53.000486126Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:53.001258 containerd[1550]: time="2025-01-17T12:09:53.001241187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:53.003211 containerd[1550]: time="2025-01-17T12:09:53.003191335Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.683007796s" Jan 17 12:09:53.003242 containerd[1550]: time="2025-01-17T12:09:53.003214677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:09:53.097853 containerd[1550]: time="2025-01-17T12:09:53.097736975Z" level=info msg="CreateContainer within sandbox \"c84ed212416e292ef4cd3933cbd5ecd1faf1f0478495710d8cbd4583c3f52843\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:09:53.162868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3664476781.mount: 
Deactivated successfully. Jan 17 12:09:53.166591 containerd[1550]: time="2025-01-17T12:09:53.166564947Z" level=info msg="CreateContainer within sandbox \"c84ed212416e292ef4cd3933cbd5ecd1faf1f0478495710d8cbd4583c3f52843\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"61fd4c506ecbf36cb38d1b8e6f59b13c9a654d94a949c5f21f4cba23a1a525db\"" Jan 17 12:09:53.196546 containerd[1550]: time="2025-01-17T12:09:53.196144097Z" level=info msg="StartContainer for \"61fd4c506ecbf36cb38d1b8e6f59b13c9a654d94a949c5f21f4cba23a1a525db\"" Jan 17 12:09:53.273378 systemd[1]: Started cri-containerd-61fd4c506ecbf36cb38d1b8e6f59b13c9a654d94a949c5f21f4cba23a1a525db.scope - libcontainer container 61fd4c506ecbf36cb38d1b8e6f59b13c9a654d94a949c5f21f4cba23a1a525db. Jan 17 12:09:53.293077 containerd[1550]: time="2025-01-17T12:09:53.293050693Z" level=info msg="StartContainer for \"61fd4c506ecbf36cb38d1b8e6f59b13c9a654d94a949c5f21f4cba23a1a525db\" returns successfully" Jan 17 12:09:53.470296 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:09:53.472186 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 17 12:09:53.789638 kubelet[2790]: I0117 12:09:53.787962 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-znvlg" podStartSLOduration=1.6885225350000002 podStartE2EDuration="17.768001321s" podCreationTimestamp="2025-01-17 12:09:36 +0000 UTC" firstStartedPulling="2025-01-17 12:09:36.953065354 +0000 UTC m=+23.843980258" lastFinishedPulling="2025-01-17 12:09:53.03254414 +0000 UTC m=+39.923459044" observedRunningTime="2025-01-17 12:09:53.765118978 +0000 UTC m=+40.656033891" watchObservedRunningTime="2025-01-17 12:09:53.768001321 +0000 UTC m=+40.658916231" Jan 17 12:09:55.540458 systemd[1]: run-containerd-runc-k8s.io-61fd4c506ecbf36cb38d1b8e6f59b13c9a654d94a949c5f21f4cba23a1a525db-runc.bGpnxU.mount: Deactivated successfully. 
Jan 17 12:10:00.230372 containerd[1550]: time="2025-01-17T12:10:00.230145668Z" level=info msg="StopPodSandbox for \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\"" Jan 17 12:10:00.489698 containerd[1550]: 2025-01-17 12:10:00.288 [INFO][4158] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Jan 17 12:10:00.489698 containerd[1550]: 2025-01-17 12:10:00.288 [INFO][4158] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" iface="eth0" netns="/var/run/netns/cni-32c69f90-38cb-0b28-81da-ca3e63dee32d" Jan 17 12:10:00.489698 containerd[1550]: 2025-01-17 12:10:00.288 [INFO][4158] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" iface="eth0" netns="/var/run/netns/cni-32c69f90-38cb-0b28-81da-ca3e63dee32d" Jan 17 12:10:00.489698 containerd[1550]: 2025-01-17 12:10:00.290 [INFO][4158] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" iface="eth0" netns="/var/run/netns/cni-32c69f90-38cb-0b28-81da-ca3e63dee32d" Jan 17 12:10:00.489698 containerd[1550]: 2025-01-17 12:10:00.290 [INFO][4158] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Jan 17 12:10:00.489698 containerd[1550]: 2025-01-17 12:10:00.290 [INFO][4158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Jan 17 12:10:00.489698 containerd[1550]: 2025-01-17 12:10:00.473 [INFO][4165] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" HandleID="k8s-pod-network.d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Workload="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 12:10:00.489698 containerd[1550]: 2025-01-17 12:10:00.476 [INFO][4165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:00.489698 containerd[1550]: 2025-01-17 12:10:00.477 [INFO][4165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:00.489698 containerd[1550]: 2025-01-17 12:10:00.487 [WARNING][4165] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" HandleID="k8s-pod-network.d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Workload="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 12:10:00.489698 containerd[1550]: 2025-01-17 12:10:00.487 [INFO][4165] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" HandleID="k8s-pod-network.d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Workload="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 12:10:00.489698 containerd[1550]: 2025-01-17 12:10:00.487 [INFO][4165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:00.489698 containerd[1550]: 2025-01-17 12:10:00.488 [INFO][4158] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Jan 17 12:10:00.491221 systemd[1]: run-netns-cni\x2d32c69f90\x2d38cb\x2d0b28\x2d81da\x2dca3e63dee32d.mount: Deactivated successfully. 
Jan 17 12:10:00.496486 containerd[1550]: time="2025-01-17T12:10:00.496462615Z" level=info msg="TearDown network for sandbox \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\" successfully" Jan 17 12:10:00.496530 containerd[1550]: time="2025-01-17T12:10:00.496486693Z" level=info msg="StopPodSandbox for \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\" returns successfully" Jan 17 12:10:00.496944 containerd[1550]: time="2025-01-17T12:10:00.496928082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8t4h2,Uid:63fda226-0d46-4fc7-a365-e22d4880c1d7,Namespace:kube-system,Attempt:1,}" Jan 17 12:10:00.583565 systemd-networkd[1473]: cali7de8843e776: Link UP Jan 17 12:10:00.583857 systemd-networkd[1473]: cali7de8843e776: Gained carrier Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.519 [INFO][4174] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.527 [INFO][4174] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0 coredns-7db6d8ff4d- kube-system 63fda226-0d46-4fc7-a365-e22d4880c1d7 740 0 2025-01-17 12:09:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-8t4h2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7de8843e776 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8t4h2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8t4h2-" Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.527 [INFO][4174] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8t4h2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.550 [INFO][4184] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" HandleID="k8s-pod-network.3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" Workload="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.556 [INFO][4184] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" HandleID="k8s-pod-network.3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" Workload="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290ed0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-8t4h2", "timestamp":"2025-01-17 12:10:00.550740346 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.556 [INFO][4184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.556 [INFO][4184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.556 [INFO][4184] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.558 [INFO][4184] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" host="localhost" Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.562 [INFO][4184] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.565 [INFO][4184] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.566 [INFO][4184] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.568 [INFO][4184] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.568 [INFO][4184] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" host="localhost" Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.569 [INFO][4184] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.571 [INFO][4184] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" host="localhost" Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.573 [INFO][4184] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" host="localhost" Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.573 [INFO][4184] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" host="localhost" Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.573 [INFO][4184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:00.592115 containerd[1550]: 2025-01-17 12:10:00.573 [INFO][4184] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" HandleID="k8s-pod-network.3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" Workload="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 12:10:00.595946 containerd[1550]: 2025-01-17 12:10:00.575 [INFO][4174] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8t4h2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"63fda226-0d46-4fc7-a365-e22d4880c1d7", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-8t4h2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7de8843e776", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:00.595946 containerd[1550]: 2025-01-17 12:10:00.575 [INFO][4174] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8t4h2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 12:10:00.595946 containerd[1550]: 2025-01-17 12:10:00.575 [INFO][4174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7de8843e776 ContainerID="3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8t4h2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 12:10:00.595946 containerd[1550]: 2025-01-17 12:10:00.581 [INFO][4174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8t4h2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 
12:10:00.595946 containerd[1550]: 2025-01-17 12:10:00.581 [INFO][4174] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8t4h2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"63fda226-0d46-4fc7-a365-e22d4880c1d7", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f", Pod:"coredns-7db6d8ff4d-8t4h2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7de8843e776", MAC:"2a:5f:9a:2c:3a:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:00.595946 containerd[1550]: 2025-01-17 12:10:00.590 [INFO][4174] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8t4h2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 12:10:00.611565 containerd[1550]: time="2025-01-17T12:10:00.611498428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:00.611565 containerd[1550]: time="2025-01-17T12:10:00.611528473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:00.614748 containerd[1550]: time="2025-01-17T12:10:00.611861412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:00.614748 containerd[1550]: time="2025-01-17T12:10:00.611971173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:00.630796 systemd[1]: Started cri-containerd-3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f.scope - libcontainer container 3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f. 
Jan 17 12:10:00.640147 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:10:00.663824 containerd[1550]: time="2025-01-17T12:10:00.663744501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8t4h2,Uid:63fda226-0d46-4fc7-a365-e22d4880c1d7,Namespace:kube-system,Attempt:1,} returns sandbox id \"3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f\"" Jan 17 12:10:00.666901 containerd[1550]: time="2025-01-17T12:10:00.666582179Z" level=info msg="CreateContainer within sandbox \"3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:10:00.682129 containerd[1550]: time="2025-01-17T12:10:00.682100224Z" level=info msg="CreateContainer within sandbox \"3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c87bafa387ea960c077e8fbac958410938e184e23377f189c71a34727db901a9\"" Jan 17 12:10:00.683253 containerd[1550]: time="2025-01-17T12:10:00.682444142Z" level=info msg="StartContainer for \"c87bafa387ea960c077e8fbac958410938e184e23377f189c71a34727db901a9\"" Jan 17 12:10:00.702371 systemd[1]: Started cri-containerd-c87bafa387ea960c077e8fbac958410938e184e23377f189c71a34727db901a9.scope - libcontainer container c87bafa387ea960c077e8fbac958410938e184e23377f189c71a34727db901a9. 
Jan 17 12:10:00.749907 containerd[1550]: time="2025-01-17T12:10:00.749741880Z" level=info msg="StartContainer for \"c87bafa387ea960c077e8fbac958410938e184e23377f189c71a34727db901a9\" returns successfully" Jan 17 12:10:01.231299 containerd[1550]: time="2025-01-17T12:10:01.231052738Z" level=info msg="StopPodSandbox for \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\"" Jan 17 12:10:01.242032 containerd[1550]: time="2025-01-17T12:10:01.241876180Z" level=info msg="StopPodSandbox for \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\"" Jan 17 12:10:01.257113 kubelet[2790]: I0117 12:10:01.257088 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:10:01.328871 containerd[1550]: 2025-01-17 12:10:01.277 [INFO][4317] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Jan 17 12:10:01.328871 containerd[1550]: 2025-01-17 12:10:01.277 [INFO][4317] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" iface="eth0" netns="/var/run/netns/cni-450583b6-d89e-91ce-3816-73f14eb3bae6" Jan 17 12:10:01.328871 containerd[1550]: 2025-01-17 12:10:01.277 [INFO][4317] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" iface="eth0" netns="/var/run/netns/cni-450583b6-d89e-91ce-3816-73f14eb3bae6" Jan 17 12:10:01.328871 containerd[1550]: 2025-01-17 12:10:01.277 [INFO][4317] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" iface="eth0" netns="/var/run/netns/cni-450583b6-d89e-91ce-3816-73f14eb3bae6" Jan 17 12:10:01.328871 containerd[1550]: 2025-01-17 12:10:01.277 [INFO][4317] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Jan 17 12:10:01.328871 containerd[1550]: 2025-01-17 12:10:01.277 [INFO][4317] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Jan 17 12:10:01.328871 containerd[1550]: 2025-01-17 12:10:01.307 [INFO][4340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" HandleID="k8s-pod-network.674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Workload="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:01.328871 containerd[1550]: 2025-01-17 12:10:01.307 [INFO][4340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:01.328871 containerd[1550]: 2025-01-17 12:10:01.307 [INFO][4340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:01.328871 containerd[1550]: 2025-01-17 12:10:01.317 [WARNING][4340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" HandleID="k8s-pod-network.674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Workload="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:01.328871 containerd[1550]: 2025-01-17 12:10:01.317 [INFO][4340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" HandleID="k8s-pod-network.674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Workload="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:01.328871 containerd[1550]: 2025-01-17 12:10:01.326 [INFO][4340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:01.328871 containerd[1550]: 2025-01-17 12:10:01.327 [INFO][4317] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Jan 17 12:10:01.329610 containerd[1550]: time="2025-01-17T12:10:01.329080656Z" level=info msg="TearDown network for sandbox \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\" successfully" Jan 17 12:10:01.329610 containerd[1550]: time="2025-01-17T12:10:01.329097273Z" level=info msg="StopPodSandbox for \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\" returns successfully" Jan 17 12:10:01.329610 containerd[1550]: time="2025-01-17T12:10:01.329546472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w95gg,Uid:e85f6860-3f73-4ce2-9930-90bfeb23aed3,Namespace:calico-system,Attempt:1,}" Jan 17 12:10:01.337776 containerd[1550]: 2025-01-17 12:10:01.295 [INFO][4316] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Jan 17 12:10:01.337776 containerd[1550]: 2025-01-17 12:10:01.296 [INFO][4316] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" iface="eth0" netns="/var/run/netns/cni-466787b0-89e8-8d85-84af-389fb73477dd" Jan 17 12:10:01.337776 containerd[1550]: 2025-01-17 12:10:01.297 [INFO][4316] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" iface="eth0" netns="/var/run/netns/cni-466787b0-89e8-8d85-84af-389fb73477dd" Jan 17 12:10:01.337776 containerd[1550]: 2025-01-17 12:10:01.297 [INFO][4316] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" iface="eth0" netns="/var/run/netns/cni-466787b0-89e8-8d85-84af-389fb73477dd" Jan 17 12:10:01.337776 containerd[1550]: 2025-01-17 12:10:01.297 [INFO][4316] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Jan 17 12:10:01.337776 containerd[1550]: 2025-01-17 12:10:01.297 [INFO][4316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Jan 17 12:10:01.337776 containerd[1550]: 2025-01-17 12:10:01.326 [INFO][4344] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" HandleID="k8s-pod-network.82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:01.337776 containerd[1550]: 2025-01-17 12:10:01.326 [INFO][4344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:01.337776 containerd[1550]: 2025-01-17 12:10:01.327 [INFO][4344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:01.337776 containerd[1550]: 2025-01-17 12:10:01.332 [WARNING][4344] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" HandleID="k8s-pod-network.82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:01.337776 containerd[1550]: 2025-01-17 12:10:01.332 [INFO][4344] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" HandleID="k8s-pod-network.82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:01.337776 containerd[1550]: 2025-01-17 12:10:01.334 [INFO][4344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:01.337776 containerd[1550]: 2025-01-17 12:10:01.336 [INFO][4316] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Jan 17 12:10:01.339159 containerd[1550]: time="2025-01-17T12:10:01.337896652Z" level=info msg="TearDown network for sandbox \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\" successfully" Jan 17 12:10:01.339159 containerd[1550]: time="2025-01-17T12:10:01.337910820Z" level=info msg="StopPodSandbox for \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\" returns successfully" Jan 17 12:10:01.339159 containerd[1550]: time="2025-01-17T12:10:01.338941522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c6c8b9b4-th4kr,Uid:6c968e21-0471-4da2-80c9-10bfe6dba99c,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:10:01.458907 systemd-networkd[1473]: cali1d6abab11ea: Link UP Jan 17 12:10:01.459571 systemd-networkd[1473]: cali1d6abab11ea: Gained carrier Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.368 [INFO][4360] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.400 
[INFO][4360] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0 calico-apiserver-67c6c8b9b4- calico-apiserver 6c968e21-0471-4da2-80c9-10bfe6dba99c 752 0 2025-01-17 12:09:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67c6c8b9b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67c6c8b9b4-th4kr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1d6abab11ea [] []}} ContainerID="28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-th4kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-" Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.400 [INFO][4360] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-th4kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.429 [INFO][4377] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" HandleID="k8s-pod-network.28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.439 [INFO][4377] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" HandleID="k8s-pod-network.28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" 
Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000336180), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67c6c8b9b4-th4kr", "timestamp":"2025-01-17 12:10:01.429533325 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.439 [INFO][4377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.439 [INFO][4377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.439 [INFO][4377] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.440 [INFO][4377] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" host="localhost" Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.442 [INFO][4377] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.445 [INFO][4377] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.446 [INFO][4377] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.447 [INFO][4377] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.447 [INFO][4377] ipam/ipam.go 1180: Attempting to assign 1 addresses from 
block block=192.168.88.128/26 handle="k8s-pod-network.28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" host="localhost" Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.448 [INFO][4377] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755 Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.450 [INFO][4377] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" host="localhost" Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.454 [INFO][4377] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" host="localhost" Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.454 [INFO][4377] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" host="localhost" Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.454 [INFO][4377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:10:01.473466 containerd[1550]: 2025-01-17 12:10:01.454 [INFO][4377] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" HandleID="k8s-pod-network.28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:01.479546 containerd[1550]: 2025-01-17 12:10:01.455 [INFO][4360] cni-plugin/k8s.go 386: Populated endpoint ContainerID="28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-th4kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0", GenerateName:"calico-apiserver-67c6c8b9b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c968e21-0471-4da2-80c9-10bfe6dba99c", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c6c8b9b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67c6c8b9b4-th4kr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d6abab11ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:01.479546 containerd[1550]: 2025-01-17 12:10:01.455 [INFO][4360] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-th4kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:01.479546 containerd[1550]: 2025-01-17 12:10:01.455 [INFO][4360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d6abab11ea ContainerID="28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-th4kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:01.479546 containerd[1550]: 2025-01-17 12:10:01.457 [INFO][4360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-th4kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:01.479546 containerd[1550]: 2025-01-17 12:10:01.457 [INFO][4360] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-th4kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0", GenerateName:"calico-apiserver-67c6c8b9b4-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"6c968e21-0471-4da2-80c9-10bfe6dba99c", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c6c8b9b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755", Pod:"calico-apiserver-67c6c8b9b4-th4kr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d6abab11ea", MAC:"4e:3f:ee:01:98:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:01.479546 containerd[1550]: 2025-01-17 12:10:01.471 [INFO][4360] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-th4kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:01.493364 systemd-networkd[1473]: calia9e7073a450: Link UP Jan 17 12:10:01.493546 systemd-networkd[1473]: calia9e7073a450: Gained carrier Jan 17 12:10:01.497588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2839720568.mount: Deactivated successfully. 
Jan 17 12:10:01.497647 systemd[1]: run-netns-cni\x2d450583b6\x2dd89e\x2d91ce\x2d3816\x2d73f14eb3bae6.mount: Deactivated successfully. Jan 17 12:10:01.497703 systemd[1]: run-netns-cni\x2d466787b0\x2d89e8\x2d8d85\x2d84af\x2d389fb73477dd.mount: Deactivated successfully. Jan 17 12:10:01.499598 containerd[1550]: time="2025-01-17T12:10:01.497545630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:01.499598 containerd[1550]: time="2025-01-17T12:10:01.497583334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:01.502501 containerd[1550]: time="2025-01-17T12:10:01.498090738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:01.502501 containerd[1550]: time="2025-01-17T12:10:01.500770779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.374 [INFO][4355] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.402 [INFO][4355] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--w95gg-eth0 csi-node-driver- calico-system e85f6860-3f73-4ce2-9930-90bfeb23aed3 751 0 2025-01-17 12:09:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-w95gg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia9e7073a450 [] []}} ContainerID="d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" Namespace="calico-system" Pod="csi-node-driver-w95gg" WorkloadEndpoint="localhost-k8s-csi--node--driver--w95gg-" Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.402 [INFO][4355] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" Namespace="calico-system" Pod="csi-node-driver-w95gg" WorkloadEndpoint="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.434 [INFO][4381] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" HandleID="k8s-pod-network.d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" Workload="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.440 [INFO][4381] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" HandleID="k8s-pod-network.d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" Workload="localhost-k8s-csi--node--driver--w95gg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039a980), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-w95gg", "timestamp":"2025-01-17 12:10:01.434646119 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.440 [INFO][4381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.454 [INFO][4381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.454 [INFO][4381] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.456 [INFO][4381] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" host="localhost" Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.461 [INFO][4381] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.470 [INFO][4381] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.472 [INFO][4381] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.474 [INFO][4381] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.474 [INFO][4381] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" host="localhost" Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.475 [INFO][4381] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2 Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.480 [INFO][4381] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" host="localhost" Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.487 [INFO][4381] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" host="localhost" Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.487 [INFO][4381] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" host="localhost" Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.487 [INFO][4381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:10:01.516264 containerd[1550]: 2025-01-17 12:10:01.487 [INFO][4381] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" HandleID="k8s-pod-network.d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" Workload="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:01.517559 containerd[1550]: 2025-01-17 12:10:01.488 [INFO][4355] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" Namespace="calico-system" Pod="csi-node-driver-w95gg" WorkloadEndpoint="localhost-k8s-csi--node--driver--w95gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w95gg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e85f6860-3f73-4ce2-9930-90bfeb23aed3", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-w95gg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9e7073a450", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:01.517559 containerd[1550]: 2025-01-17 12:10:01.489 [INFO][4355] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" Namespace="calico-system" Pod="csi-node-driver-w95gg" WorkloadEndpoint="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:01.517559 containerd[1550]: 2025-01-17 12:10:01.489 [INFO][4355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9e7073a450 ContainerID="d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" Namespace="calico-system" Pod="csi-node-driver-w95gg" WorkloadEndpoint="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:01.517559 containerd[1550]: 2025-01-17 12:10:01.493 [INFO][4355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" Namespace="calico-system" Pod="csi-node-driver-w95gg" WorkloadEndpoint="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:01.517559 containerd[1550]: 2025-01-17 12:10:01.498 [INFO][4355] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" Namespace="calico-system" Pod="csi-node-driver-w95gg" WorkloadEndpoint="localhost-k8s-csi--node--driver--w95gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w95gg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e85f6860-3f73-4ce2-9930-90bfeb23aed3", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2", Pod:"csi-node-driver-w95gg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9e7073a450", MAC:"62:f3:aa:f5:8f:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:01.517559 containerd[1550]: 2025-01-17 12:10:01.510 [INFO][4355] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2" Namespace="calico-system" Pod="csi-node-driver-w95gg" WorkloadEndpoint="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:01.525021 systemd[1]: run-containerd-runc-k8s.io-28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755-runc.z9sWZO.mount: Deactivated successfully. Jan 17 12:10:01.529412 systemd[1]: Started cri-containerd-28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755.scope - libcontainer container 28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755. Jan 17 12:10:01.536978 containerd[1550]: time="2025-01-17T12:10:01.536587928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:01.536978 containerd[1550]: time="2025-01-17T12:10:01.536680000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:01.536978 containerd[1550]: time="2025-01-17T12:10:01.536719522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:01.536978 containerd[1550]: time="2025-01-17T12:10:01.536858459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:01.541369 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:10:01.561420 systemd[1]: Started cri-containerd-d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2.scope - libcontainer container d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2. 
Jan 17 12:10:01.574183 containerd[1550]: time="2025-01-17T12:10:01.574124692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c6c8b9b4-th4kr,Uid:6c968e21-0471-4da2-80c9-10bfe6dba99c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755\"" Jan 17 12:10:01.575810 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:10:01.576756 containerd[1550]: time="2025-01-17T12:10:01.576219179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:10:01.586302 containerd[1550]: time="2025-01-17T12:10:01.586237154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w95gg,Uid:e85f6860-3f73-4ce2-9930-90bfeb23aed3,Namespace:calico-system,Attempt:1,} returns sandbox id \"d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2\"" Jan 17 12:10:01.910192 systemd-networkd[1473]: cali7de8843e776: Gained IPv6LL Jan 17 12:10:02.279302 kernel: bpftool[4518]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:10:02.531943 systemd-networkd[1473]: vxlan.calico: Link UP Jan 17 12:10:02.531948 systemd-networkd[1473]: vxlan.calico: Gained carrier Jan 17 12:10:02.587034 kubelet[2790]: I0117 12:10:02.586909 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8t4h2" podStartSLOduration=34.586894078 podStartE2EDuration="34.586894078s" podCreationTimestamp="2025-01-17 12:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:10:01.557879737 +0000 UTC m=+48.448794650" watchObservedRunningTime="2025-01-17 12:10:02.586894078 +0000 UTC m=+49.477808987" Jan 17 12:10:03.231852 containerd[1550]: time="2025-01-17T12:10:03.231644362Z" level=info msg="StopPodSandbox for 
\"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\"" Jan 17 12:10:03.232918 containerd[1550]: time="2025-01-17T12:10:03.232620924Z" level=info msg="StopPodSandbox for \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\"" Jan 17 12:10:03.233421 containerd[1550]: time="2025-01-17T12:10:03.233398915Z" level=info msg="StopPodSandbox for \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\"" Jan 17 12:10:03.359024 containerd[1550]: 2025-01-17 12:10:03.301 [INFO][4677] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Jan 17 12:10:03.359024 containerd[1550]: 2025-01-17 12:10:03.301 [INFO][4677] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" iface="eth0" netns="/var/run/netns/cni-2b7d960b-721c-fd7d-ae3b-f3ae1627c92d" Jan 17 12:10:03.359024 containerd[1550]: 2025-01-17 12:10:03.302 [INFO][4677] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" iface="eth0" netns="/var/run/netns/cni-2b7d960b-721c-fd7d-ae3b-f3ae1627c92d" Jan 17 12:10:03.359024 containerd[1550]: 2025-01-17 12:10:03.302 [INFO][4677] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" iface="eth0" netns="/var/run/netns/cni-2b7d960b-721c-fd7d-ae3b-f3ae1627c92d" Jan 17 12:10:03.359024 containerd[1550]: 2025-01-17 12:10:03.302 [INFO][4677] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Jan 17 12:10:03.359024 containerd[1550]: 2025-01-17 12:10:03.302 [INFO][4677] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Jan 17 12:10:03.359024 containerd[1550]: 2025-01-17 12:10:03.344 [INFO][4699] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" HandleID="k8s-pod-network.69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Workload="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:03.359024 containerd[1550]: 2025-01-17 12:10:03.346 [INFO][4699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:03.359024 containerd[1550]: 2025-01-17 12:10:03.346 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:03.359024 containerd[1550]: 2025-01-17 12:10:03.353 [WARNING][4699] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" HandleID="k8s-pod-network.69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Workload="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:03.359024 containerd[1550]: 2025-01-17 12:10:03.353 [INFO][4699] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" HandleID="k8s-pod-network.69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Workload="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:03.359024 containerd[1550]: 2025-01-17 12:10:03.354 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:03.359024 containerd[1550]: 2025-01-17 12:10:03.357 [INFO][4677] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Jan 17 12:10:03.360835 systemd[1]: run-netns-cni\x2d2b7d960b\x2d721c\x2dfd7d\x2dae3b\x2df3ae1627c92d.mount: Deactivated successfully. 
Jan 17 12:10:03.363204 containerd[1550]: time="2025-01-17T12:10:03.363179025Z" level=info msg="TearDown network for sandbox \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\" successfully" Jan 17 12:10:03.363204 containerd[1550]: time="2025-01-17T12:10:03.363203700Z" level=info msg="StopPodSandbox for \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\" returns successfully" Jan 17 12:10:03.363824 containerd[1550]: time="2025-01-17T12:10:03.363778716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-974z6,Uid:8a5ae03d-0277-4872-b684-ccf00f39afa3,Namespace:kube-system,Attempt:1,}" Jan 17 12:10:03.377484 systemd-networkd[1473]: cali1d6abab11ea: Gained IPv6LL Jan 17 12:10:03.383028 containerd[1550]: 2025-01-17 12:10:03.312 [INFO][4682] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Jan 17 12:10:03.383028 containerd[1550]: 2025-01-17 12:10:03.312 [INFO][4682] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" iface="eth0" netns="/var/run/netns/cni-1eeb3eab-0f95-b586-6722-110552aab59a" Jan 17 12:10:03.383028 containerd[1550]: 2025-01-17 12:10:03.313 [INFO][4682] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" iface="eth0" netns="/var/run/netns/cni-1eeb3eab-0f95-b586-6722-110552aab59a" Jan 17 12:10:03.383028 containerd[1550]: 2025-01-17 12:10:03.313 [INFO][4682] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" iface="eth0" netns="/var/run/netns/cni-1eeb3eab-0f95-b586-6722-110552aab59a" Jan 17 12:10:03.383028 containerd[1550]: 2025-01-17 12:10:03.313 [INFO][4682] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Jan 17 12:10:03.383028 containerd[1550]: 2025-01-17 12:10:03.313 [INFO][4682] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Jan 17 12:10:03.383028 containerd[1550]: 2025-01-17 12:10:03.355 [INFO][4704] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" HandleID="k8s-pod-network.6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Workload="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 17 12:10:03.383028 containerd[1550]: 2025-01-17 12:10:03.363 [INFO][4704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:03.383028 containerd[1550]: 2025-01-17 12:10:03.363 [INFO][4704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:03.383028 containerd[1550]: 2025-01-17 12:10:03.368 [WARNING][4704] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" HandleID="k8s-pod-network.6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Workload="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 17 12:10:03.383028 containerd[1550]: 2025-01-17 12:10:03.368 [INFO][4704] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" HandleID="k8s-pod-network.6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Workload="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 17 12:10:03.383028 containerd[1550]: 2025-01-17 12:10:03.371 [INFO][4704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:03.383028 containerd[1550]: 2025-01-17 12:10:03.374 [INFO][4682] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Jan 17 12:10:03.383028 containerd[1550]: time="2025-01-17T12:10:03.383016913Z" level=info msg="TearDown network for sandbox \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\" successfully" Jan 17 12:10:03.383629 containerd[1550]: time="2025-01-17T12:10:03.383033273Z" level=info msg="StopPodSandbox for \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\" returns successfully" Jan 17 12:10:03.383442 systemd[1]: run-netns-cni\x2d1eeb3eab\x2d0f95\x2db586\x2d6722\x2d110552aab59a.mount: Deactivated successfully. 
Jan 17 12:10:03.384882 containerd[1550]: time="2025-01-17T12:10:03.384866894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-786d6d6cf9-nwzjn,Uid:fff26461-98e7-492f-8649-772ea0be885d,Namespace:calico-system,Attempt:1,}" Jan 17 12:10:03.402008 containerd[1550]: 2025-01-17 12:10:03.328 [INFO][4681] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Jan 17 12:10:03.402008 containerd[1550]: 2025-01-17 12:10:03.328 [INFO][4681] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" iface="eth0" netns="/var/run/netns/cni-afe9dca1-3d63-f8c0-3c36-221d8c224ecd" Jan 17 12:10:03.402008 containerd[1550]: 2025-01-17 12:10:03.328 [INFO][4681] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" iface="eth0" netns="/var/run/netns/cni-afe9dca1-3d63-f8c0-3c36-221d8c224ecd" Jan 17 12:10:03.402008 containerd[1550]: 2025-01-17 12:10:03.328 [INFO][4681] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" iface="eth0" netns="/var/run/netns/cni-afe9dca1-3d63-f8c0-3c36-221d8c224ecd" Jan 17 12:10:03.402008 containerd[1550]: 2025-01-17 12:10:03.328 [INFO][4681] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Jan 17 12:10:03.402008 containerd[1550]: 2025-01-17 12:10:03.328 [INFO][4681] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Jan 17 12:10:03.402008 containerd[1550]: 2025-01-17 12:10:03.393 [INFO][4709] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" HandleID="k8s-pod-network.2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:03.402008 containerd[1550]: 2025-01-17 12:10:03.393 [INFO][4709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:03.402008 containerd[1550]: 2025-01-17 12:10:03.393 [INFO][4709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:03.402008 containerd[1550]: 2025-01-17 12:10:03.397 [WARNING][4709] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" HandleID="k8s-pod-network.2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:03.402008 containerd[1550]: 2025-01-17 12:10:03.397 [INFO][4709] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" HandleID="k8s-pod-network.2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:03.402008 containerd[1550]: 2025-01-17 12:10:03.398 [INFO][4709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:03.402008 containerd[1550]: 2025-01-17 12:10:03.400 [INFO][4681] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Jan 17 12:10:03.403021 containerd[1550]: time="2025-01-17T12:10:03.402100675Z" level=info msg="TearDown network for sandbox \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\" successfully" Jan 17 12:10:03.403021 containerd[1550]: time="2025-01-17T12:10:03.402115488Z" level=info msg="StopPodSandbox for \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\" returns successfully" Jan 17 12:10:03.410049 containerd[1550]: time="2025-01-17T12:10:03.410005171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c6c8b9b4-svk24,Uid:e52c0d81-2873-48ab-812d-7481f2bdd74f,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:10:03.440610 systemd-networkd[1473]: calia9e7073a450: Gained IPv6LL Jan 17 12:10:03.495143 systemd[1]: run-netns-cni\x2dafe9dca1\x2d3d63\x2df8c0\x2d3c36\x2d221d8c224ecd.mount: Deactivated successfully. 
Jan 17 12:10:03.497205 systemd-networkd[1473]: cali3502c9ac29d: Link UP Jan 17 12:10:03.497397 systemd-networkd[1473]: cali3502c9ac29d: Gained carrier Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.416 [INFO][4716] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--974z6-eth0 coredns-7db6d8ff4d- kube-system 8a5ae03d-0277-4872-b684-ccf00f39afa3 783 0 2025-01-17 12:09:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-974z6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3502c9ac29d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" Namespace="kube-system" Pod="coredns-7db6d8ff4d-974z6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--974z6-" Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.416 [INFO][4716] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" Namespace="kube-system" Pod="coredns-7db6d8ff4d-974z6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.445 [INFO][4743] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" HandleID="k8s-pod-network.3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" Workload="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.456 [INFO][4743] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" 
HandleID="k8s-pod-network.3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" Workload="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051550), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-974z6", "timestamp":"2025-01-17 12:10:03.445387279 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.456 [INFO][4743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.456 [INFO][4743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.456 [INFO][4743] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.457 [INFO][4743] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" host="localhost" Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.459 [INFO][4743] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.463 [INFO][4743] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.465 [INFO][4743] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.468 [INFO][4743] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.468 
[INFO][4743] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" host="localhost" Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.469 [INFO][4743] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081 Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.476 [INFO][4743] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" host="localhost" Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.485 [INFO][4743] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" host="localhost" Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.485 [INFO][4743] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" host="localhost" Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.485 [INFO][4743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:10:03.508975 containerd[1550]: 2025-01-17 12:10:03.485 [INFO][4743] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" HandleID="k8s-pod-network.3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" Workload="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:03.512304 containerd[1550]: 2025-01-17 12:10:03.487 [INFO][4716] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" Namespace="kube-system" Pod="coredns-7db6d8ff4d-974z6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--974z6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8a5ae03d-0277-4872-b684-ccf00f39afa3", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-974z6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3502c9ac29d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:03.512304 containerd[1550]: 2025-01-17 12:10:03.488 [INFO][4716] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" Namespace="kube-system" Pod="coredns-7db6d8ff4d-974z6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:03.512304 containerd[1550]: 2025-01-17 12:10:03.488 [INFO][4716] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3502c9ac29d ContainerID="3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" Namespace="kube-system" Pod="coredns-7db6d8ff4d-974z6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:03.512304 containerd[1550]: 2025-01-17 12:10:03.489 [INFO][4716] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" Namespace="kube-system" Pod="coredns-7db6d8ff4d-974z6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:03.512304 containerd[1550]: 2025-01-17 12:10:03.490 [INFO][4716] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" Namespace="kube-system" Pod="coredns-7db6d8ff4d-974z6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--974z6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8a5ae03d-0277-4872-b684-ccf00f39afa3", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081", Pod:"coredns-7db6d8ff4d-974z6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3502c9ac29d", MAC:"de:de:a2:28:75:ef", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:03.512304 containerd[1550]: 2025-01-17 12:10:03.506 [INFO][4716] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-974z6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:03.541210 containerd[1550]: time="2025-01-17T12:10:03.539919454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:03.541210 containerd[1550]: time="2025-01-17T12:10:03.541165774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:03.541390 containerd[1550]: time="2025-01-17T12:10:03.541299509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:03.541461 containerd[1550]: time="2025-01-17T12:10:03.541373361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:03.545895 systemd-networkd[1473]: cali7941ad7dc40: Link UP Jan 17 12:10:03.547156 systemd-networkd[1473]: cali7941ad7dc40: Gained carrier Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.440 [INFO][4728] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0 calico-kube-controllers-786d6d6cf9- calico-system fff26461-98e7-492f-8649-772ea0be885d 784 0 2025-01-17 12:09:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:786d6d6cf9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-786d6d6cf9-nwzjn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7941ad7dc40 [] []}} ContainerID="f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" Namespace="calico-system" 
Pod="calico-kube-controllers-786d6d6cf9-nwzjn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-" Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.441 [INFO][4728] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" Namespace="calico-system" Pod="calico-kube-controllers-786d6d6cf9-nwzjn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.483 [INFO][4761] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" HandleID="k8s-pod-network.f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" Workload="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.502 [INFO][4761] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" HandleID="k8s-pod-network.f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" Workload="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051cb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-786d6d6cf9-nwzjn", "timestamp":"2025-01-17 12:10:03.483664308 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.503 [INFO][4761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.503 [INFO][4761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.503 [INFO][4761] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.508 [INFO][4761] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" host="localhost" Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.515 [INFO][4761] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.520 [INFO][4761] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.522 [INFO][4761] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.523 [INFO][4761] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.523 [INFO][4761] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" host="localhost" Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.524 [INFO][4761] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969 Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.531 [INFO][4761] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" host="localhost" Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.540 [INFO][4761] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" host="localhost" Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.540 [INFO][4761] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" host="localhost" Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.540 [INFO][4761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:03.565590 containerd[1550]: 2025-01-17 12:10:03.540 [INFO][4761] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" HandleID="k8s-pod-network.f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" Workload="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 17 12:10:03.567627 containerd[1550]: 2025-01-17 12:10:03.542 [INFO][4728] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" Namespace="calico-system" Pod="calico-kube-controllers-786d6d6cf9-nwzjn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0", GenerateName:"calico-kube-controllers-786d6d6cf9-", Namespace:"calico-system", SelfLink:"", UID:"fff26461-98e7-492f-8649-772ea0be885d", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"786d6d6cf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-786d6d6cf9-nwzjn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7941ad7dc40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:03.567627 containerd[1550]: 2025-01-17 12:10:03.542 [INFO][4728] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" Namespace="calico-system" Pod="calico-kube-controllers-786d6d6cf9-nwzjn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 17 12:10:03.567627 containerd[1550]: 2025-01-17 12:10:03.542 [INFO][4728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7941ad7dc40 ContainerID="f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" Namespace="calico-system" Pod="calico-kube-controllers-786d6d6cf9-nwzjn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 17 12:10:03.567627 containerd[1550]: 2025-01-17 12:10:03.547 [INFO][4728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" Namespace="calico-system" Pod="calico-kube-controllers-786d6d6cf9-nwzjn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 
17 12:10:03.567627 containerd[1550]: 2025-01-17 12:10:03.550 [INFO][4728] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" Namespace="calico-system" Pod="calico-kube-controllers-786d6d6cf9-nwzjn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0", GenerateName:"calico-kube-controllers-786d6d6cf9-", Namespace:"calico-system", SelfLink:"", UID:"fff26461-98e7-492f-8649-772ea0be885d", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"786d6d6cf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969", Pod:"calico-kube-controllers-786d6d6cf9-nwzjn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7941ad7dc40", MAC:"0a:43:51:a2:d3:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:03.567627 containerd[1550]: 
2025-01-17 12:10:03.561 [INFO][4728] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969" Namespace="calico-system" Pod="calico-kube-controllers-786d6d6cf9-nwzjn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 17 12:10:03.577425 systemd[1]: Started cri-containerd-3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081.scope - libcontainer container 3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081. Jan 17 12:10:03.594007 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:10:03.605569 containerd[1550]: time="2025-01-17T12:10:03.605234704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:03.605569 containerd[1550]: time="2025-01-17T12:10:03.605306689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:03.605569 containerd[1550]: time="2025-01-17T12:10:03.605324460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:03.605569 containerd[1550]: time="2025-01-17T12:10:03.605402051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:03.613122 systemd-networkd[1473]: cali4929d9c1da3: Link UP Jan 17 12:10:03.613621 systemd-networkd[1473]: cali4929d9c1da3: Gained carrier Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.485 [INFO][4749] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0 calico-apiserver-67c6c8b9b4- calico-apiserver e52c0d81-2873-48ab-812d-7481f2bdd74f 785 0 2025-01-17 12:09:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67c6c8b9b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67c6c8b9b4-svk24 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4929d9c1da3 [] []}} ContainerID="64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-svk24" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-" Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.486 [INFO][4749] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-svk24" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.549 [INFO][4771] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" HandleID="k8s-pod-network.64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:03.628102 containerd[1550]: 
2025-01-17 12:10:03.561 [INFO][4771] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" HandleID="k8s-pod-network.64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000521590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67c6c8b9b4-svk24", "timestamp":"2025-01-17 12:10:03.549044502 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.561 [INFO][4771] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.561 [INFO][4771] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.561 [INFO][4771] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.567 [INFO][4771] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" host="localhost" Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.574 [INFO][4771] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.578 [INFO][4771] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.581 [INFO][4771] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.584 [INFO][4771] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.584 [INFO][4771] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" host="localhost" Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.587 [INFO][4771] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.597 [INFO][4771] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" host="localhost" Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.603 [INFO][4771] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" host="localhost" Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.603 [INFO][4771] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" host="localhost" Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.603 [INFO][4771] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:03.628102 containerd[1550]: 2025-01-17 12:10:03.603 [INFO][4771] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" HandleID="k8s-pod-network.64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:03.629122 containerd[1550]: 2025-01-17 12:10:03.607 [INFO][4749] cni-plugin/k8s.go 386: Populated endpoint ContainerID="64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-svk24" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0", GenerateName:"calico-apiserver-67c6c8b9b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"e52c0d81-2873-48ab-812d-7481f2bdd74f", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c6c8b9b4", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67c6c8b9b4-svk24", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4929d9c1da3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:03.629122 containerd[1550]: 2025-01-17 12:10:03.607 [INFO][4749] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-svk24" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:03.629122 containerd[1550]: 2025-01-17 12:10:03.607 [INFO][4749] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4929d9c1da3 ContainerID="64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-svk24" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:03.629122 containerd[1550]: 2025-01-17 12:10:03.614 [INFO][4749] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-svk24" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:03.629122 containerd[1550]: 2025-01-17 12:10:03.615 [INFO][4749] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-svk24" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0", GenerateName:"calico-apiserver-67c6c8b9b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"e52c0d81-2873-48ab-812d-7481f2bdd74f", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c6c8b9b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b", Pod:"calico-apiserver-67c6c8b9b4-svk24", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4929d9c1da3", MAC:"b6:6a:15:54:a5:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:03.629122 containerd[1550]: 2025-01-17 12:10:03.624 [INFO][4749] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b" Namespace="calico-apiserver" Pod="calico-apiserver-67c6c8b9b4-svk24" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:03.645622 systemd[1]: Started cri-containerd-f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969.scope - libcontainer container f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969. Jan 17 12:10:03.665537 containerd[1550]: time="2025-01-17T12:10:03.665143944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:03.665537 containerd[1550]: time="2025-01-17T12:10:03.665513760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:03.666060 containerd[1550]: time="2025-01-17T12:10:03.665871020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:03.666428 containerd[1550]: time="2025-01-17T12:10:03.666360926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:03.668828 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:10:03.669192 containerd[1550]: time="2025-01-17T12:10:03.668965764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-974z6,Uid:8a5ae03d-0277-4872-b684-ccf00f39afa3,Namespace:kube-system,Attempt:1,} returns sandbox id \"3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081\"" Jan 17 12:10:03.672816 containerd[1550]: time="2025-01-17T12:10:03.672620819Z" level=info msg="CreateContainer within sandbox \"3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:10:03.686441 containerd[1550]: time="2025-01-17T12:10:03.686413292Z" level=info msg="CreateContainer within sandbox \"3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"438836a4148c4dcd5afd5b23f87f4e10b27263cb80dfa00a523ad29e7c1a437b\"" Jan 17 12:10:03.687396 systemd[1]: Started cri-containerd-64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b.scope - libcontainer container 64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b. Jan 17 12:10:03.691593 containerd[1550]: time="2025-01-17T12:10:03.691527920Z" level=info msg="StartContainer for \"438836a4148c4dcd5afd5b23f87f4e10b27263cb80dfa00a523ad29e7c1a437b\"" Jan 17 12:10:03.705700 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:10:03.721403 systemd[1]: Started cri-containerd-438836a4148c4dcd5afd5b23f87f4e10b27263cb80dfa00a523ad29e7c1a437b.scope - libcontainer container 438836a4148c4dcd5afd5b23f87f4e10b27263cb80dfa00a523ad29e7c1a437b. 
Jan 17 12:10:03.727532 containerd[1550]: time="2025-01-17T12:10:03.727343259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-786d6d6cf9-nwzjn,Uid:fff26461-98e7-492f-8649-772ea0be885d,Namespace:calico-system,Attempt:1,} returns sandbox id \"f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969\"" Jan 17 12:10:03.764773 containerd[1550]: time="2025-01-17T12:10:03.764745804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c6c8b9b4-svk24,Uid:e52c0d81-2873-48ab-812d-7481f2bdd74f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b\"" Jan 17 12:10:03.771933 containerd[1550]: time="2025-01-17T12:10:03.771911757Z" level=info msg="StartContainer for \"438836a4148c4dcd5afd5b23f87f4e10b27263cb80dfa00a523ad29e7c1a437b\" returns successfully" Jan 17 12:10:04.336443 systemd-networkd[1473]: vxlan.calico: Gained IPv6LL Jan 17 12:10:04.586816 kubelet[2790]: I0117 12:10:04.586245 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-974z6" podStartSLOduration=36.586227332 podStartE2EDuration="36.586227332s" podCreationTimestamp="2025-01-17 12:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:10:04.573583812 +0000 UTC m=+51.464498728" watchObservedRunningTime="2025-01-17 12:10:04.586227332 +0000 UTC m=+51.477142240" Jan 17 12:10:04.592448 systemd-networkd[1473]: cali7941ad7dc40: Gained IPv6LL Jan 17 12:10:04.848680 systemd-networkd[1473]: cali4929d9c1da3: Gained IPv6LL Jan 17 12:10:05.360690 systemd-networkd[1473]: cali3502c9ac29d: Gained IPv6LL Jan 17 12:10:05.390229 containerd[1550]: time="2025-01-17T12:10:05.389753306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 
12:10:05.393103 containerd[1550]: time="2025-01-17T12:10:05.393071436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:10:05.396337 containerd[1550]: time="2025-01-17T12:10:05.394231610Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:05.396337 containerd[1550]: time="2025-01-17T12:10:05.395667728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:05.396337 containerd[1550]: time="2025-01-17T12:10:05.396264705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.820009298s" Jan 17 12:10:05.396497 containerd[1550]: time="2025-01-17T12:10:05.396419845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:10:05.400995 containerd[1550]: time="2025-01-17T12:10:05.400955772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:10:05.401987 containerd[1550]: time="2025-01-17T12:10:05.401914661Z" level=info msg="CreateContainer within sandbox \"28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:10:05.408768 containerd[1550]: time="2025-01-17T12:10:05.408744888Z" level=info msg="CreateContainer within sandbox 
\"28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ec6606e5841e90be2647e8fecf9901caacdb89c154a64d1e4d6c0fba8a5fd769\"" Jan 17 12:10:05.410556 containerd[1550]: time="2025-01-17T12:10:05.409672243Z" level=info msg="StartContainer for \"ec6606e5841e90be2647e8fecf9901caacdb89c154a64d1e4d6c0fba8a5fd769\"" Jan 17 12:10:05.448425 systemd[1]: Started cri-containerd-ec6606e5841e90be2647e8fecf9901caacdb89c154a64d1e4d6c0fba8a5fd769.scope - libcontainer container ec6606e5841e90be2647e8fecf9901caacdb89c154a64d1e4d6c0fba8a5fd769. Jan 17 12:10:05.493106 containerd[1550]: time="2025-01-17T12:10:05.493074273Z" level=info msg="StartContainer for \"ec6606e5841e90be2647e8fecf9901caacdb89c154a64d1e4d6c0fba8a5fd769\" returns successfully" Jan 17 12:10:05.578917 kubelet[2790]: I0117 12:10:05.578856 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67c6c8b9b4-th4kr" podStartSLOduration=25.753766542 podStartE2EDuration="29.578840703s" podCreationTimestamp="2025-01-17 12:09:36 +0000 UTC" firstStartedPulling="2025-01-17 12:10:01.575569438 +0000 UTC m=+48.466484347" lastFinishedPulling="2025-01-17 12:10:05.400643604 +0000 UTC m=+52.291558508" observedRunningTime="2025-01-17 12:10:05.576188521 +0000 UTC m=+52.467103435" watchObservedRunningTime="2025-01-17 12:10:05.578840703 +0000 UTC m=+52.469755619" Jan 17 12:10:06.570392 kubelet[2790]: I0117 12:10:06.570361 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:10:07.572818 containerd[1550]: time="2025-01-17T12:10:07.572441698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:07.576514 containerd[1550]: time="2025-01-17T12:10:07.576494389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" 
Jan 17 12:10:07.581129 containerd[1550]: time="2025-01-17T12:10:07.581117659Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:07.586315 containerd[1550]: time="2025-01-17T12:10:07.586301017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:07.586583 containerd[1550]: time="2025-01-17T12:10:07.586566140Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.185590022s" Jan 17 12:10:07.586618 containerd[1550]: time="2025-01-17T12:10:07.586584089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:10:07.587186 containerd[1550]: time="2025-01-17T12:10:07.587171139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:10:07.628685 containerd[1550]: time="2025-01-17T12:10:07.628664521Z" level=info msg="CreateContainer within sandbox \"d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:10:07.657746 containerd[1550]: time="2025-01-17T12:10:07.657726039Z" level=info msg="CreateContainer within sandbox \"d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"91d3e064ade6311e0b941bc24e4fa2e32e73d874f7a0870f693a6770859cc51c\"" Jan 17 12:10:07.659025 
containerd[1550]: time="2025-01-17T12:10:07.658851574Z" level=info msg="StartContainer for \"91d3e064ade6311e0b941bc24e4fa2e32e73d874f7a0870f693a6770859cc51c\"" Jan 17 12:10:07.677368 systemd[1]: Started cri-containerd-91d3e064ade6311e0b941bc24e4fa2e32e73d874f7a0870f693a6770859cc51c.scope - libcontainer container 91d3e064ade6311e0b941bc24e4fa2e32e73d874f7a0870f693a6770859cc51c. Jan 17 12:10:07.697700 containerd[1550]: time="2025-01-17T12:10:07.697650978Z" level=info msg="StartContainer for \"91d3e064ade6311e0b941bc24e4fa2e32e73d874f7a0870f693a6770859cc51c\" returns successfully" Jan 17 12:10:10.770540 containerd[1550]: time="2025-01-17T12:10:10.770121956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:10.772407 containerd[1550]: time="2025-01-17T12:10:10.772380403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:10:10.778218 containerd[1550]: time="2025-01-17T12:10:10.778050433Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:10.784559 containerd[1550]: time="2025-01-17T12:10:10.784218118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:10.784632 containerd[1550]: time="2025-01-17T12:10:10.784539688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.197350326s" Jan 17 12:10:10.784660 containerd[1550]: time="2025-01-17T12:10:10.784634461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:10:10.785616 containerd[1550]: time="2025-01-17T12:10:10.785595890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:10:10.811397 containerd[1550]: time="2025-01-17T12:10:10.811252977Z" level=info msg="CreateContainer within sandbox \"f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:10:10.822120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1951970468.mount: Deactivated successfully. Jan 17 12:10:10.824237 containerd[1550]: time="2025-01-17T12:10:10.824117443Z" level=info msg="CreateContainer within sandbox \"f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6f4a7d40f648a73b83c991e28c04901cf5332790ae9e3fd7c0ffa17231c468d4\"" Jan 17 12:10:10.825264 containerd[1550]: time="2025-01-17T12:10:10.825243401Z" level=info msg="StartContainer for \"6f4a7d40f648a73b83c991e28c04901cf5332790ae9e3fd7c0ffa17231c468d4\"" Jan 17 12:10:10.851543 systemd[1]: Started cri-containerd-6f4a7d40f648a73b83c991e28c04901cf5332790ae9e3fd7c0ffa17231c468d4.scope - libcontainer container 6f4a7d40f648a73b83c991e28c04901cf5332790ae9e3fd7c0ffa17231c468d4. 
Jan 17 12:10:10.886243 containerd[1550]: time="2025-01-17T12:10:10.886159368Z" level=info msg="StartContainer for \"6f4a7d40f648a73b83c991e28c04901cf5332790ae9e3fd7c0ffa17231c468d4\" returns successfully" Jan 17 12:10:11.203194 containerd[1550]: time="2025-01-17T12:10:11.202754513Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:11.203194 containerd[1550]: time="2025-01-17T12:10:11.203070645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:10:11.205024 containerd[1550]: time="2025-01-17T12:10:11.205004790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 419.385549ms" Jan 17 12:10:11.205073 containerd[1550]: time="2025-01-17T12:10:11.205025370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:10:11.205714 containerd[1550]: time="2025-01-17T12:10:11.205627329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:10:11.207622 containerd[1550]: time="2025-01-17T12:10:11.207595777Z" level=info msg="CreateContainer within sandbox \"64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:10:11.229311 containerd[1550]: time="2025-01-17T12:10:11.229275410Z" level=info msg="CreateContainer within sandbox \"64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d423d6ba44ef1d678b02c31f6987e639944a856501b2ea73a03c39b6030979cd\"" Jan 17 12:10:11.230245 containerd[1550]: time="2025-01-17T12:10:11.230051745Z" level=info msg="StartContainer for \"d423d6ba44ef1d678b02c31f6987e639944a856501b2ea73a03c39b6030979cd\"" Jan 17 12:10:11.249364 systemd[1]: Started cri-containerd-d423d6ba44ef1d678b02c31f6987e639944a856501b2ea73a03c39b6030979cd.scope - libcontainer container d423d6ba44ef1d678b02c31f6987e639944a856501b2ea73a03c39b6030979cd. Jan 17 12:10:11.277337 containerd[1550]: time="2025-01-17T12:10:11.277255520Z" level=info msg="StartContainer for \"d423d6ba44ef1d678b02c31f6987e639944a856501b2ea73a03c39b6030979cd\" returns successfully" Jan 17 12:10:11.596940 kubelet[2790]: I0117 12:10:11.596878 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-786d6d6cf9-nwzjn" podStartSLOduration=28.540197481 podStartE2EDuration="35.596868094s" podCreationTimestamp="2025-01-17 12:09:36 +0000 UTC" firstStartedPulling="2025-01-17 12:10:03.728786111 +0000 UTC m=+50.619701015" lastFinishedPulling="2025-01-17 12:10:10.785456722 +0000 UTC m=+57.676371628" observedRunningTime="2025-01-17 12:10:11.596269585 +0000 UTC m=+58.487184498" watchObservedRunningTime="2025-01-17 12:10:11.596868094 +0000 UTC m=+58.487783002" Jan 17 12:10:11.603749 kubelet[2790]: I0117 12:10:11.603620 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67c6c8b9b4-svk24" podStartSLOduration=28.209272947 podStartE2EDuration="35.603609142s" podCreationTimestamp="2025-01-17 12:09:36 +0000 UTC" firstStartedPulling="2025-01-17 12:10:03.811152753 +0000 UTC m=+50.702067657" lastFinishedPulling="2025-01-17 12:10:11.205488948 +0000 UTC m=+58.096403852" observedRunningTime="2025-01-17 12:10:11.602626292 +0000 UTC m=+58.493541205" watchObservedRunningTime="2025-01-17 12:10:11.603609142 +0000 UTC 
m=+58.494524056" Jan 17 12:10:12.591334 kubelet[2790]: I0117 12:10:12.590953 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:10:13.305641 containerd[1550]: time="2025-01-17T12:10:13.305487193Z" level=info msg="StopPodSandbox for \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\"" Jan 17 12:10:13.434208 containerd[1550]: 2025-01-17 12:10:13.400 [WARNING][5201] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0", GenerateName:"calico-apiserver-67c6c8b9b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"e52c0d81-2873-48ab-812d-7481f2bdd74f", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c6c8b9b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b", Pod:"calico-apiserver-67c6c8b9b4-svk24", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4929d9c1da3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:13.434208 containerd[1550]: 2025-01-17 12:10:13.402 [INFO][5201] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Jan 17 12:10:13.434208 containerd[1550]: 2025-01-17 12:10:13.402 [INFO][5201] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" iface="eth0" netns="" Jan 17 12:10:13.434208 containerd[1550]: 2025-01-17 12:10:13.402 [INFO][5201] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Jan 17 12:10:13.434208 containerd[1550]: 2025-01-17 12:10:13.402 [INFO][5201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Jan 17 12:10:13.434208 containerd[1550]: 2025-01-17 12:10:13.421 [INFO][5207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" HandleID="k8s-pod-network.2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:13.434208 containerd[1550]: 2025-01-17 12:10:13.421 [INFO][5207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:13.434208 containerd[1550]: 2025-01-17 12:10:13.421 [INFO][5207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:13.434208 containerd[1550]: 2025-01-17 12:10:13.430 [WARNING][5207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" HandleID="k8s-pod-network.2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:13.434208 containerd[1550]: 2025-01-17 12:10:13.430 [INFO][5207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" HandleID="k8s-pod-network.2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:13.434208 containerd[1550]: 2025-01-17 12:10:13.431 [INFO][5207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:13.434208 containerd[1550]: 2025-01-17 12:10:13.432 [INFO][5201] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Jan 17 12:10:13.434208 containerd[1550]: time="2025-01-17T12:10:13.434202331Z" level=info msg="TearDown network for sandbox \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\" successfully" Jan 17 12:10:13.436218 containerd[1550]: time="2025-01-17T12:10:13.434219366Z" level=info msg="StopPodSandbox for \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\" returns successfully" Jan 17 12:10:13.471137 containerd[1550]: time="2025-01-17T12:10:13.471106935Z" level=info msg="RemovePodSandbox for \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\"" Jan 17 12:10:13.473498 containerd[1550]: time="2025-01-17T12:10:13.473453162Z" level=info msg="Forcibly stopping sandbox \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\"" Jan 17 12:10:13.525020 containerd[1550]: 2025-01-17 12:10:13.499 [WARNING][5225] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0", GenerateName:"calico-apiserver-67c6c8b9b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"e52c0d81-2873-48ab-812d-7481f2bdd74f", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c6c8b9b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64b59e37a1afdefeb9574fbd2afdf9a0a680c170a4fb9d4ebf87c68f9c28195b", Pod:"calico-apiserver-67c6c8b9b4-svk24", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4929d9c1da3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:13.525020 containerd[1550]: 2025-01-17 12:10:13.500 [INFO][5225] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Jan 17 12:10:13.525020 containerd[1550]: 2025-01-17 12:10:13.500 [INFO][5225] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" iface="eth0" netns="" Jan 17 12:10:13.525020 containerd[1550]: 2025-01-17 12:10:13.500 [INFO][5225] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Jan 17 12:10:13.525020 containerd[1550]: 2025-01-17 12:10:13.500 [INFO][5225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Jan 17 12:10:13.525020 containerd[1550]: 2025-01-17 12:10:13.516 [INFO][5231] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" HandleID="k8s-pod-network.2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:13.525020 containerd[1550]: 2025-01-17 12:10:13.516 [INFO][5231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:13.525020 containerd[1550]: 2025-01-17 12:10:13.516 [INFO][5231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:13.525020 containerd[1550]: 2025-01-17 12:10:13.521 [WARNING][5231] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" HandleID="k8s-pod-network.2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:13.525020 containerd[1550]: 2025-01-17 12:10:13.521 [INFO][5231] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" HandleID="k8s-pod-network.2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--svk24-eth0" Jan 17 12:10:13.525020 containerd[1550]: 2025-01-17 12:10:13.522 [INFO][5231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:13.525020 containerd[1550]: 2025-01-17 12:10:13.523 [INFO][5225] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be" Jan 17 12:10:13.525020 containerd[1550]: time="2025-01-17T12:10:13.524995546Z" level=info msg="TearDown network for sandbox \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\" successfully" Jan 17 12:10:13.530223 containerd[1550]: time="2025-01-17T12:10:13.530077841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:10:13.535555 containerd[1550]: time="2025-01-17T12:10:13.535462567Z" level=info msg="RemovePodSandbox \"2d55f46dd25e61d02ce4960a4973d81bc560befc626f57952cc726fa35ba18be\" returns successfully" Jan 17 12:10:13.541749 containerd[1550]: time="2025-01-17T12:10:13.541727791Z" level=info msg="StopPodSandbox for \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\"" Jan 17 12:10:13.586209 containerd[1550]: 2025-01-17 12:10:13.566 [WARNING][5250] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0", GenerateName:"calico-apiserver-67c6c8b9b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c968e21-0471-4da2-80c9-10bfe6dba99c", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c6c8b9b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755", Pod:"calico-apiserver-67c6c8b9b4-th4kr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d6abab11ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:13.586209 containerd[1550]: 2025-01-17 12:10:13.566 [INFO][5250] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Jan 17 12:10:13.586209 containerd[1550]: 2025-01-17 12:10:13.566 [INFO][5250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" iface="eth0" netns="" Jan 17 12:10:13.586209 containerd[1550]: 2025-01-17 12:10:13.566 [INFO][5250] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Jan 17 12:10:13.586209 containerd[1550]: 2025-01-17 12:10:13.566 [INFO][5250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Jan 17 12:10:13.586209 containerd[1550]: 2025-01-17 12:10:13.580 [INFO][5256] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" HandleID="k8s-pod-network.82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:13.586209 containerd[1550]: 2025-01-17 12:10:13.580 [INFO][5256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:13.586209 containerd[1550]: 2025-01-17 12:10:13.580 [INFO][5256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:13.586209 containerd[1550]: 2025-01-17 12:10:13.583 [WARNING][5256] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" HandleID="k8s-pod-network.82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:13.586209 containerd[1550]: 2025-01-17 12:10:13.583 [INFO][5256] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" HandleID="k8s-pod-network.82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:13.586209 containerd[1550]: 2025-01-17 12:10:13.584 [INFO][5256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:13.586209 containerd[1550]: 2025-01-17 12:10:13.585 [INFO][5250] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Jan 17 12:10:13.586209 containerd[1550]: time="2025-01-17T12:10:13.586195940Z" level=info msg="TearDown network for sandbox \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\" successfully" Jan 17 12:10:13.586209 containerd[1550]: time="2025-01-17T12:10:13.586210430Z" level=info msg="StopPodSandbox for \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\" returns successfully" Jan 17 12:10:13.588014 containerd[1550]: time="2025-01-17T12:10:13.586886059Z" level=info msg="RemovePodSandbox for \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\"" Jan 17 12:10:13.588014 containerd[1550]: time="2025-01-17T12:10:13.586901434Z" level=info msg="Forcibly stopping sandbox \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\"" Jan 17 12:10:13.632087 containerd[1550]: 2025-01-17 12:10:13.610 [WARNING][5274] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0", GenerateName:"calico-apiserver-67c6c8b9b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c968e21-0471-4da2-80c9-10bfe6dba99c", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c6c8b9b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28916253efe1d907dfe16ce9d5be7397668935ce62ef1e6e5a4c520e876ea755", Pod:"calico-apiserver-67c6c8b9b4-th4kr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d6abab11ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:13.632087 containerd[1550]: 2025-01-17 12:10:13.610 [INFO][5274] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Jan 17 12:10:13.632087 containerd[1550]: 2025-01-17 12:10:13.610 [INFO][5274] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" iface="eth0" netns="" Jan 17 12:10:13.632087 containerd[1550]: 2025-01-17 12:10:13.610 [INFO][5274] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Jan 17 12:10:13.632087 containerd[1550]: 2025-01-17 12:10:13.610 [INFO][5274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Jan 17 12:10:13.632087 containerd[1550]: 2025-01-17 12:10:13.624 [INFO][5280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" HandleID="k8s-pod-network.82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:13.632087 containerd[1550]: 2025-01-17 12:10:13.625 [INFO][5280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:13.632087 containerd[1550]: 2025-01-17 12:10:13.625 [INFO][5280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:13.632087 containerd[1550]: 2025-01-17 12:10:13.629 [WARNING][5280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" HandleID="k8s-pod-network.82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:13.632087 containerd[1550]: 2025-01-17 12:10:13.629 [INFO][5280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" HandleID="k8s-pod-network.82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Workload="localhost-k8s-calico--apiserver--67c6c8b9b4--th4kr-eth0" Jan 17 12:10:13.632087 containerd[1550]: 2025-01-17 12:10:13.630 [INFO][5280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:13.632087 containerd[1550]: 2025-01-17 12:10:13.631 [INFO][5274] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8" Jan 17 12:10:13.633517 containerd[1550]: time="2025-01-17T12:10:13.632344242Z" level=info msg="TearDown network for sandbox \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\" successfully" Jan 17 12:10:13.633517 containerd[1550]: time="2025-01-17T12:10:13.633510155Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:10:13.633577 containerd[1550]: time="2025-01-17T12:10:13.633538185Z" level=info msg="RemovePodSandbox \"82c27be1be8dc84056021114ae62450770141b9d2c8c73d61057aeae5e3c7bb8\" returns successfully" Jan 17 12:10:13.633894 containerd[1550]: time="2025-01-17T12:10:13.633883900Z" level=info msg="StopPodSandbox for \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\"" Jan 17 12:10:13.699200 containerd[1550]: 2025-01-17 12:10:13.661 [WARNING][5299] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"63fda226-0d46-4fc7-a365-e22d4880c1d7", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f", Pod:"coredns-7db6d8ff4d-8t4h2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7de8843e776", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:13.699200 containerd[1550]: 2025-01-17 12:10:13.661 [INFO][5299] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Jan 17 12:10:13.699200 containerd[1550]: 2025-01-17 12:10:13.661 [INFO][5299] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" iface="eth0" netns="" Jan 17 12:10:13.699200 containerd[1550]: 2025-01-17 12:10:13.661 [INFO][5299] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Jan 17 12:10:13.699200 containerd[1550]: 2025-01-17 12:10:13.661 [INFO][5299] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Jan 17 12:10:13.699200 containerd[1550]: 2025-01-17 12:10:13.686 [INFO][5324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" HandleID="k8s-pod-network.d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Workload="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 12:10:13.699200 containerd[1550]: 2025-01-17 12:10:13.686 [INFO][5324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:10:13.699200 containerd[1550]: 2025-01-17 12:10:13.686 [INFO][5324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:13.699200 containerd[1550]: 2025-01-17 12:10:13.695 [WARNING][5324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" HandleID="k8s-pod-network.d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Workload="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 12:10:13.699200 containerd[1550]: 2025-01-17 12:10:13.695 [INFO][5324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" HandleID="k8s-pod-network.d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Workload="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 12:10:13.699200 containerd[1550]: 2025-01-17 12:10:13.696 [INFO][5324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:13.699200 containerd[1550]: 2025-01-17 12:10:13.697 [INFO][5299] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Jan 17 12:10:13.699200 containerd[1550]: time="2025-01-17T12:10:13.699096368Z" level=info msg="TearDown network for sandbox \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\" successfully" Jan 17 12:10:13.699200 containerd[1550]: time="2025-01-17T12:10:13.699115478Z" level=info msg="StopPodSandbox for \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\" returns successfully" Jan 17 12:10:13.701224 containerd[1550]: time="2025-01-17T12:10:13.699935012Z" level=info msg="RemovePodSandbox for \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\"" Jan 17 12:10:13.701224 containerd[1550]: time="2025-01-17T12:10:13.699952086Z" level=info msg="Forcibly stopping sandbox \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\"" Jan 17 12:10:13.771219 containerd[1550]: 2025-01-17 12:10:13.741 [WARNING][5350] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"63fda226-0d46-4fc7-a365-e22d4880c1d7", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ea6ca6723f71cd4e03b64e826773289a495878171c462ea45be96fadf63972f", Pod:"coredns-7db6d8ff4d-8t4h2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7de8843e776", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:13.771219 containerd[1550]: 2025-01-17 12:10:13.742 [INFO][5350] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Jan 17 12:10:13.771219 containerd[1550]: 2025-01-17 12:10:13.742 [INFO][5350] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" iface="eth0" netns="" Jan 17 12:10:13.771219 containerd[1550]: 2025-01-17 12:10:13.742 [INFO][5350] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Jan 17 12:10:13.771219 containerd[1550]: 2025-01-17 12:10:13.742 [INFO][5350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Jan 17 12:10:13.771219 containerd[1550]: 2025-01-17 12:10:13.762 [INFO][5356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" HandleID="k8s-pod-network.d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Workload="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 12:10:13.771219 containerd[1550]: 2025-01-17 12:10:13.762 [INFO][5356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:13.771219 containerd[1550]: 2025-01-17 12:10:13.762 [INFO][5356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:13.771219 containerd[1550]: 2025-01-17 12:10:13.767 [WARNING][5356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" HandleID="k8s-pod-network.d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Workload="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 12:10:13.771219 containerd[1550]: 2025-01-17 12:10:13.767 [INFO][5356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" HandleID="k8s-pod-network.d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Workload="localhost-k8s-coredns--7db6d8ff4d--8t4h2-eth0" Jan 17 12:10:13.771219 containerd[1550]: 2025-01-17 12:10:13.769 [INFO][5356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:13.771219 containerd[1550]: 2025-01-17 12:10:13.770 [INFO][5350] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed" Jan 17 12:10:13.771219 containerd[1550]: time="2025-01-17T12:10:13.771211507Z" level=info msg="TearDown network for sandbox \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\" successfully" Jan 17 12:10:13.772622 containerd[1550]: time="2025-01-17T12:10:13.772602488Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:10:13.772684 containerd[1550]: time="2025-01-17T12:10:13.772639630Z" level=info msg="RemovePodSandbox \"d729c1b827573df4b852f8bbd98d47ccb80e272688b326b16f17420852db43ed\" returns successfully" Jan 17 12:10:13.772949 containerd[1550]: time="2025-01-17T12:10:13.772925158Z" level=info msg="StopPodSandbox for \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\"" Jan 17 12:10:13.818874 containerd[1550]: 2025-01-17 12:10:13.795 [WARNING][5375] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--974z6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8a5ae03d-0277-4872-b684-ccf00f39afa3", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081", Pod:"coredns-7db6d8ff4d-974z6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3502c9ac29d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:13.818874 containerd[1550]: 2025-01-17 12:10:13.795 [INFO][5375] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Jan 17 12:10:13.818874 containerd[1550]: 2025-01-17 12:10:13.795 [INFO][5375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" iface="eth0" netns="" Jan 17 12:10:13.818874 containerd[1550]: 2025-01-17 12:10:13.795 [INFO][5375] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Jan 17 12:10:13.818874 containerd[1550]: 2025-01-17 12:10:13.795 [INFO][5375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Jan 17 12:10:13.818874 containerd[1550]: 2025-01-17 12:10:13.810 [INFO][5381] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" HandleID="k8s-pod-network.69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Workload="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:13.818874 containerd[1550]: 2025-01-17 12:10:13.810 [INFO][5381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:10:13.818874 containerd[1550]: 2025-01-17 12:10:13.810 [INFO][5381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:13.818874 containerd[1550]: 2025-01-17 12:10:13.815 [WARNING][5381] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" HandleID="k8s-pod-network.69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Workload="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:13.818874 containerd[1550]: 2025-01-17 12:10:13.815 [INFO][5381] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" HandleID="k8s-pod-network.69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Workload="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:13.818874 containerd[1550]: 2025-01-17 12:10:13.816 [INFO][5381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:13.818874 containerd[1550]: 2025-01-17 12:10:13.817 [INFO][5375] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Jan 17 12:10:13.819542 containerd[1550]: time="2025-01-17T12:10:13.818913331Z" level=info msg="TearDown network for sandbox \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\" successfully" Jan 17 12:10:13.819542 containerd[1550]: time="2025-01-17T12:10:13.818930395Z" level=info msg="StopPodSandbox for \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\" returns successfully" Jan 17 12:10:13.819876 containerd[1550]: time="2025-01-17T12:10:13.819630172Z" level=info msg="RemovePodSandbox for \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\"" Jan 17 12:10:13.819876 containerd[1550]: time="2025-01-17T12:10:13.819650852Z" level=info msg="Forcibly stopping sandbox \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\"" Jan 17 12:10:13.867898 containerd[1550]: 2025-01-17 12:10:13.846 [WARNING][5399] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--974z6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8a5ae03d-0277-4872-b684-ccf00f39afa3", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3373d123cdd6ce06ee652a1416b0bcae1536731a04bcf6324facd4ba33274081", Pod:"coredns-7db6d8ff4d-974z6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3502c9ac29d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:13.867898 containerd[1550]: 2025-01-17 12:10:13.846 [INFO][5399] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Jan 17 12:10:13.867898 containerd[1550]: 2025-01-17 12:10:13.846 [INFO][5399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" iface="eth0" netns="" Jan 17 12:10:13.867898 containerd[1550]: 2025-01-17 12:10:13.846 [INFO][5399] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Jan 17 12:10:13.867898 containerd[1550]: 2025-01-17 12:10:13.846 [INFO][5399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Jan 17 12:10:13.867898 containerd[1550]: 2025-01-17 12:10:13.861 [INFO][5405] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" HandleID="k8s-pod-network.69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Workload="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:13.867898 containerd[1550]: 2025-01-17 12:10:13.861 [INFO][5405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:13.867898 containerd[1550]: 2025-01-17 12:10:13.861 [INFO][5405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:13.867898 containerd[1550]: 2025-01-17 12:10:13.864 [WARNING][5405] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" HandleID="k8s-pod-network.69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Workload="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:13.867898 containerd[1550]: 2025-01-17 12:10:13.865 [INFO][5405] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" HandleID="k8s-pod-network.69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Workload="localhost-k8s-coredns--7db6d8ff4d--974z6-eth0" Jan 17 12:10:13.867898 containerd[1550]: 2025-01-17 12:10:13.865 [INFO][5405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:13.867898 containerd[1550]: 2025-01-17 12:10:13.866 [INFO][5399] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0" Jan 17 12:10:13.867898 containerd[1550]: time="2025-01-17T12:10:13.867875200Z" level=info msg="TearDown network for sandbox \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\" successfully" Jan 17 12:10:13.869522 containerd[1550]: time="2025-01-17T12:10:13.869452595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:10:13.869777 containerd[1550]: time="2025-01-17T12:10:13.869535742Z" level=info msg="RemovePodSandbox \"69c154818fd92f2d94ce40091b59eed3f752bbdb8e7f04d90fdb6585482c7ea0\" returns successfully" Jan 17 12:10:13.870162 containerd[1550]: time="2025-01-17T12:10:13.870034714Z" level=info msg="StopPodSandbox for \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\"" Jan 17 12:10:13.916721 containerd[1550]: 2025-01-17 12:10:13.893 [WARNING][5424] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0", GenerateName:"calico-kube-controllers-786d6d6cf9-", Namespace:"calico-system", SelfLink:"", UID:"fff26461-98e7-492f-8649-772ea0be885d", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"786d6d6cf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969", Pod:"calico-kube-controllers-786d6d6cf9-nwzjn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7941ad7dc40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:13.916721 containerd[1550]: 2025-01-17 12:10:13.894 [INFO][5424] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Jan 17 12:10:13.916721 containerd[1550]: 2025-01-17 12:10:13.894 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" iface="eth0" netns="" Jan 17 12:10:13.916721 containerd[1550]: 2025-01-17 12:10:13.894 [INFO][5424] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Jan 17 12:10:13.916721 containerd[1550]: 2025-01-17 12:10:13.894 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Jan 17 12:10:13.916721 containerd[1550]: 2025-01-17 12:10:13.910 [INFO][5431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" HandleID="k8s-pod-network.6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Workload="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 17 12:10:13.916721 containerd[1550]: 2025-01-17 12:10:13.910 [INFO][5431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:13.916721 containerd[1550]: 2025-01-17 12:10:13.910 [INFO][5431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:13.916721 containerd[1550]: 2025-01-17 12:10:13.914 [WARNING][5431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" HandleID="k8s-pod-network.6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Workload="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 17 12:10:13.916721 containerd[1550]: 2025-01-17 12:10:13.914 [INFO][5431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" HandleID="k8s-pod-network.6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Workload="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 17 12:10:13.916721 containerd[1550]: 2025-01-17 12:10:13.914 [INFO][5431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:13.916721 containerd[1550]: 2025-01-17 12:10:13.915 [INFO][5424] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Jan 17 12:10:13.916721 containerd[1550]: time="2025-01-17T12:10:13.916691706Z" level=info msg="TearDown network for sandbox \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\" successfully" Jan 17 12:10:13.916721 containerd[1550]: time="2025-01-17T12:10:13.916705640Z" level=info msg="StopPodSandbox for \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\" returns successfully" Jan 17 12:10:13.918061 containerd[1550]: time="2025-01-17T12:10:13.917248790Z" level=info msg="RemovePodSandbox for \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\"" Jan 17 12:10:13.918061 containerd[1550]: time="2025-01-17T12:10:13.917265886Z" level=info msg="Forcibly stopping sandbox \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\"" Jan 17 12:10:13.960554 containerd[1550]: 2025-01-17 12:10:13.939 [WARNING][5450] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0", GenerateName:"calico-kube-controllers-786d6d6cf9-", Namespace:"calico-system", SelfLink:"", UID:"fff26461-98e7-492f-8649-772ea0be885d", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"786d6d6cf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f9b31ef4271f89e85722ea23d99de68ae294d8d018f585e4e9c3e417d1ba4969", Pod:"calico-kube-controllers-786d6d6cf9-nwzjn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7941ad7dc40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:13.960554 containerd[1550]: 2025-01-17 12:10:13.939 [INFO][5450] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Jan 17 12:10:13.960554 containerd[1550]: 2025-01-17 12:10:13.939 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" iface="eth0" netns="" Jan 17 12:10:13.960554 containerd[1550]: 2025-01-17 12:10:13.939 [INFO][5450] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Jan 17 12:10:13.960554 containerd[1550]: 2025-01-17 12:10:13.939 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Jan 17 12:10:13.960554 containerd[1550]: 2025-01-17 12:10:13.954 [INFO][5456] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" HandleID="k8s-pod-network.6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Workload="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 17 12:10:13.960554 containerd[1550]: 2025-01-17 12:10:13.954 [INFO][5456] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:13.960554 containerd[1550]: 2025-01-17 12:10:13.954 [INFO][5456] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:13.960554 containerd[1550]: 2025-01-17 12:10:13.957 [WARNING][5456] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" HandleID="k8s-pod-network.6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Workload="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 17 12:10:13.960554 containerd[1550]: 2025-01-17 12:10:13.957 [INFO][5456] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" HandleID="k8s-pod-network.6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Workload="localhost-k8s-calico--kube--controllers--786d6d6cf9--nwzjn-eth0" Jan 17 12:10:13.960554 containerd[1550]: 2025-01-17 12:10:13.958 [INFO][5456] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:13.960554 containerd[1550]: 2025-01-17 12:10:13.959 [INFO][5450] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1" Jan 17 12:10:13.960554 containerd[1550]: time="2025-01-17T12:10:13.960491909Z" level=info msg="TearDown network for sandbox \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\" successfully" Jan 17 12:10:13.961868 containerd[1550]: time="2025-01-17T12:10:13.961852764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:10:13.961909 containerd[1550]: time="2025-01-17T12:10:13.961888945Z" level=info msg="RemovePodSandbox \"6d38c7f33ae93d13448c189d53c0a3d2504adb9538ecfcb153295f6165e426f1\" returns successfully" Jan 17 12:10:13.962397 containerd[1550]: time="2025-01-17T12:10:13.962217436Z" level=info msg="StopPodSandbox for \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\"" Jan 17 12:10:14.001972 containerd[1550]: 2025-01-17 12:10:13.982 [WARNING][5474] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w95gg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e85f6860-3f73-4ce2-9930-90bfeb23aed3", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2", Pod:"csi-node-driver-w95gg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9e7073a450", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:14.001972 containerd[1550]: 2025-01-17 12:10:13.983 [INFO][5474] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Jan 17 12:10:14.001972 containerd[1550]: 2025-01-17 12:10:13.983 [INFO][5474] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" iface="eth0" netns="" Jan 17 12:10:14.001972 containerd[1550]: 2025-01-17 12:10:13.983 [INFO][5474] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Jan 17 12:10:14.001972 containerd[1550]: 2025-01-17 12:10:13.983 [INFO][5474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Jan 17 12:10:14.001972 containerd[1550]: 2025-01-17 12:10:13.995 [INFO][5480] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" HandleID="k8s-pod-network.674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Workload="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:14.001972 containerd[1550]: 2025-01-17 12:10:13.995 [INFO][5480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:14.001972 containerd[1550]: 2025-01-17 12:10:13.996 [INFO][5480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:14.001972 containerd[1550]: 2025-01-17 12:10:13.999 [WARNING][5480] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" HandleID="k8s-pod-network.674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Workload="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:14.001972 containerd[1550]: 2025-01-17 12:10:13.999 [INFO][5480] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" HandleID="k8s-pod-network.674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Workload="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:14.001972 containerd[1550]: 2025-01-17 12:10:14.000 [INFO][5480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:14.001972 containerd[1550]: 2025-01-17 12:10:14.001 [INFO][5474] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Jan 17 12:10:14.005994 containerd[1550]: time="2025-01-17T12:10:14.002294565Z" level=info msg="TearDown network for sandbox \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\" successfully" Jan 17 12:10:14.005994 containerd[1550]: time="2025-01-17T12:10:14.002310586Z" level=info msg="StopPodSandbox for \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\" returns successfully" Jan 17 12:10:14.005994 containerd[1550]: time="2025-01-17T12:10:14.002639964Z" level=info msg="RemovePodSandbox for \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\"" Jan 17 12:10:14.005994 containerd[1550]: time="2025-01-17T12:10:14.002655652Z" level=info msg="Forcibly stopping sandbox \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\"" Jan 17 12:10:14.048465 containerd[1550]: 2025-01-17 12:10:14.027 [WARNING][5498] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w95gg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e85f6860-3f73-4ce2-9930-90bfeb23aed3", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2", Pod:"csi-node-driver-w95gg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9e7073a450", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:10:14.048465 containerd[1550]: 2025-01-17 12:10:14.027 [INFO][5498] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Jan 17 12:10:14.048465 containerd[1550]: 2025-01-17 12:10:14.027 [INFO][5498] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" iface="eth0" netns="" Jan 17 12:10:14.048465 containerd[1550]: 2025-01-17 12:10:14.027 [INFO][5498] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Jan 17 12:10:14.048465 containerd[1550]: 2025-01-17 12:10:14.027 [INFO][5498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Jan 17 12:10:14.048465 containerd[1550]: 2025-01-17 12:10:14.042 [INFO][5504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" HandleID="k8s-pod-network.674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Workload="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:14.048465 containerd[1550]: 2025-01-17 12:10:14.042 [INFO][5504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:10:14.048465 containerd[1550]: 2025-01-17 12:10:14.042 [INFO][5504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:10:14.048465 containerd[1550]: 2025-01-17 12:10:14.045 [WARNING][5504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" HandleID="k8s-pod-network.674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Workload="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:14.048465 containerd[1550]: 2025-01-17 12:10:14.045 [INFO][5504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" HandleID="k8s-pod-network.674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Workload="localhost-k8s-csi--node--driver--w95gg-eth0" Jan 17 12:10:14.048465 containerd[1550]: 2025-01-17 12:10:14.046 [INFO][5504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:10:14.048465 containerd[1550]: 2025-01-17 12:10:14.047 [INFO][5498] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76" Jan 17 12:10:14.048745 containerd[1550]: time="2025-01-17T12:10:14.048483833Z" level=info msg="TearDown network for sandbox \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\" successfully" Jan 17 12:10:14.049701 containerd[1550]: time="2025-01-17T12:10:14.049685691Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:10:14.049787 containerd[1550]: time="2025-01-17T12:10:14.049715803Z" level=info msg="RemovePodSandbox \"674855c6b6ab34387ad19a7d55763658b86248005fb00d4fd7a34ad9d48b4d76\" returns successfully" Jan 17 12:10:14.313930 containerd[1550]: time="2025-01-17T12:10:14.313896295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:14.315621 containerd[1550]: time="2025-01-17T12:10:14.314700536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:10:14.315621 containerd[1550]: time="2025-01-17T12:10:14.315036032Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:14.317549 containerd[1550]: time="2025-01-17T12:10:14.317526971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:14.318295 containerd[1550]: time="2025-01-17T12:10:14.318265063Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.112618871s" Jan 17 12:10:14.318332 containerd[1550]: time="2025-01-17T12:10:14.318307130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:10:14.325819 containerd[1550]: 
time="2025-01-17T12:10:14.325790013Z" level=info msg="CreateContainer within sandbox \"d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:10:14.336596 containerd[1550]: time="2025-01-17T12:10:14.336545791Z" level=info msg="CreateContainer within sandbox \"d7e5075efcbab9f607137273390b167e282cc5dbf158c5de1fc3353556f84ba2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b344a24971565eb65b6365f51ddaf6596261bec33a19ddf5764dd56ed2dd3a1c\"" Jan 17 12:10:14.337945 containerd[1550]: time="2025-01-17T12:10:14.337394912Z" level=info msg="StartContainer for \"b344a24971565eb65b6365f51ddaf6596261bec33a19ddf5764dd56ed2dd3a1c\"" Jan 17 12:10:14.363699 systemd[1]: run-containerd-runc-k8s.io-b344a24971565eb65b6365f51ddaf6596261bec33a19ddf5764dd56ed2dd3a1c-runc.mqbakN.mount: Deactivated successfully. Jan 17 12:10:14.370383 systemd[1]: Started cri-containerd-b344a24971565eb65b6365f51ddaf6596261bec33a19ddf5764dd56ed2dd3a1c.scope - libcontainer container b344a24971565eb65b6365f51ddaf6596261bec33a19ddf5764dd56ed2dd3a1c. 
Jan 17 12:10:14.386872 containerd[1550]: time="2025-01-17T12:10:14.386852452Z" level=info msg="StartContainer for \"b344a24971565eb65b6365f51ddaf6596261bec33a19ddf5764dd56ed2dd3a1c\" returns successfully" Jan 17 12:10:14.615051 kubelet[2790]: I0117 12:10:14.614958 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w95gg" podStartSLOduration=25.883346732 podStartE2EDuration="38.614942447s" podCreationTimestamp="2025-01-17 12:09:36 +0000 UTC" firstStartedPulling="2025-01-17 12:10:01.587105457 +0000 UTC m=+48.478020362" lastFinishedPulling="2025-01-17 12:10:14.318701171 +0000 UTC m=+61.209616077" observedRunningTime="2025-01-17 12:10:14.61477941 +0000 UTC m=+61.505694323" watchObservedRunningTime="2025-01-17 12:10:14.614942447 +0000 UTC m=+61.505857358" Jan 17 12:10:15.138154 kubelet[2790]: I0117 12:10:15.137965 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:10:15.452215 kubelet[2790]: I0117 12:10:15.452077 2790 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:10:15.459228 kubelet[2790]: I0117 12:10:15.459180 2790 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:10:26.292913 systemd[1]: Started sshd@7-139.178.70.108:22-147.75.109.163:42830.service - OpenSSH per-connection server daemon (147.75.109.163:42830). Jan 17 12:10:26.425523 sshd[5586]: Accepted publickey for core from 147.75.109.163 port 42830 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:10:26.427465 sshd[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:26.430885 systemd-logind[1532]: New session 10 of user core. Jan 17 12:10:26.441370 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 17 12:10:26.947181 sshd[5586]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:26.949454 systemd[1]: sshd@7-139.178.70.108:22-147.75.109.163:42830.service: Deactivated successfully. Jan 17 12:10:26.951458 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:10:26.952659 systemd-logind[1532]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:10:26.953506 systemd-logind[1532]: Removed session 10. Jan 17 12:10:31.958142 systemd[1]: Started sshd@8-139.178.70.108:22-147.75.109.163:55210.service - OpenSSH per-connection server daemon (147.75.109.163:55210). Jan 17 12:10:32.017238 sshd[5605]: Accepted publickey for core from 147.75.109.163 port 55210 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:10:32.018314 sshd[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:32.021157 systemd-logind[1532]: New session 11 of user core. Jan 17 12:10:32.026379 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:10:32.164819 sshd[5605]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:32.166688 systemd[1]: sshd@8-139.178.70.108:22-147.75.109.163:55210.service: Deactivated successfully. Jan 17 12:10:32.167920 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:10:32.169103 systemd-logind[1532]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:10:32.169716 systemd-logind[1532]: Removed session 11. Jan 17 12:10:37.173633 systemd[1]: Started sshd@9-139.178.70.108:22-147.75.109.163:55224.service - OpenSSH per-connection server daemon (147.75.109.163:55224). Jan 17 12:10:37.280577 sshd[5619]: Accepted publickey for core from 147.75.109.163 port 55224 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:10:37.281443 sshd[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:37.284181 systemd-logind[1532]: New session 12 of user core. 
Jan 17 12:10:37.289386 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:10:37.429519 sshd[5619]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:37.434850 systemd[1]: sshd@9-139.178.70.108:22-147.75.109.163:55224.service: Deactivated successfully. Jan 17 12:10:37.435950 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:10:37.436846 systemd-logind[1532]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:10:37.441761 systemd[1]: Started sshd@10-139.178.70.108:22-147.75.109.163:51418.service - OpenSSH per-connection server daemon (147.75.109.163:51418). Jan 17 12:10:37.442999 systemd-logind[1532]: Removed session 12. Jan 17 12:10:37.615802 sshd[5632]: Accepted publickey for core from 147.75.109.163 port 51418 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:10:37.616872 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:37.619710 systemd-logind[1532]: New session 13 of user core. Jan 17 12:10:37.625378 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:10:37.858260 sshd[5632]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:37.868828 systemd[1]: sshd@10-139.178.70.108:22-147.75.109.163:51418.service: Deactivated successfully. Jan 17 12:10:37.870914 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:10:37.872272 systemd-logind[1532]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:10:37.880657 systemd[1]: Started sshd@11-139.178.70.108:22-147.75.109.163:51426.service - OpenSSH per-connection server daemon (147.75.109.163:51426). Jan 17 12:10:37.884273 systemd-logind[1532]: Removed session 13. 
Jan 17 12:10:38.011339 sshd[5643]: Accepted publickey for core from 147.75.109.163 port 51426 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:10:38.012586 sshd[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:38.016358 systemd-logind[1532]: New session 14 of user core. Jan 17 12:10:38.021382 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:10:38.108244 sshd[5643]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:38.110575 systemd[1]: sshd@11-139.178.70.108:22-147.75.109.163:51426.service: Deactivated successfully. Jan 17 12:10:38.111887 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:10:38.112722 systemd-logind[1532]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:10:38.113243 systemd-logind[1532]: Removed session 14. Jan 17 12:10:42.630139 kubelet[2790]: I0117 12:10:42.528120 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:10:43.118059 systemd[1]: Started sshd@12-139.178.70.108:22-147.75.109.163:51430.service - OpenSSH per-connection server daemon (147.75.109.163:51430). Jan 17 12:10:43.480905 sshd[5682]: Accepted publickey for core from 147.75.109.163 port 51430 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:10:43.484371 sshd[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:43.487662 systemd-logind[1532]: New session 15 of user core. Jan 17 12:10:43.495373 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:10:43.598369 sshd[5682]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:43.600865 systemd-logind[1532]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:10:43.600991 systemd[1]: sshd@12-139.178.70.108:22-147.75.109.163:51430.service: Deactivated successfully. Jan 17 12:10:43.603605 systemd[1]: session-15.scope: Deactivated successfully. 
Jan 17 12:10:43.605830 systemd-logind[1532]: Removed session 15. Jan 17 12:10:47.371072 systemd[1]: run-containerd-runc-k8s.io-6f4a7d40f648a73b83c991e28c04901cf5332790ae9e3fd7c0ffa17231c468d4-runc.9MDjGX.mount: Deactivated successfully. Jan 17 12:10:48.609187 systemd[1]: Started sshd@13-139.178.70.108:22-147.75.109.163:33344.service - OpenSSH per-connection server daemon (147.75.109.163:33344). Jan 17 12:10:49.213378 sshd[5744]: Accepted publickey for core from 147.75.109.163 port 33344 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:10:49.215233 sshd[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:49.221417 systemd-logind[1532]: New session 16 of user core. Jan 17 12:10:49.225465 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:10:49.452786 sshd[5744]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:49.454833 systemd[1]: sshd@13-139.178.70.108:22-147.75.109.163:33344.service: Deactivated successfully. Jan 17 12:10:49.455997 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:10:49.457126 systemd-logind[1532]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:10:49.457855 systemd-logind[1532]: Removed session 16. Jan 17 12:10:54.460055 systemd[1]: Started sshd@14-139.178.70.108:22-147.75.109.163:33354.service - OpenSSH per-connection server daemon (147.75.109.163:33354). Jan 17 12:10:54.671807 sshd[5761]: Accepted publickey for core from 147.75.109.163 port 33354 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:10:54.691782 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:54.695241 systemd-logind[1532]: New session 17 of user core. Jan 17 12:10:54.701477 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 17 12:10:54.928480 sshd[5761]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:54.941250 systemd[1]: sshd@14-139.178.70.108:22-147.75.109.163:33354.service: Deactivated successfully. Jan 17 12:10:54.942376 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:10:54.942792 systemd-logind[1532]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:10:54.943441 systemd-logind[1532]: Removed session 17. Jan 17 12:10:59.938020 systemd[1]: Started sshd@15-139.178.70.108:22-147.75.109.163:46956.service - OpenSSH per-connection server daemon (147.75.109.163:46956). Jan 17 12:11:00.010138 sshd[5794]: Accepted publickey for core from 147.75.109.163 port 46956 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:11:00.011322 sshd[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:00.015044 systemd-logind[1532]: New session 18 of user core. Jan 17 12:11:00.018373 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:11:00.155313 sshd[5794]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:00.161960 systemd[1]: sshd@15-139.178.70.108:22-147.75.109.163:46956.service: Deactivated successfully. Jan 17 12:11:00.163315 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:11:00.164805 systemd-logind[1532]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:11:00.168595 systemd[1]: Started sshd@16-139.178.70.108:22-147.75.109.163:46958.service - OpenSSH per-connection server daemon (147.75.109.163:46958). Jan 17 12:11:00.169869 systemd-logind[1532]: Removed session 18. Jan 17 12:11:00.199100 sshd[5807]: Accepted publickey for core from 147.75.109.163 port 46958 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:11:00.200019 sshd[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:00.202412 systemd-logind[1532]: New session 19 of user core. 
Jan 17 12:11:00.210388 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:11:00.850424 sshd[5807]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:00.867513 systemd[1]: Started sshd@17-139.178.70.108:22-147.75.109.163:46974.service - OpenSSH per-connection server daemon (147.75.109.163:46974). Jan 17 12:11:00.867831 systemd[1]: sshd@16-139.178.70.108:22-147.75.109.163:46958.service: Deactivated successfully. Jan 17 12:11:00.869585 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:11:00.870910 systemd-logind[1532]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:11:00.872227 systemd-logind[1532]: Removed session 19. Jan 17 12:11:00.983808 sshd[5816]: Accepted publickey for core from 147.75.109.163 port 46974 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:11:00.991156 sshd[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:00.995362 systemd-logind[1532]: New session 20 of user core. Jan 17 12:11:00.999390 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:11:02.548604 sshd[5816]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:02.549073 systemd[1]: Started sshd@18-139.178.70.108:22-147.75.109.163:46988.service - OpenSSH per-connection server daemon (147.75.109.163:46988). Jan 17 12:11:02.563020 systemd[1]: sshd@17-139.178.70.108:22-147.75.109.163:46974.service: Deactivated successfully. Jan 17 12:11:02.564556 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:11:02.572903 systemd-logind[1532]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:11:02.573908 systemd-logind[1532]: Removed session 20. 
Jan 17 12:11:02.661832 sshd[5840]: Accepted publickey for core from 147.75.109.163 port 46988 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:11:02.665846 sshd[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:02.669008 systemd-logind[1532]: New session 21 of user core. Jan 17 12:11:02.676533 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:11:03.380658 sshd[5840]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:03.388391 systemd[1]: sshd@18-139.178.70.108:22-147.75.109.163:46988.service: Deactivated successfully. Jan 17 12:11:03.389591 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:11:03.390747 systemd-logind[1532]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:11:03.400155 systemd[1]: Started sshd@19-139.178.70.108:22-147.75.109.163:47004.service - OpenSSH per-connection server daemon (147.75.109.163:47004). Jan 17 12:11:03.403718 systemd-logind[1532]: Removed session 21. Jan 17 12:11:03.435274 sshd[5855]: Accepted publickey for core from 147.75.109.163 port 47004 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:11:03.436482 sshd[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:03.439445 systemd-logind[1532]: New session 22 of user core. Jan 17 12:11:03.446440 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:11:03.583198 sshd[5855]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:03.585389 systemd[1]: sshd@19-139.178.70.108:22-147.75.109.163:47004.service: Deactivated successfully. Jan 17 12:11:03.586607 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:11:03.586996 systemd-logind[1532]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:11:03.587549 systemd-logind[1532]: Removed session 22. 
Jan 17 12:11:08.593914 systemd[1]: Started sshd@20-139.178.70.108:22-147.75.109.163:36712.service - OpenSSH per-connection server daemon (147.75.109.163:36712). Jan 17 12:11:08.644079 sshd[5872]: Accepted publickey for core from 147.75.109.163 port 36712 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:11:08.644975 sshd[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:08.647721 systemd-logind[1532]: New session 23 of user core. Jan 17 12:11:08.652418 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:11:08.738435 sshd[5872]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:08.739974 systemd-logind[1532]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:11:08.740842 systemd[1]: sshd@20-139.178.70.108:22-147.75.109.163:36712.service: Deactivated successfully. Jan 17 12:11:08.741960 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:11:08.742776 systemd-logind[1532]: Removed session 23. Jan 17 12:11:13.748420 systemd[1]: Started sshd@21-139.178.70.108:22-147.75.109.163:36716.service - OpenSSH per-connection server daemon (147.75.109.163:36716). Jan 17 12:11:13.971247 sshd[5905]: Accepted publickey for core from 147.75.109.163 port 36716 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:11:13.973409 sshd[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:13.977771 systemd-logind[1532]: New session 24 of user core. Jan 17 12:11:13.980450 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:11:14.148440 sshd[5905]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:14.150571 systemd-logind[1532]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:11:14.150917 systemd[1]: sshd@21-139.178.70.108:22-147.75.109.163:36716.service: Deactivated successfully. 
Jan 17 12:11:14.152279 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:11:14.153745 systemd-logind[1532]: Removed session 24. Jan 17 12:11:19.155995 systemd[1]: Started sshd@22-139.178.70.108:22-147.75.109.163:44296.service - OpenSSH per-connection server daemon (147.75.109.163:44296). Jan 17 12:11:19.378134 sshd[5944]: Accepted publickey for core from 147.75.109.163 port 44296 ssh2: RSA SHA256:d86Zfld7pfipwDMCy9Zh9gJz3C7zt8CsQJU6anwQyxc Jan 17 12:11:19.378940 sshd[5944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:19.382240 systemd-logind[1532]: New session 25 of user core. Jan 17 12:11:19.387401 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:11:19.847524 sshd[5944]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:19.851409 systemd[1]: sshd@22-139.178.70.108:22-147.75.109.163:44296.service: Deactivated successfully. Jan 17 12:11:19.852555 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:11:19.853647 systemd-logind[1532]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:11:19.854319 systemd-logind[1532]: Removed session 25.