Dec 13 01:54:01.734209 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:54:01.734225 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:54:01.734232 kernel: Disabled fast string operations
Dec 13 01:54:01.734236 kernel: BIOS-provided physical RAM map:
Dec 13 01:54:01.734240 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Dec 13 01:54:01.734244 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Dec 13 01:54:01.734250 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Dec 13 01:54:01.734254 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Dec 13 01:54:01.734258 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Dec 13 01:54:01.734263 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Dec 13 01:54:01.734267 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Dec 13 01:54:01.734271 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Dec 13 01:54:01.734275 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Dec 13 01:54:01.734279 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 01:54:01.734286 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Dec 13 01:54:01.734291 kernel: NX (Execute Disable) protection: active
Dec 13 01:54:01.734295 kernel: APIC: Static calls initialized
Dec 13 01:54:01.734300 kernel: SMBIOS 2.7 present.
Dec 13 01:54:01.734305 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Dec 13 01:54:01.734310 kernel: vmware: hypercall mode: 0x00
Dec 13 01:54:01.734315 kernel: Hypervisor detected: VMware
Dec 13 01:54:01.734319 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Dec 13 01:54:01.734325 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Dec 13 01:54:01.734337 kernel: vmware: using clock offset of 2525555316 ns
Dec 13 01:54:01.734342 kernel: tsc: Detected 3408.000 MHz processor
Dec 13 01:54:01.734347 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:54:01.734352 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:54:01.734357 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Dec 13 01:54:01.734362 kernel: total RAM covered: 3072M
Dec 13 01:54:01.734367 kernel: Found optimal setting for mtrr clean up
Dec 13 01:54:01.734372 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Dec 13 01:54:01.734379 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Dec 13 01:54:01.734383 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:54:01.734388 kernel: Using GB pages for direct mapping
Dec 13 01:54:01.734393 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:54:01.734398 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Dec 13 01:54:01.734403 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Dec 13 01:54:01.734408 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Dec 13 01:54:01.734412 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Dec 13 01:54:01.734417 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Dec 13 01:54:01.734425 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Dec 13 01:54:01.734430 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Dec 13 01:54:01.734435 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Dec 13 01:54:01.734440 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Dec 13 01:54:01.734446 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Dec 13 01:54:01.734452 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Dec 13 01:54:01.734457 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Dec 13 01:54:01.734462 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Dec 13 01:54:01.734467 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Dec 13 01:54:01.734472 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Dec 13 01:54:01.734477 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Dec 13 01:54:01.734483 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Dec 13 01:54:01.734488 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Dec 13 01:54:01.734493 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Dec 13 01:54:01.734498 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Dec 13 01:54:01.734504 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Dec 13 01:54:01.734509 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Dec 13 01:54:01.734514 kernel: system APIC only can use physical flat
Dec 13 01:54:01.734519 kernel: APIC: Switched APIC routing to: physical flat
Dec 13 01:54:01.734524 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:54:01.734529 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 01:54:01.734534 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 01:54:01.734539 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 01:54:01.734544 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 01:54:01.734550 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 01:54:01.734556 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 01:54:01.734561 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 01:54:01.734566 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Dec 13 01:54:01.734571 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Dec 13 01:54:01.734576 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Dec 13 01:54:01.734581 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Dec 13 01:54:01.734586 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Dec 13 01:54:01.734591 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Dec 13 01:54:01.734596 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Dec 13 01:54:01.734601 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Dec 13 01:54:01.734607 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Dec 13 01:54:01.734612 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Dec 13 01:54:01.734617 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Dec 13 01:54:01.734622 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Dec 13 01:54:01.734627 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Dec 13 01:54:01.734632 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Dec 13 01:54:01.734637 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Dec 13 01:54:01.734642 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Dec 13 01:54:01.734647 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Dec 13 01:54:01.734652 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Dec 13 01:54:01.734658 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Dec 13 01:54:01.734663 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Dec 13 01:54:01.734668 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Dec 13 01:54:01.734673 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Dec 13 01:54:01.734678 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Dec 13 01:54:01.734683 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Dec 13 01:54:01.734688 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Dec 13 01:54:01.734693 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Dec 13 01:54:01.734698 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Dec 13 01:54:01.734703 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Dec 13 01:54:01.734709 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Dec 13 01:54:01.734715 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Dec 13 01:54:01.734720 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Dec 13 01:54:01.734725 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Dec 13 01:54:01.734730 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Dec 13 01:54:01.734735 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Dec 13 01:54:01.734740 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Dec 13 01:54:01.734745 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Dec 13 01:54:01.734750 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Dec 13 01:54:01.734755 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Dec 13 01:54:01.734761 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Dec 13 01:54:01.734766 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Dec 13 01:54:01.734771 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Dec 13 01:54:01.734776 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Dec 13 01:54:01.734781 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Dec 13 01:54:01.734786 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Dec 13 01:54:01.734791 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Dec 13 01:54:01.734796 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Dec 13 01:54:01.734801 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Dec 13 01:54:01.734806 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Dec 13 01:54:01.734812 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Dec 13 01:54:01.734817 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Dec 13 01:54:01.734822 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Dec 13 01:54:01.734831 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Dec 13 01:54:01.734837 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Dec 13 01:54:01.734843 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Dec 13 01:54:01.734848 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Dec 13 01:54:01.734853 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Dec 13 01:54:01.734859 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Dec 13 01:54:01.734865 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Dec 13 01:54:01.734870 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Dec 13 01:54:01.734876 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Dec 13 01:54:01.734881 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Dec 13 01:54:01.734886 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Dec 13 01:54:01.734892 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Dec 13 01:54:01.734897 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Dec 13 01:54:01.734903 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Dec 13 01:54:01.734908 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Dec 13 01:54:01.734913 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Dec 13 01:54:01.734920 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Dec 13 01:54:01.734925 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Dec 13 01:54:01.734931 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Dec 13 01:54:01.734936 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Dec 13 01:54:01.734941 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Dec 13 01:54:01.734947 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Dec 13 01:54:01.734952 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Dec 13 01:54:01.734957 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Dec 13 01:54:01.734963 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Dec 13 01:54:01.734968 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Dec 13 01:54:01.734975 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Dec 13 01:54:01.734980 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Dec 13 01:54:01.734985 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Dec 13 01:54:01.734991 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Dec 13 01:54:01.734996 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Dec 13 01:54:01.735001 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Dec 13 01:54:01.735007 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Dec 13 01:54:01.735012 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Dec 13 01:54:01.735017 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Dec 13 01:54:01.735022 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Dec 13 01:54:01.735028 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Dec 13 01:54:01.735034 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Dec 13 01:54:01.735040 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Dec 13 01:54:01.735045 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Dec 13 01:54:01.735050 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Dec 13 01:54:01.735056 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Dec 13 01:54:01.735061 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Dec 13 01:54:01.735066 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Dec 13 01:54:01.735072 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Dec 13 01:54:01.735077 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Dec 13 01:54:01.735082 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Dec 13 01:54:01.735089 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Dec 13 01:54:01.735094 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Dec 13 01:54:01.735099 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Dec 13 01:54:01.735105 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Dec 13 01:54:01.735110 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Dec 13 01:54:01.735115 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Dec 13 01:54:01.735121 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Dec 13 01:54:01.735126 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Dec 13 01:54:01.735132 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Dec 13 01:54:01.735137 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Dec 13 01:54:01.735143 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Dec 13 01:54:01.735149 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Dec 13 01:54:01.735154 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Dec 13 01:54:01.735159 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Dec 13 01:54:01.735398 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Dec 13 01:54:01.735405 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Dec 13 01:54:01.735410 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Dec 13 01:54:01.735415 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Dec 13 01:54:01.735421 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Dec 13 01:54:01.735426 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Dec 13 01:54:01.735434 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Dec 13 01:54:01.735439 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Dec 13 01:54:01.735445 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 01:54:01.735450 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 01:54:01.735456 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Dec 13 01:54:01.735461 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Dec 13 01:54:01.735467 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Dec 13 01:54:01.735472 kernel: Zone ranges:
Dec 13 01:54:01.735478 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:54:01.735483 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Dec 13 01:54:01.735490 kernel: Normal empty
Dec 13 01:54:01.735495 kernel: Movable zone start for each node
Dec 13 01:54:01.735501 kernel: Early memory node ranges
Dec 13 01:54:01.735506 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Dec 13 01:54:01.735512 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Dec 13 01:54:01.735517 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Dec 13 01:54:01.735522 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Dec 13 01:54:01.735528 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:54:01.735533 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Dec 13 01:54:01.735540 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Dec 13 01:54:01.735546 kernel: ACPI: PM-Timer IO Port: 0x1008
Dec 13 01:54:01.735551 kernel: system APIC only can use physical flat
Dec 13 01:54:01.735556 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Dec 13 01:54:01.735562 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 01:54:01.735568 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 01:54:01.735573 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 01:54:01.735578 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 01:54:01.735584 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 01:54:01.735589 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 01:54:01.735596 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 01:54:01.735601 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 01:54:01.735607 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 01:54:01.735612 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 01:54:01.735618 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 01:54:01.735623 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 01:54:01.735628 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 01:54:01.735634 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 01:54:01.735639 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 01:54:01.735645 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 01:54:01.735651 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Dec 13 01:54:01.735656 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Dec 13 01:54:01.735662 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Dec 13 01:54:01.735667 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Dec 13 01:54:01.735672 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Dec 13 01:54:01.735678 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Dec 13 01:54:01.735683 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Dec 13 01:54:01.735689 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Dec 13 01:54:01.735694 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Dec 13 01:54:01.735701 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Dec 13 01:54:01.735706 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Dec 13 01:54:01.735711 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Dec 13 01:54:01.735717 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Dec 13 01:54:01.735722 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Dec 13 01:54:01.735728 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Dec 13 01:54:01.735733 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Dec 13 01:54:01.735739 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Dec 13 01:54:01.735744 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Dec 13 01:54:01.735749 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Dec 13 01:54:01.735756 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Dec 13 01:54:01.735761 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Dec 13 01:54:01.735766 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Dec 13 01:54:01.735772 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Dec 13 01:54:01.735777 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Dec 13 01:54:01.735783 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Dec 13 01:54:01.735788 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Dec 13 01:54:01.735793 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Dec 13 01:54:01.735799 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Dec 13 01:54:01.735805 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Dec 13 01:54:01.735811 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Dec 13 01:54:01.735819 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Dec 13 01:54:01.735825 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Dec 13 01:54:01.735830 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Dec 13 01:54:01.735836 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Dec 13 01:54:01.735841 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Dec 13 01:54:01.735846 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Dec 13 01:54:01.735852 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Dec 13 01:54:01.735857 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Dec 13 01:54:01.735864 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Dec 13 01:54:01.735869 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Dec 13 01:54:01.735875 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Dec 13 01:54:01.735880 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Dec 13 01:54:01.735885 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Dec 13 01:54:01.735891 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Dec 13 01:54:01.735896 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Dec 13 01:54:01.735902 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Dec 13 01:54:01.735907 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Dec 13 01:54:01.735912 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Dec 13 01:54:01.735919 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Dec 13 01:54:01.735924 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Dec 13 01:54:01.735930 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Dec 13 01:54:01.735935 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Dec 13 01:54:01.735940 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Dec 13 01:54:01.735946 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Dec 13 01:54:01.735951 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Dec 13 01:54:01.735957 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Dec 13 01:54:01.735962 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Dec 13 01:54:01.735968 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Dec 13 01:54:01.735974 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Dec 13 01:54:01.735979 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Dec 13 01:54:01.735985 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Dec 13 01:54:01.735990 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Dec 13 01:54:01.735996 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Dec 13 01:54:01.736001 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Dec 13 01:54:01.736006 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Dec 13 01:54:01.736012 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Dec 13 01:54:01.736017 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Dec 13 01:54:01.740352 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Dec 13 01:54:01.740362 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Dec 13 01:54:01.740368 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Dec 13 01:54:01.740374 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Dec 13 01:54:01.740379 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Dec 13 01:54:01.740385 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Dec 13 01:54:01.740390 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Dec 13 01:54:01.740396 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Dec 13 01:54:01.740401 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Dec 13 01:54:01.740407 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Dec 13 01:54:01.740414 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Dec 13 01:54:01.740420 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Dec 13 01:54:01.740425 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Dec 13 01:54:01.740431 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Dec 13 01:54:01.740436 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Dec 13 01:54:01.740442 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Dec 13 01:54:01.740447 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Dec 13 01:54:01.740453 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Dec 13 01:54:01.740458 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Dec 13 01:54:01.740464 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Dec 13 01:54:01.740471 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Dec 13 01:54:01.740476 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Dec 13 01:54:01.740482 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Dec 13 01:54:01.740487 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Dec 13 01:54:01.740492 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Dec 13 01:54:01.740498 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Dec 13 01:54:01.740504 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Dec 13 01:54:01.740509 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Dec 13 01:54:01.740515 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Dec 13 01:54:01.740521 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Dec 13 01:54:01.740527 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Dec 13 01:54:01.740532 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Dec 13 01:54:01.740538 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Dec 13 01:54:01.740543 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Dec 13 01:54:01.740549 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Dec 13 01:54:01.740554 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Dec 13 01:54:01.740560 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Dec 13 01:54:01.740565 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Dec 13 01:54:01.740571 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Dec 13 01:54:01.740578 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Dec 13 01:54:01.740584 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Dec 13 01:54:01.740589 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Dec 13 01:54:01.740595 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Dec 13 01:54:01.740600 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Dec 13 01:54:01.740606 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:54:01.740611 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Dec 13 01:54:01.740617 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:54:01.740623 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Dec 13 01:54:01.740629 kernel: TSC deadline timer available
Dec 13 01:54:01.740635 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Dec 13 01:54:01.740641 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Dec 13 01:54:01.740646 kernel: Booting paravirtualized kernel on VMware hypervisor
Dec 13 01:54:01.740652 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:54:01.740658 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Dec 13 01:54:01.740664 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Dec 13 01:54:01.740669 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Dec 13 01:54:01.740675 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Dec 13 01:54:01.740682 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Dec 13 01:54:01.740687 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Dec 13 01:54:01.740693 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Dec 13 01:54:01.740698 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Dec 13 01:54:01.740711 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Dec 13 01:54:01.740718 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Dec 13 01:54:01.740724 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Dec 13 01:54:01.740731 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Dec 13 01:54:01.740737 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Dec 13 01:54:01.740744 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Dec 13 01:54:01.740749 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Dec 13 01:54:01.740755 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Dec 13 01:54:01.740761 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Dec 13 01:54:01.740767 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Dec 13 01:54:01.740772 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Dec 13 01:54:01.740779 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:54:01.740785 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:54:01.740793 kernel: random: crng init done
Dec 13 01:54:01.740798 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Dec 13 01:54:01.740804 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Dec 13 01:54:01.740810 kernel: printk: log_buf_len min size: 262144 bytes
Dec 13 01:54:01.740816 kernel: printk: log_buf_len: 1048576 bytes
Dec 13 01:54:01.740822 kernel: printk: early log buf free: 239648(91%)
Dec 13 01:54:01.740828 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:54:01.740834 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:54:01.740840 kernel: Fallback order for Node 0: 0
Dec 13 01:54:01.740847 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Dec 13 01:54:01.740853 kernel: Policy zone: DMA32
Dec 13 01:54:01.740859 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:54:01.740865 kernel: Memory: 1936372K/2096628K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 159996K reserved, 0K cma-reserved)
Dec 13 01:54:01.740872 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Dec 13 01:54:01.740879 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:54:01.740885 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:54:01.740891 kernel: Dynamic Preempt: voluntary
Dec 13 01:54:01.740897 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:54:01.740903 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:54:01.740909 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Dec 13 01:54:01.740915 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:54:01.740921 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:54:01.740927 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:54:01.740933 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:54:01.740940 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Dec 13 01:54:01.740946 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Dec 13 01:54:01.740952 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Dec 13 01:54:01.740958 kernel: Console: colour VGA+ 80x25
Dec 13 01:54:01.740964 kernel: printk: console [tty0] enabled
Dec 13 01:54:01.740970 kernel: printk: console [ttyS0] enabled
Dec 13 01:54:01.740976 kernel: ACPI: Core revision 20230628
Dec 13 01:54:01.740982 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Dec 13 01:54:01.740988 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:54:01.740995 kernel: x2apic enabled
Dec 13 01:54:01.741001 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:54:01.741007 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:54:01.741013 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Dec 13 01:54:01.741019 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Dec 13 01:54:01.741025 kernel: Disabled fast string operations
Dec 13 01:54:01.741031 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 01:54:01.741037 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 01:54:01.741043 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:54:01.741050 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 01:54:01.741056 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 01:54:01.741061 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 01:54:01.741067 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:54:01.741073 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 01:54:01.741080 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 01:54:01.741085 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:54:01.741091 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:54:01.741097 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:54:01.741105 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 01:54:01.741111 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 01:54:01.741116 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:54:01.741122 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:54:01.741128 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:54:01.741134 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:54:01.741141 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:54:01.741148 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:54:01.741153 kernel: pid_max: default: 131072 minimum: 1024
Dec 13 01:54:01.741161 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:54:01.741167 kernel: landlock: Up and running.
Dec 13 01:54:01.741173 kernel: SELinux: Initializing.
Dec 13 01:54:01.741179 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:54:01.741185 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:54:01.741191 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Dec 13 01:54:01.741197 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Dec 13 01:54:01.741203 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Dec 13 01:54:01.741210 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Dec 13 01:54:01.741216 kernel: Performance Events: Skylake events, core PMU driver.
Dec 13 01:54:01.741222 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Dec 13 01:54:01.741228 kernel: core: CPUID marked event: 'instructions' unavailable
Dec 13 01:54:01.741234 kernel: core: CPUID marked event: 'bus cycles' unavailable
Dec 13 01:54:01.741240 kernel: core: CPUID marked event: 'cache references' unavailable
Dec 13 01:54:01.741246 kernel: core: CPUID marked event: 'cache misses' unavailable
Dec 13 01:54:01.741251 kernel: core: CPUID marked event: 'branch instructions' unavailable
Dec 13 01:54:01.741257 kernel: core: CPUID marked event: 'branch misses' unavailable
Dec 13 01:54:01.741264 kernel: ... version: 1
Dec 13 01:54:01.741270 kernel: ... bit width: 48
Dec 13 01:54:01.741276 kernel: ... generic registers: 4
Dec 13 01:54:01.741282 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:54:01.741287 kernel: ... max period: 000000007fffffff
Dec 13 01:54:01.741293 kernel: ... fixed-purpose events: 0
Dec 13 01:54:01.741299 kernel: ... event mask: 000000000000000f
Dec 13 01:54:01.741305 kernel: signal: max sigframe size: 1776
Dec 13 01:54:01.741311 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:54:01.741318 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:54:01.741324 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 01:54:01.741367 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:54:01.741374 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:54:01.741380 kernel: .... node #0, CPUs: #1
Dec 13 01:54:01.741386 kernel: Disabled fast string operations
Dec 13 01:54:01.741391 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
Dec 13 01:54:01.741397 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 01:54:01.741403 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:54:01.741409 kernel: smpboot: Max logical packages: 128
Dec 13 01:54:01.741417 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Dec 13 01:54:01.741423 kernel: devtmpfs: initialized
Dec 13 01:54:01.741429 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:54:01.741435 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Dec 13 01:54:01.741441 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:54:01.741447 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Dec 13 01:54:01.741453 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:54:01.741459 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:54:01.741465 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:54:01.741472 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:54:01.741478 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:54:01.741484 kernel: audit: type=2000 audit(1734054840.081:1): state=initialized audit_enabled=0 res=1
Dec 13 01:54:01.741489 kernel: cpuidle: using governor menu
Dec 13 01:54:01.741496 kernel: Simple Boot Flag at 0x36 set to 0x80
Dec 13 01:54:01.741502 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:54:01.741508 kernel: dca service started, version 1.12.1
Dec 13 01:54:01.741514 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Dec 13 01:54:01.741520 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:54:01.741527 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:54:01.741533 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:54:01.741540 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:54:01.741546 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:54:01.741552 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:54:01.741558 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:54:01.741563 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:54:01.741569 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:54:01.741575 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:54:01.741582 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:54:01.741588 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Dec 13 01:54:01.741594 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:54:01.741600 kernel: ACPI: Interpreter enabled
Dec 13 01:54:01.741606 kernel: ACPI: PM: (supports S0 S1 S5)
Dec 13 01:54:01.741612 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:54:01.741618 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:54:01.741624 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:54:01.741630 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Dec 13 01:54:01.741637 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Dec 13 01:54:01.741720 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:54:01.741776 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Dec 13 01:54:01.741829 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Dec 13 01:54:01.741838 kernel: PCI host bridge to bus 0000:00
Dec 13 01:54:01.741889 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:54:01.741937 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Dec 13 01:54:01.741981 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:54:01.742025 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:54:01.742069 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Dec 13 01:54:01.742112 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Dec 13 01:54:01.742173 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
Dec 13 01:54:01.742232 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
Dec 13 01:54:01.742290 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
Dec 13 01:54:01.742362 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
Dec 13 01:54:01.742426 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f]
Dec 13 01:54:01.742476 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 01:54:01.742526 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 01:54:01.742576 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 01:54:01.742628 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 01:54:01.742682 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
Dec 13 01:54:01.742732 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Dec 13 01:54:01.742781 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Dec 13 01:54:01.742835 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
Dec 13 01:54:01.742900 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf]
Dec 13 01:54:01.742949 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
Dec 13 01:54:01.743004 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
Dec 13 01:54:01.743054 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f]
Dec 13 01:54:01.743102 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
Dec 13 01:54:01.743150 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
Dec 13 01:54:01.743198 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Dec 13 01:54:01.743246 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:54:01.743298 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Dec 13 01:54:01.745361 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.745422 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.745481 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.745532 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.745585 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.745635 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.745691 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.745740 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.745793 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.745842 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.745895 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.745964 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.746036 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.746085 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.746137 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.746186 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.746238 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.746288 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.747520 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.747579 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.747634 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.747685 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.747738 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.747792 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.747849 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.747900 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.747953 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.748003 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.748056 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.748107 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.748197 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.748251 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.748305 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.749551 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.749611 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.749662 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.749721 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.749771 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.749824 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.749873 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.749926 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.749976 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.750033 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.750083 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.750137 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.750186 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.750239 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.750289 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.750354 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.750410 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.750462 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.750512 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.750567 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.750616 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.750668 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.750720 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.750774 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.750842 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.750912 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.750961 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.751014 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.751065 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.751117 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
Dec 13 01:54:01.751166 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.751216 kernel: pci_bus 0000:01: extended config space not accessible
Dec 13 01:54:01.751268 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 01:54:01.751318 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 01:54:01.753360 kernel: acpiphp: Slot [32] registered
Dec 13 01:54:01.753394 kernel: acpiphp: Slot [33] registered
Dec 13 01:54:01.753421 kernel: acpiphp: Slot [34] registered
Dec 13 01:54:01.753446 kernel: acpiphp: Slot [35] registered
Dec 13 01:54:01.753472 kernel: acpiphp: Slot [36] registered
Dec 13 01:54:01.753496 kernel: acpiphp: Slot [37] registered
Dec 13 01:54:01.753524 kernel: acpiphp: Slot [38] registered
Dec 13 01:54:01.753551 kernel: acpiphp: Slot [39] registered
Dec 13 01:54:01.753574 kernel: acpiphp: Slot [40] registered
Dec 13 01:54:01.753599 kernel: acpiphp: Slot [41] registered
Dec 13 01:54:01.753628 kernel: acpiphp: Slot [42] registered
Dec 13 01:54:01.753635 kernel: acpiphp: Slot [43] registered
Dec 13 01:54:01.753641 kernel: acpiphp: Slot [44] registered
Dec 13 01:54:01.753646 kernel: acpiphp: Slot [45] registered
Dec 13 01:54:01.753652 kernel: acpiphp: Slot [46] registered
Dec 13 01:54:01.753658 kernel: acpiphp: Slot [47] registered
Dec 13 01:54:01.753664 kernel: acpiphp: Slot [48] registered
Dec 13 01:54:01.753669 kernel: acpiphp: Slot [49] registered
Dec 13 01:54:01.753675 kernel: acpiphp: Slot [50] registered
Dec 13 01:54:01.753683 kernel: acpiphp: Slot [51] registered
Dec 13 01:54:01.753689 kernel: acpiphp: Slot [52] registered
Dec 13 01:54:01.753695 kernel: acpiphp: Slot [53] registered
Dec 13 01:54:01.753701 kernel: acpiphp: Slot [54] registered
Dec 13 01:54:01.753707 kernel: acpiphp: Slot [55] registered
Dec 13 01:54:01.753713 kernel: acpiphp: Slot [56] registered
Dec 13 01:54:01.753718 kernel: acpiphp: Slot [57] registered
Dec 13 01:54:01.753724 kernel: acpiphp: Slot [58] registered
Dec 13 01:54:01.753730 kernel: acpiphp: Slot [59] registered
Dec 13 01:54:01.753737 kernel: acpiphp: Slot [60] registered
Dec 13 01:54:01.753742 kernel: acpiphp: Slot [61] registered
Dec 13 01:54:01.753748 kernel: acpiphp: Slot [62] registered
Dec 13 01:54:01.753758 kernel: acpiphp: Slot [63] registered
Dec 13 01:54:01.753840 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Dec 13 01:54:01.753909 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Dec 13 01:54:01.753958 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Dec 13 01:54:01.754006 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Dec 13 01:54:01.754054 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Dec 13 01:54:01.754106 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
Dec 13 01:54:01.754154 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Dec 13 01:54:01.754203 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Dec 13 01:54:01.754251 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
Dec 13 01:54:01.754310 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
Dec 13 01:54:01.754534 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
Dec 13 01:54:01.754586 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
Dec 13 01:54:01.754639 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Dec 13 01:54:01.754689 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Dec 13 01:54:01.754738 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Dec 13 01:54:01.754788 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Dec 13 01:54:01.754840 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Dec 13 01:54:01.754889 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Dec 13 01:54:01.754966 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Dec 13 01:54:01.757706 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Dec 13 01:54:01.757759 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Dec 13 01:54:01.757809 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Dec 13 01:54:01.757860 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Dec 13 01:54:01.757909 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Dec 13 01:54:01.757957 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Dec 13 01:54:01.758005 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Dec 13 01:54:01.758055 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Dec 13 01:54:01.758107 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Dec 13 01:54:01.758155 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Dec 13 01:54:01.758204 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Dec 13 01:54:01.758253 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Dec 13 01:54:01.758301 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Dec 13 01:54:01.758376 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Dec 13 01:54:01.758428 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Dec 13 01:54:01.758477 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Dec 13 01:54:01.758526 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Dec 13 01:54:01.758575 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Dec 13 01:54:01.758623 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Dec 13 01:54:01.758673 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Dec 13 01:54:01.758725 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Dec 13 01:54:01.758774 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Dec 13 01:54:01.758855 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
Dec 13 01:54:01.758925 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
Dec 13 01:54:01.758975 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
Dec 13 01:54:01.759025 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
Dec 13 01:54:01.759075 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
Dec 13 01:54:01.759124 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Dec 13 01:54:01.759178 kernel: pci 0000:0b:00.0: supports D1 D2
Dec 13 01:54:01.759228 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:54:01.759279 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Dec 13 01:54:01.759347 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Dec 13 01:54:01.759400 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Dec 13 01:54:01.759449 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Dec 13 01:54:01.759498 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Dec 13 01:54:01.759550 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Dec 13 01:54:01.759599 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Dec 13 01:54:01.759647 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Dec 13 01:54:01.759696 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Dec 13 01:54:01.759745 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Dec 13 01:54:01.759794 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Dec 13 01:54:01.759843 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Dec 13 01:54:01.759905 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Dec 13 01:54:01.759972 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Dec 13 01:54:01.760023 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Dec 13 01:54:01.760072 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Dec 13 01:54:01.760121 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Dec 13 01:54:01.760170 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Dec 13 01:54:01.760219 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Dec 13 01:54:01.760268 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Dec 13 01:54:01.760316 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Dec 13 01:54:01.760399 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Dec 13 01:54:01.760448 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Dec 13 01:54:01.760496 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Dec 13 01:54:01.760545 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Dec 13 01:54:01.760594 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Dec 13 01:54:01.760642 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Dec 13 01:54:01.760691 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Dec 13 01:54:01.760738 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Dec 13 01:54:01.760790 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Dec 13 01:54:01.760837 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Dec 13 01:54:01.760887 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Dec 13 01:54:01.760936 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Dec 13 01:54:01.760984 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Dec 13 01:54:01.761032 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Dec 13 01:54:01.761081 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Dec 13 01:54:01.761132 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Dec 13 01:54:01.761181 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Dec 13 01:54:01.761230 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Dec 13 01:54:01.761279 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Dec 13 01:54:01.761356 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Dec 13 01:54:01.761408 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Dec 13 01:54:01.761457 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Dec 13 01:54:01.761505 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Dec 13 01:54:01.761562 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Dec 13 01:54:01.761615 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Dec 13 01:54:01.761663 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Dec 13 01:54:01.761712 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Dec 13 01:54:01.761762 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Dec 13 01:54:01.761811 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Dec 13 01:54:01.761863 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Dec 13 01:54:01.761913 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Dec 13 01:54:01.761966 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Dec 13 01:54:01.762014 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Dec 13 01:54:01.762064 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Dec 13 01:54:01.762112 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Dec 13 01:54:01.762160 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Dec 13 01:54:01.762208 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Dec 13 01:54:01.762257 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Dec 13 01:54:01.762306 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Dec 13 01:54:01.762375 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Dec 13 01:54:01.762423 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Dec 13 01:54:01.762473 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Dec 13 01:54:01.762521 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Dec 13 01:54:01.762569 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Dec 13 01:54:01.762618 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Dec 13 01:54:01.762667 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Dec 13 01:54:01.762715 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Dec 13 01:54:01.762767 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Dec 13 01:54:01.762816 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Dec 13 01:54:01.762900 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Dec 13 01:54:01.762950 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Dec 13 01:54:01.762999 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Dec 13 01:54:01.763047 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Dec 13 01:54:01.763096 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Dec 13 01:54:01.763145 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Dec 13 01:54:01.763196 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Dec 13 01:54:01.763245 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Dec 13 01:54:01.763294 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Dec 13 01:54:01.763360 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Dec 13 01:54:01.763369 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Dec 13 01:54:01.763375 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Dec 13 01:54:01.763381 kernel: ACPI: PCI: Interrupt link LNKB disabled
Dec 13 01:54:01.763387 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:54:01.763395 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Dec 13 01:54:01.763401 kernel: iommu: Default domain type: Translated
Dec 13 01:54:01.763407 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:54:01.763413 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:54:01.763419 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:54:01.763425 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Dec 13 01:54:01.763431 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Dec 13 01:54:01.763481 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Dec 13 01:54:01.763530 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Dec 13 01:54:01.763581 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:54:01.763590 kernel: vgaarb: loaded
Dec 13 01:54:01.763596 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Dec 13 01:54:01.763602 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Dec 13 01:54:01.763608 kernel: clocksource: Switched to clocksource tsc-early
Dec 13 01:54:01.763614 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:54:01.763620 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:54:01.763625 kernel: pnp: PnP ACPI init
Dec 13 01:54:01.763676 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Dec 13 01:54:01.763724 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Dec 13 01:54:01.763768 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Dec 13 01:54:01.763816 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Dec 13 01:54:01.763864 kernel: pnp 00:06: [dma 2]
Dec 13 01:54:01.763912 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Dec 13 01:54:01.763957 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Dec 13 01:54:01.764004 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Dec 13 01:54:01.764012 kernel: pnp: PnP ACPI: found 8 devices
Dec 13 01:54:01.764018 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:54:01.764024 kernel: NET: Registered PF_INET protocol family
Dec 13 01:54:01.764030 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:54:01.764036 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 01:54:01.764042 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:54:01.764048 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:54:01.764054 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:54:01.764061 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 01:54:01.764067 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:54:01.764073 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:54:01.764079 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:54:01.764085 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:54:01.764134 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Dec 13 01:54:01.764184 kernel: pci
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 13 01:54:01.764235 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 13 01:54:01.764285 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 13 01:54:01.764363 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 13 01:54:01.764414 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Dec 13 01:54:01.764463 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Dec 13 01:54:01.764533 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Dec 13 01:54:01.764587 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Dec 13 01:54:01.764636 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Dec 13 01:54:01.764684 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Dec 13 01:54:01.764733 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Dec 13 01:54:01.764781 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Dec 13 01:54:01.764834 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Dec 13 01:54:01.764887 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Dec 13 01:54:01.764935 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Dec 13 01:54:01.764984 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Dec 13 01:54:01.765032 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Dec 13 01:54:01.765080 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Dec 13 01:54:01.765129 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Dec 13 01:54:01.765180 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Dec 13 01:54:01.765228 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Dec 13 01:54:01.765278 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Dec 13 01:54:01.765326 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 01:54:01.765415 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 01:54:01.765464 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.765516 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.765565 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.765612 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.765660 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.765708 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.765756 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.765803 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.765877 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.765944 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.765993 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] Dec 13 01:54:01.766041 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766089 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766137 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766186 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766234 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766283 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766371 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766423 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766472 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766520 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766568 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766616 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766663 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766711 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766762 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766810 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766857 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766906 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766956 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767006 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767054 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767103 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767152 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767203 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767252 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767300 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767378 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767428 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767476 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767525 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767572 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767625 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767672 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767720 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767767 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767814 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767867 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767916 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767964 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768012 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] Dec 13 01:54:01.768062 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768110 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768158 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768205 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768254 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768302 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768374 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768425 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768474 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768525 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768573 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768622 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768670 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768719 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768768 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768820 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768909 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768958 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769006 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769058 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769107 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769155 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769205 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769254 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769302 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769386 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769436 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769485 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769536 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769583 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769631 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769679 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769727 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769777 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 01:54:01.769826 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Dec 13 01:54:01.769874 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Dec 13 01:54:01.769922 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Dec 13 01:54:01.769970 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 01:54:01.770025 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Dec 13 01:54:01.770075 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Dec 
13 01:54:01.770123 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Dec 13 01:54:01.770172 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Dec 13 01:54:01.770220 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 01:54:01.770269 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Dec 13 01:54:01.770317 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Dec 13 01:54:01.770428 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Dec 13 01:54:01.770481 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 01:54:01.770531 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Dec 13 01:54:01.770578 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Dec 13 01:54:01.770626 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Dec 13 01:54:01.770674 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 01:54:01.770722 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Dec 13 01:54:01.770770 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Dec 13 01:54:01.770818 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 01:54:01.770888 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Dec 13 01:54:01.770940 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Dec 13 01:54:01.772036 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 01:54:01.772098 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Dec 13 01:54:01.772152 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Dec 13 01:54:01.772204 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 01:54:01.772255 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Dec 13 01:54:01.772308 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Dec 13 01:54:01.773390 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 01:54:01.773448 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Dec 13 01:54:01.773500 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Dec 13 01:54:01.773551 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 01:54:01.773604 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Dec 13 01:54:01.773656 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Dec 13 01:54:01.773706 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Dec 13 01:54:01.773756 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Dec 13 01:54:01.773809 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 01:54:01.773859 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Dec 13 01:54:01.773909 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Dec 13 01:54:01.773958 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Dec 13 01:54:01.774008 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 01:54:01.774057 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Dec 13 01:54:01.774106 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Dec 13 01:54:01.774155 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Dec 13 01:54:01.774205 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 01:54:01.774254 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Dec 13 01:54:01.774306 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] Dec 13 01:54:01.774746 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 01:54:01.774823 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Dec 13 01:54:01.774913 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Dec 13 01:54:01.774963 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 01:54:01.775012 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Dec 13 01:54:01.775061 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Dec 13 01:54:01.775110 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 01:54:01.775159 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Dec 13 01:54:01.775212 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Dec 13 01:54:01.775260 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 01:54:01.775309 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Dec 13 01:54:01.776395 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Dec 13 01:54:01.776453 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 01:54:01.776525 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Dec 13 01:54:01.776577 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Dec 13 01:54:01.776626 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Dec 13 01:54:01.776675 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 01:54:01.776726 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Dec 13 01:54:01.776779 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Dec 13 01:54:01.776828 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Dec 13 01:54:01.776877 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 01:54:01.776928 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Dec 13 01:54:01.776977 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Dec 13 01:54:01.777026 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Dec 13 01:54:01.777077 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 01:54:01.777126 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Dec 13 01:54:01.777176 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Dec 13 01:54:01.777228 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 01:54:01.777277 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Dec 13 01:54:01.777333 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Dec 13 01:54:01.777385 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 01:54:01.777435 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Dec 13 01:54:01.777485 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Dec 13 01:54:01.777535 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 01:54:01.777584 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Dec 13 01:54:01.777634 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Dec 13 01:54:01.777684 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 01:54:01.777737 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Dec 13 01:54:01.777785 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Dec 13 01:54:01.777839 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Dec 13 01:54:01.777905 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Dec 13 01:54:01.777973 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Dec 13 01:54:01.778022 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Dec 13 01:54:01.778071 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 01:54:01.778120 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Dec 13 01:54:01.778169 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Dec 13 01:54:01.778221 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Dec 13 01:54:01.778270 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 01:54:01.778319 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Dec 13 01:54:01.778652 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Dec 13 01:54:01.778705 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 01:54:01.778757 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Dec 13 01:54:01.778808 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Dec 13 01:54:01.778864 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 13 01:54:01.778931 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Dec 13 01:54:01.778980 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Dec 13 01:54:01.779032 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Dec 13 01:54:01.779080 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Dec 13 01:54:01.779129 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Dec 13 01:54:01.779178 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 01:54:01.779227 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Dec 13 01:54:01.779275 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Dec 13 01:54:01.779325 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 01:54:01.779410 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Dec 13 01:54:01.779459 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Dec 13 01:54:01.779511 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 01:54:01.779561 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Dec 13 01:54:01.779643 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Dec 13 01:54:01.779697 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Dec 13 01:54:01.779742 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Dec 13 01:54:01.779786 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Dec 13 01:54:01.779839 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Dec 13 01:54:01.779885 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Dec 13 01:54:01.779972 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 01:54:01.780016 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Dec 13 01:54:01.780060 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Dec 13 01:54:01.780104 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Dec 13 01:54:01.780167 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Dec 13 01:54:01.780212 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Dec 13 01:54:01.780262 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] Dec 13 01:54:01.780311 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Dec 13 01:54:01.780370 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 01:54:01.780420 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Dec 13 01:54:01.780466 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Dec 13 01:54:01.780511 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 01:54:01.780560 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Dec 13 01:54:01.780605 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Dec 13 01:54:01.780653 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 01:54:01.780702 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Dec 13 01:54:01.780748 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 01:54:01.780796 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Dec 13 01:54:01.780842 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 01:54:01.780890 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Dec 13 01:54:01.780939 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 01:54:01.780988 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Dec 13 01:54:01.781035 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 01:54:01.781087 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Dec 13 01:54:01.781141 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 01:54:01.781196 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Dec 13 01:54:01.781244 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Dec 13 01:54:01.781289 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 01:54:01.781564 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Dec 13 01:54:01.781618 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Dec 13 01:54:01.781665 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 01:54:01.781716 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Dec 13 01:54:01.781766 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Dec 13 01:54:01.781814 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 01:54:01.781865 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Dec 13 01:54:01.781911 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 01:54:01.781960 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Dec 13 01:54:01.782005 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 01:54:01.782055 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Dec 13 01:54:01.782103 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 01:54:01.782151 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Dec 13 01:54:01.782196 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 01:54:01.782245 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Dec 13 01:54:01.782290 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 01:54:01.782554 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Dec 13 01:54:01.782609 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Dec 13 01:54:01.782655 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 01:54:01.782710 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Dec 13 01:54:01.782756 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Dec 13 01:54:01.782800 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 01:54:01.782849 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Dec 13 01:54:01.782894 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Dec 13 01:54:01.782945 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 01:54:01.782994 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Dec 13 01:54:01.783040 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 01:54:01.783090 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Dec 13 01:54:01.783141 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 01:54:01.783197 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Dec 13 01:54:01.783717 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 01:54:01.783803 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Dec 13 01:54:01.783856 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 01:54:01.783909 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Dec 13 01:54:01.783971 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 13 01:54:01.784039 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Dec 13 01:54:01.784086 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Dec 13 01:54:01.784131 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 01:54:01.784182 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Dec 13 01:54:01.784227 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Dec 13 01:54:01.784271 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 01:54:01.784319 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Dec 13 01:54:01.784374 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 01:54:01.784442 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Dec 13 01:54:01.784488 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 13 01:54:01.784537 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Dec 13 01:54:01.784583 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Dec 13 01:54:01.784632 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Dec 13 01:54:01.784694 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 01:54:01.784745 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Dec 13 01:54:01.784791 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 01:54:01.784857 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Dec 13 01:54:01.784918 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 01:54:01.784972 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 01:54:01.784981 kernel: PCI: CLS 32 bytes, default 64 Dec 13 01:54:01.784990 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:54:01.784996 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Dec 13 
01:54:01.785002 kernel: clocksource: Switched to clocksource tsc Dec 13 01:54:01.785008 kernel: Initialise system trusted keyrings Dec 13 01:54:01.785015 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 01:54:01.785021 kernel: Key type asymmetric registered Dec 13 01:54:01.785027 kernel: Asymmetric key parser 'x509' registered Dec 13 01:54:01.785033 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:54:01.785039 kernel: io scheduler mq-deadline registered Dec 13 01:54:01.785047 kernel: io scheduler kyber registered Dec 13 01:54:01.785053 kernel: io scheduler bfq registered Dec 13 01:54:01.785105 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Dec 13 01:54:01.785155 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.785206 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Dec 13 01:54:01.785256 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.785306 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Dec 13 01:54:01.785363 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.785675 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Dec 13 01:54:01.785747 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.785799 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Dec 13 01:54:01.785849 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.785899 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Dec 13 01:54:01.785954 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786007 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Dec 13 01:54:01.786058 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786107 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Dec 13 01:54:01.786156 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786207 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Dec 13 01:54:01.786259 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786308 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Dec 13 01:54:01.786370 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786422 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Dec 13 01:54:01.786471 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786520 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Dec 13 01:54:01.786569 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786622 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Dec 13 01:54:01.786671 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786720 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Dec 13 01:54:01.786769 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786824 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Dec 13 01:54:01.786877 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786926 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Dec 13 01:54:01.786975 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.787025 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Dec 13 01:54:01.787075 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.787125 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Dec 13 01:54:01.787176 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.787226 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Dec 13 01:54:01.787275 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.787324 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Dec 13 01:54:01.787380 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.787429 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Dec 13 01:54:01.787482 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.787531 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Dec 13 01:54:01.787596 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.787897 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Dec 13 01:54:01.787958 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788011 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Dec 13 01:54:01.788067 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788119 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Dec 13 01:54:01.788169 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788218 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Dec 13 01:54:01.788268 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788318 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Dec 13 01:54:01.788404 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788454 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Dec 13 01:54:01.788503 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788553 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Dec 13 01:54:01.788614 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788669 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Dec 13 01:54:01.788719 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788769 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Dec 13 01:54:01.788823 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788874 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Dec 13 01:54:01.788945 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788957 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:54:01.788963 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:54:01.788970 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:54:01.788977 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Dec 13 01:54:01.788983 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:54:01.788991 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:54:01.789040 kernel: rtc_cmos 00:01: registered as rtc0 Dec 13 01:54:01.789089 kernel: rtc_cmos 00:01: setting system clock to 2024-12-13T01:54:01 UTC (1734054841) Dec 13 01:54:01.789134 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Dec 13 01:54:01.789143 kernel: intel_pstate: CPU model not supported Dec 13 01:54:01.789149 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:54:01.789155 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:54:01.789161 kernel: Segment Routing with IPv6 Dec 13 01:54:01.789167 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:54:01.789173 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:54:01.789180 kernel: Key type dns_resolver registered Dec 13 01:54:01.789188 kernel: IPI shorthand broadcast: enabled Dec 13 01:54:01.789194 kernel: sched_clock: Marking stable (921003688, 224926980)->(1158550255, -12619587) Dec 13 01:54:01.789200 kernel: registered taskstats version 1 Dec 13 01:54:01.789207 kernel: Loading compiled-in X.509 certificates Dec 13 01:54:01.789213 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:54:01.789219 kernel: Key type .fscrypt registered Dec 13 01:54:01.789226 kernel: Key type fscrypt-provisioning registered Dec 13 01:54:01.789232 
kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 01:54:01.789239 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:54:01.789245 kernel: ima: No architecture policies found Dec 13 01:54:01.789251 kernel: clk: Disabling unused clocks Dec 13 01:54:01.789258 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:54:01.789264 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:54:01.789270 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:54:01.789276 kernel: Run /init as init process Dec 13 01:54:01.789282 kernel: with arguments: Dec 13 01:54:01.789289 kernel: /init Dec 13 01:54:01.789295 kernel: with environment: Dec 13 01:54:01.789302 kernel: HOME=/ Dec 13 01:54:01.789886 kernel: TERM=linux Dec 13 01:54:01.789895 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:54:01.789904 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:54:01.789912 systemd[1]: Detected virtualization vmware. Dec 13 01:54:01.789919 systemd[1]: Detected architecture x86-64. Dec 13 01:54:01.789925 systemd[1]: Running in initrd. Dec 13 01:54:01.789931 systemd[1]: No hostname configured, using default hostname. Dec 13 01:54:01.789940 systemd[1]: Hostname set to <localhost>. Dec 13 01:54:01.789946 systemd[1]: Initializing machine ID from random generator. Dec 13 01:54:01.789953 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:54:01.789959 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:54:01.789966 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:54:01.789973 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:54:01.789980 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:54:01.789986 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:54:01.789995 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:54:01.790002 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:54:01.790009 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:54:01.790015 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:54:01.790021 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:54:01.790028 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:54:01.790035 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:54:01.790042 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:54:01.790048 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:54:01.790054 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:54:01.790061 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:54:01.790067 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:54:01.790073 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:54:01.790080 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:54:01.790086 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:54:01.790094 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:54:01.790101 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:54:01.790107 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:54:01.790113 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:54:01.790120 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:54:01.790126 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:54:01.790133 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:54:01.790139 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:54:01.790145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:54:01.790153 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:54:01.790172 systemd-journald[215]: Collecting audit messages is disabled. Dec 13 01:54:01.790188 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:54:01.790195 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:54:01.790203 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:54:01.790210 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:54:01.790216 kernel: Bridge firewalling registered Dec 13 01:54:01.790223 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:54:01.790231 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:54:01.790237 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:01.790244 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:54:01.790251 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:54:01.790257 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:54:01.790264 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:54:01.790270 systemd-journald[215]: Journal started Dec 13 01:54:01.790286 systemd-journald[215]: Runtime Journal (/run/log/journal/0b3c11c25a7f45ec8c36c712cc2d7939) is 4.8M, max 38.6M, 33.8M free. Dec 13 01:54:01.736250 systemd-modules-load[216]: Inserted module 'overlay' Dec 13 01:54:01.757694 systemd-modules-load[216]: Inserted module 'br_netfilter' Dec 13 01:54:01.791500 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:54:01.791804 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:01.792004 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:54:01.796493 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:54:01.798314 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 13 01:54:01.802801 dracut-cmdline[245]: dracut-dracut-053 Dec 13 01:54:01.804109 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:54:01.805135 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:54:01.809438 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:54:01.825289 systemd-resolved[264]: Positive Trust Anchors: Dec 13 01:54:01.825300 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:54:01.825322 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:54:01.827452 systemd-resolved[264]: Defaulting to hostname 'linux'. Dec 13 01:54:01.828537 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:54:01.828670 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:54:01.848342 kernel: SCSI subsystem initialized Dec 13 01:54:01.854337 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:54:01.860344 kernel: iscsi: registered transport (tcp) Dec 13 01:54:01.873353 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:54:01.873370 kernel: QLogic iSCSI HBA Driver Dec 13 01:54:01.892470 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:54:01.897561 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:54:01.911859 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:54:01.911887 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:54:01.912951 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:54:01.943367 kernel: raid6: avx2x4 gen() 52690 MB/s Dec 13 01:54:01.960340 kernel: raid6: avx2x2 gen() 52602 MB/s Dec 13 01:54:01.977577 kernel: raid6: avx2x1 gen() 45146 MB/s Dec 13 01:54:01.977595 kernel: raid6: using algorithm avx2x4 gen() 52690 MB/s Dec 13 01:54:01.995591 kernel: raid6: .... xor() 21829 MB/s, rmw enabled Dec 13 01:54:01.995629 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:54:02.008341 kernel: xor: automatically using best checksumming function avx Dec 13 01:54:02.106349 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:54:02.111397 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:54:02.116430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:54:02.123711 systemd-udevd[432]: Using default interface naming scheme 'v255'. 
Dec 13 01:54:02.126177 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:54:02.135570 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:54:02.142050 dracut-pre-trigger[437]: rd.md=0: removing MD RAID activation Dec 13 01:54:02.156763 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:54:02.161495 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:54:02.230384 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:54:02.234423 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:54:02.240595 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:54:02.241082 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:54:02.241625 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:54:02.241943 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:54:02.242867 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:54:02.253171 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:54:02.296360 kernel: VMware PVSCSI driver - version 1.0.7.0-k Dec 13 01:54:02.303064 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Dec 13 01:54:02.309341 kernel: vmw_pvscsi: using 64bit dma Dec 13 01:54:02.312978 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Dec 13 01:54:02.323702 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Dec 13 01:54:02.323793 kernel: vmw_pvscsi: max_id: 16 Dec 13 01:54:02.323802 kernel: vmw_pvscsi: setting ring_pages to 8 Dec 13 01:54:02.323810 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:54:02.323817 kernel: vmw_pvscsi: enabling reqCallThreshold Dec 13 01:54:02.323824 kernel: vmw_pvscsi: driver-based request coalescing enabled Dec 13 01:54:02.323832 kernel: vmw_pvscsi: using MSI-X Dec 13 01:54:02.323839 kernel: libata version 3.00 loaded. Dec 13 01:54:02.325851 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Dec 13 01:54:02.325877 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Dec 13 01:54:02.328593 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:54:02.330531 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Dec 13 01:54:02.330619 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Dec 13 01:54:02.328685 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:02.332861 kernel: ata_piix 0000:00:07.1: version 2.13 Dec 13 01:54:02.341433 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:54:02.341445 kernel: AES CTR mode by8 optimization enabled Dec 13 01:54:02.341453 kernel: scsi host1: ata_piix Dec 13 01:54:02.341528 kernel: scsi host2: ata_piix Dec 13 01:54:02.341589 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Dec 13 01:54:02.341598 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Dec 13 01:54:02.331063 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:54:02.331166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:54:02.331238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 01:54:02.331381 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:54:02.336512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:54:02.352721 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:02.356554 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:54:02.364690 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:02.510344 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Dec 13 01:54:02.516340 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Dec 13 01:54:02.527432 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Dec 13 01:54:02.534151 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:54:02.534221 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Dec 13 01:54:02.534283 kernel: sd 0:0:0:0: [sda] Cache data unavailable Dec 13 01:54:02.534361 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Dec 13 01:54:02.534422 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Dec 13 01:54:02.542193 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:54:02.542203 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:54:02.542210 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:54:02.542285 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:54:02.597346 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (493) Dec 13 01:54:02.604356 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (485) Dec 13 01:54:02.604925 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Dec 13 01:54:02.607938 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Dec 13 01:54:02.611467 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Dec 13 01:54:02.613675 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Dec 13 01:54:02.613939 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Dec 13 01:54:02.618460 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:54:02.645355 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:54:02.653430 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:54:03.652380 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:54:03.652753 disk-uuid[589]: The operation has completed successfully. Dec 13 01:54:03.689606 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:54:03.689667 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:54:03.694433 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:54:03.696126 sh[605]: Success Dec 13 01:54:03.704345 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:54:03.758905 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:54:03.760415 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:54:03.760790 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 01:54:03.779920 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:54:03.779952 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:54:03.779963 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:54:03.781381 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:54:03.783066 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:54:03.789343 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:54:03.790314 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:54:03.799540 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Dec 13 01:54:03.800660 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:54:03.825364 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:54:03.825393 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:54:03.825405 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:54:03.841343 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:54:03.848289 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:54:03.849342 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:54:03.851853 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:54:03.856297 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:54:03.869535 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Dec 13 01:54:03.876508 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:54:03.929049 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:54:03.933466 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:54:03.945180 systemd-networkd[795]: lo: Link UP Dec 13 01:54:03.945188 systemd-networkd[795]: lo: Gained carrier Dec 13 01:54:03.945924 systemd-networkd[795]: Enumeration completed Dec 13 01:54:03.946102 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:54:03.946198 systemd-networkd[795]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Dec 13 01:54:03.946272 systemd[1]: Reached target network.target - Network. 
Dec 13 01:54:03.948388 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Dec 13 01:54:03.948514 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Dec 13 01:54:03.949519 systemd-networkd[795]: ens192: Link UP Dec 13 01:54:03.949526 systemd-networkd[795]: ens192: Gained carrier Dec 13 01:54:04.102623 ignition[666]: Ignition 2.19.0 Dec 13 01:54:04.102630 ignition[666]: Stage: fetch-offline Dec 13 01:54:04.102672 ignition[666]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:04.102682 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:54:04.102744 ignition[666]: parsed url from cmdline: "" Dec 13 01:54:04.102746 ignition[666]: no config URL provided Dec 13 01:54:04.102749 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:54:04.102753 ignition[666]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:54:04.103153 ignition[666]: config successfully fetched Dec 13 01:54:04.103169 ignition[666]: parsing config with SHA512: 48bef6767cada34c8ef0038ef90b37de95817c2f66967f5df83d0a75bbcc506eebea54509a3af115996732a380e42de9e888b96353df975b4aab7f6fdb14c00e Dec 13 01:54:04.105569 unknown[666]: fetched base config from "system" Dec 13 01:54:04.105574 unknown[666]: fetched user config from "vmware" Dec 13 01:54:04.105843 ignition[666]: fetch-offline: fetch-offline passed Dec 13 01:54:04.106500 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:54:04.105881 ignition[666]: Ignition finished successfully Dec 13 01:54:04.106882 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:54:04.111460 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:54:04.118805 ignition[803]: Ignition 2.19.0 Dec 13 01:54:04.118812 ignition[803]: Stage: kargs Dec 13 01:54:04.118923 ignition[803]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:04.118929 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:54:04.119445 ignition[803]: kargs: kargs passed Dec 13 01:54:04.119472 ignition[803]: Ignition finished successfully Dec 13 01:54:04.120741 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:54:04.127413 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:54:04.134359 ignition[810]: Ignition 2.19.0 Dec 13 01:54:04.134369 ignition[810]: Stage: disks Dec 13 01:54:04.134470 ignition[810]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:04.134476 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:54:04.134996 ignition[810]: disks: disks passed Dec 13 01:54:04.135023 ignition[810]: Ignition finished successfully Dec 13 01:54:04.135818 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:54:04.135988 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:54:04.136114 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:54:04.136304 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:54:04.136495 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:54:04.136666 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:54:04.141409 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 13 01:54:04.436872 systemd-fsck[818]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 01:54:04.450829 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:54:04.454405 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:54:04.592993 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:54:04.593388 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:54:04.593357 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:54:04.597428 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:54:04.598672 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:54:04.598942 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:54:04.598967 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:54:04.598981 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:54:04.602057 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:54:04.606426 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (826) Dec 13 01:54:04.609273 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:54:04.609291 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:54:04.609299 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:54:04.612919 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:54:04.611946 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:54:04.613873 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:54:04.636759 initrd-setup-root[850]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:54:04.639075 initrd-setup-root[857]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:54:04.641166 initrd-setup-root[864]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:54:04.643599 initrd-setup-root[871]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:54:04.695082 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:54:04.699391 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:54:04.700785 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:54:04.704455 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:54:04.715800 ignition[938]: INFO : Ignition 2.19.0 Dec 13 01:54:04.715800 ignition[938]: INFO : Stage: mount Dec 13 01:54:04.715800 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:04.715800 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:54:04.716316 ignition[938]: INFO : mount: mount passed Dec 13 01:54:04.716316 ignition[938]: INFO : Ignition finished successfully Dec 13 01:54:04.716660 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:54:04.720441 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:54:04.721774 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:54:04.777606 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Dec 13 01:54:04.782460 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:54:04.790343 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (950) Dec 13 01:54:04.793288 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:54:04.793308 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:54:04.793319 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:54:04.797344 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:54:04.798671 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:54:04.815449 ignition[967]: INFO : Ignition 2.19.0 Dec 13 01:54:04.815969 ignition[967]: INFO : Stage: files Dec 13 01:54:04.816255 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:04.817250 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:54:04.817250 ignition[967]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:54:04.818097 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:54:04.818097 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:54:04.820362 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:54:04.820672 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:54:04.821055 unknown[967]: wrote ssh authorized keys file for user: core Dec 13 01:54:04.821370 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:54:04.823244 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:54:04.823244 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:54:04.880943 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:54:04.994311 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" 
Dec 13 01:54:04.994311 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:54:04.994311 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:54:04.994311 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:54:04.994311 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:54:04.994311 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:54:04.994311 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 01:54:05.129649 systemd-networkd[795]: ens192: Gained IPv6LL Dec 13 01:54:05.474576 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:54:05.914163 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:54:05.914163 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Dec 13 01:54:05.914971 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:54:05.951514 ignition[967]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:54:05.953722 ignition[967]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:54:05.953722 ignition[967]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:54:05.953722 ignition[967]: INFO : files: 
op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:54:05.953722 ignition[967]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:54:05.954256 ignition[967]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:54:05.954256 ignition[967]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:54:05.954256 ignition[967]: INFO : files: files passed Dec 13 01:54:05.954256 ignition[967]: INFO : Ignition finished successfully Dec 13 01:54:05.955079 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:54:05.957436 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:54:05.959232 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:54:05.960344 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:54:05.960393 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:54:05.964764 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:54:05.964764 initrd-setup-root-after-ignition[998]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:54:05.965836 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:54:05.966593 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:54:05.967046 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:54:05.970438 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:54:05.984153 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:54:05.984213 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:54:05.984515 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:54:05.984635 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:54:05.984848 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:54:05.985287 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:54:05.994002 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:54:06.000512 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:54:06.006980 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:54:06.007297 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:54:06.007497 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:54:06.007648 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:54:06.007729 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:54:06.008050 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:54:06.008229 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:54:06.008464 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:54:06.008712 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Dec 13 01:54:06.008987 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:54:06.009226 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:54:06.009520 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:54:06.010005 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:54:06.010286 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:54:06.010552 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:54:06.010751 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:54:06.010820 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:54:06.011146 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:54:06.011315 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:54:06.011523 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:54:06.011568 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:54:06.011718 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:54:06.011781 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:54:06.012040 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:54:06.012106 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:54:06.012392 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:54:06.012561 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:54:06.016346 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:54:06.016511 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:54:06.016708 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:54:06.016891 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:54:06.016957 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:54:06.017165 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:54:06.017209 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:54:06.017414 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:54:06.017493 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:54:06.017706 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:54:06.017781 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:54:06.022420 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:54:06.022510 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:54:06.022570 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:54:06.024448 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:54:06.024641 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:54:06.024729 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:54:06.025002 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:54:06.025079 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:54:06.027615 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Dec 13 01:54:06.027672 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:54:06.032184 ignition[1023]: INFO : Ignition 2.19.0 Dec 13 01:54:06.032184 ignition[1023]: INFO : Stage: umount Dec 13 01:54:06.032184 ignition[1023]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:06.032184 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:54:06.033674 ignition[1023]: INFO : umount: umount passed Dec 13 01:54:06.034143 ignition[1023]: INFO : Ignition finished successfully Dec 13 01:54:06.034385 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:54:06.035088 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:54:06.035526 systemd[1]: Stopped target network.target - Network. Dec 13 01:54:06.035628 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:54:06.035658 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:54:06.035805 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:54:06.035827 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:54:06.035964 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:54:06.035983 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:54:06.036262 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:54:06.036283 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:54:06.037141 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:54:06.037646 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:54:06.038485 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:54:06.045289 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:54:06.045503 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:54:06.046506 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:54:06.046684 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:54:06.047257 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:54:06.047282 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:54:06.050403 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:54:06.050503 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:54:06.050529 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:54:06.050660 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Dec 13 01:54:06.050681 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Dec 13 01:54:06.050804 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:54:06.050825 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:54:06.050937 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:54:06.050958 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:54:06.051070 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:54:06.051091 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:54:06.051249 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 13 01:54:06.057239 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:54:06.057302 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:54:06.062770 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:54:06.062855 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:54:06.063139 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:54:06.063166 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:54:06.063378 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:54:06.063395 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:54:06.063550 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:54:06.063573 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:54:06.063835 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:54:06.063857 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:54:06.064071 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:54:06.064091 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:06.068487 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:54:06.068579 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:54:06.068606 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:54:06.068720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:54:06.068741 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:06.071472 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:54:06.071535 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:54:06.106298 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:54:06.106402 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:54:06.106891 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:54:06.107049 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:54:06.107088 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:54:06.111439 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:54:06.123414 systemd[1]: Switching root. 
Dec 13 01:54:06.158125 systemd-journald[215]: Journal stopped
6816.00 BogoMIPS (lpj=3408000) Dec 13 01:54:01.741025 kernel: Disabled fast string operations Dec 13 01:54:01.741031 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 01:54:01.741037 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 01:54:01.741043 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:54:01.741050 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 01:54:01.741056 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 01:54:01.741061 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Dec 13 01:54:01.741067 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:54:01.741073 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Dec 13 01:54:01.741080 kernel: RETBleed: Mitigation: Enhanced IBRS Dec 13 01:54:01.741085 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:54:01.741091 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:54:01.741097 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:54:01.741105 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 13 01:54:01.741111 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 01:54:01.741116 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:54:01.741122 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:54:01.741128 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:54:01.741134 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:54:01.741141 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Dec 13 01:54:01.741148 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:54:01.741153 kernel: pid_max: default: 131072 minimum: 1024 Dec 13 01:54:01.741161 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:54:01.741167 kernel: landlock: Up and running. Dec 13 01:54:01.741173 kernel: SELinux: Initializing. Dec 13 01:54:01.741179 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:54:01.741185 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:54:01.741191 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Dec 13 01:54:01.741197 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 13 01:54:01.741203 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 13 01:54:01.741210 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 13 01:54:01.741216 kernel: Performance Events: Skylake events, core PMU driver. 
Dec 13 01:54:01.741222 kernel: core: CPUID marked event: 'cpu cycles' unavailable Dec 13 01:54:01.741228 kernel: core: CPUID marked event: 'instructions' unavailable Dec 13 01:54:01.741234 kernel: core: CPUID marked event: 'bus cycles' unavailable Dec 13 01:54:01.741240 kernel: core: CPUID marked event: 'cache references' unavailable Dec 13 01:54:01.741246 kernel: core: CPUID marked event: 'cache misses' unavailable Dec 13 01:54:01.741251 kernel: core: CPUID marked event: 'branch instructions' unavailable Dec 13 01:54:01.741257 kernel: core: CPUID marked event: 'branch misses' unavailable Dec 13 01:54:01.741264 kernel: ... version: 1 Dec 13 01:54:01.741270 kernel: ... bit width: 48 Dec 13 01:54:01.741276 kernel: ... generic registers: 4 Dec 13 01:54:01.741282 kernel: ... value mask: 0000ffffffffffff Dec 13 01:54:01.741287 kernel: ... max period: 000000007fffffff Dec 13 01:54:01.741293 kernel: ... fixed-purpose events: 0 Dec 13 01:54:01.741299 kernel: ... event mask: 000000000000000f Dec 13 01:54:01.741305 kernel: signal: max sigframe size: 1776 Dec 13 01:54:01.741311 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:54:01.741318 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:54:01.741324 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:54:01.741367 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:54:01.741374 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:54:01.741380 kernel: .... node #0, CPUs: #1 Dec 13 01:54:01.741386 kernel: Disabled fast string operations Dec 13 01:54:01.741391 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Dec 13 01:54:01.741397 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Dec 13 01:54:01.741403 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:54:01.741409 kernel: smpboot: Max logical packages: 128 Dec 13 01:54:01.741417 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Dec 13 01:54:01.741423 kernel: devtmpfs: initialized Dec 13 01:54:01.741429 kernel: x86/mm: Memory block size: 128MB Dec 13 01:54:01.741435 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Dec 13 01:54:01.741441 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:54:01.741447 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Dec 13 01:54:01.741453 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:54:01.741459 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:54:01.741465 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:54:01.741472 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:54:01.741478 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:54:01.741484 kernel: audit: type=2000 audit(1734054840.081:1): state=initialized audit_enabled=0 res=1 Dec 13 01:54:01.741489 kernel: cpuidle: using governor menu Dec 13 01:54:01.741496 kernel: Simple Boot Flag at 0x36 set to 0x80 Dec 13 01:54:01.741502 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:54:01.741508 kernel: dca service started, version 1.12.1 Dec 13 01:54:01.741514 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Dec 13 01:54:01.741520 kernel: PCI: Using configuration type 1 for base access Dec 13 01:54:01.741527 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:54:01.741533 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:54:01.741540 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:54:01.741546 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:54:01.741552 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:54:01.741558 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:54:01.741563 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:54:01.741569 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:54:01.741575 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:54:01.741582 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:54:01.741588 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Dec 13 01:54:01.741594 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:54:01.741600 kernel: ACPI: Interpreter enabled Dec 13 01:54:01.741606 kernel: ACPI: PM: (supports S0 S1 S5) Dec 13 01:54:01.741612 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:54:01.741618 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:54:01.741624 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:54:01.741630 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Dec 13 01:54:01.741637 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Dec 13 01:54:01.741720 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:54:01.741776 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Dec 13 01:54:01.741829 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Dec 13 01:54:01.741838 kernel: PCI host bridge to bus 0000:00 Dec 13 01:54:01.741889 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:54:01.741937 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Dec 13 01:54:01.741981 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:54:01.742025 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:54:01.742069 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Dec 13 01:54:01.742112 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Dec 13 01:54:01.742173 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Dec 13 01:54:01.742232 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Dec 13 01:54:01.742290 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Dec 13 01:54:01.742362 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Dec 13 01:54:01.742426 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Dec 13 01:54:01.742476 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Dec 13 01:54:01.742526 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Dec 13 01:54:01.742576 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Dec 13 01:54:01.742628 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Dec 13 01:54:01.742682 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Dec 13 01:54:01.742732 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Dec 13 01:54:01.742781 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Dec 13 01:54:01.742835 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Dec 13 01:54:01.742900 
kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Dec 13 01:54:01.742949 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Dec 13 01:54:01.743004 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Dec 13 01:54:01.743054 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Dec 13 01:54:01.743102 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Dec 13 01:54:01.743150 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Dec 13 01:54:01.743198 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Dec 13 01:54:01.743246 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:54:01.743298 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Dec 13 01:54:01.745361 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.745422 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.745481 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.745532 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.745585 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.745635 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.745691 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.745740 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.745793 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.745842 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.745895 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.745964 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.746036 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.746085 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.746137 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.746186 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.746238 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.746288 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.747520 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.747579 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.747634 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.747685 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.747738 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.747792 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.747849 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.747900 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.747953 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.748003 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.748056 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.748107 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.748197 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.748251 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.748305 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 
0x060400 Dec 13 01:54:01.749551 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.749611 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.749662 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.749721 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.749771 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.749824 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.749873 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.749926 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.749976 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.750033 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.750083 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.750137 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.750186 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.750239 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.750289 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.750354 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.750410 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.750462 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.750512 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.750567 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.750616 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.750668 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.750720 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.750774 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.750842 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.750912 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.750961 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.751014 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.751065 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.751117 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Dec 13 01:54:01.751166 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.751216 kernel: pci_bus 0000:01: extended config space not accessible Dec 13 01:54:01.751268 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 01:54:01.751318 kernel: pci_bus 0000:02: extended config space not accessible Dec 13 01:54:01.753360 kernel: acpiphp: Slot [32] registered Dec 13 01:54:01.753394 kernel: acpiphp: Slot [33] registered Dec 13 01:54:01.753421 kernel: acpiphp: Slot [34] registered Dec 13 01:54:01.753446 kernel: acpiphp: Slot [35] registered Dec 13 01:54:01.753472 kernel: acpiphp: Slot [36] registered Dec 13 01:54:01.753496 kernel: acpiphp: Slot [37] registered Dec 13 01:54:01.753524 kernel: acpiphp: Slot [38] registered Dec 13 01:54:01.753551 kernel: acpiphp: Slot [39] registered Dec 13 01:54:01.753574 kernel: acpiphp: Slot [40] registered Dec 13 01:54:01.753599 kernel: acpiphp: Slot [41] registered Dec 13 01:54:01.753628 kernel: acpiphp: Slot [42] registered Dec 13 
01:54:01.753635 kernel: acpiphp: Slot [43] registered Dec 13 01:54:01.753641 kernel: acpiphp: Slot [44] registered Dec 13 01:54:01.753646 kernel: acpiphp: Slot [45] registered Dec 13 01:54:01.753652 kernel: acpiphp: Slot [46] registered Dec 13 01:54:01.753658 kernel: acpiphp: Slot [47] registered Dec 13 01:54:01.753664 kernel: acpiphp: Slot [48] registered Dec 13 01:54:01.753669 kernel: acpiphp: Slot [49] registered Dec 13 01:54:01.753675 kernel: acpiphp: Slot [50] registered Dec 13 01:54:01.753683 kernel: acpiphp: Slot [51] registered Dec 13 01:54:01.753689 kernel: acpiphp: Slot [52] registered Dec 13 01:54:01.753695 kernel: acpiphp: Slot [53] registered Dec 13 01:54:01.753701 kernel: acpiphp: Slot [54] registered Dec 13 01:54:01.753707 kernel: acpiphp: Slot [55] registered Dec 13 01:54:01.753713 kernel: acpiphp: Slot [56] registered Dec 13 01:54:01.753718 kernel: acpiphp: Slot [57] registered Dec 13 01:54:01.753724 kernel: acpiphp: Slot [58] registered Dec 13 01:54:01.753730 kernel: acpiphp: Slot [59] registered Dec 13 01:54:01.753737 kernel: acpiphp: Slot [60] registered Dec 13 01:54:01.753742 kernel: acpiphp: Slot [61] registered Dec 13 01:54:01.753748 kernel: acpiphp: Slot [62] registered Dec 13 01:54:01.753758 kernel: acpiphp: Slot [63] registered Dec 13 01:54:01.753840 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Dec 13 01:54:01.753909 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Dec 13 01:54:01.753958 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Dec 13 01:54:01.754006 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 01:54:01.754054 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Dec 13 01:54:01.754106 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Dec 13 01:54:01.754154 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Dec 13 01:54:01.754203 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Dec 13 01:54:01.754251 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Dec 13 01:54:01.754310 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Dec 13 01:54:01.754534 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Dec 13 01:54:01.754586 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Dec 13 01:54:01.754639 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Dec 13 01:54:01.754689 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Dec 13 01:54:01.754738 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Dec 13 01:54:01.754788 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Dec 13 01:54:01.754840 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Dec 13 01:54:01.754889 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Dec 13 01:54:01.754966 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Dec 13 01:54:01.757706 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Dec 13 01:54:01.757759 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Dec 13 01:54:01.757809 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 01:54:01.757860 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Dec 13 01:54:01.757909 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Dec 13 01:54:01.757957 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Dec 13 01:54:01.758005 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 01:54:01.758055 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Dec 13 01:54:01.758107 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Dec 13 01:54:01.758155 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 01:54:01.758204 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Dec 13 01:54:01.758253 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Dec 13 01:54:01.758301 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 01:54:01.758376 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Dec 13 01:54:01.758428 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Dec 13 01:54:01.758477 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 01:54:01.758526 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Dec 13 01:54:01.758575 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Dec 13 01:54:01.758623 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 01:54:01.758673 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Dec 13 01:54:01.758725 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Dec 13 01:54:01.758774 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 01:54:01.758855 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Dec 13 01:54:01.758925 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Dec 13 01:54:01.758975 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Dec 13 01:54:01.759025 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Dec 13 01:54:01.759075 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Dec 13 01:54:01.759124 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Dec 13 01:54:01.759178 kernel: pci 0000:0b:00.0: supports D1 D2 Dec 13 01:54:01.759228 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 01:54:01.759279 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Dec 13 01:54:01.759347 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Dec 13 01:54:01.759400 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Dec 13 01:54:01.759449 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Dec 13 01:54:01.759498 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Dec 13 01:54:01.759550 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Dec 13 01:54:01.759599 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Dec 13 01:54:01.759647 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 01:54:01.759696 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Dec 13 01:54:01.759745 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Dec 13 01:54:01.759794 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Dec 13 01:54:01.759843 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 01:54:01.759905 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Dec 13 01:54:01.759972 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Dec 13 01:54:01.760023 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 01:54:01.760072 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Dec 13 01:54:01.760121 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Dec 13 01:54:01.760170 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 01:54:01.760219 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Dec 13 01:54:01.760268 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Dec 13 01:54:01.760316 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 01:54:01.760399 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Dec 13 01:54:01.760448 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Dec 13 01:54:01.760496 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 01:54:01.760545 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Dec 13 01:54:01.760594 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Dec 13 01:54:01.760642 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 01:54:01.760691 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Dec 13 01:54:01.760738 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Dec 13 01:54:01.760790 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Dec 13 01:54:01.760837 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 01:54:01.760887 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Dec 13 01:54:01.760936 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Dec 13 01:54:01.760984 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Dec 13 01:54:01.761032 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 01:54:01.761081 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Dec 13 01:54:01.761132 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Dec 13 01:54:01.761181 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Dec 13 01:54:01.761230 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 01:54:01.761279 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Dec 13 01:54:01.761356 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Dec 13 01:54:01.761408 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 01:54:01.761457 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Dec 13 01:54:01.761505 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Dec 13 01:54:01.761562 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 01:54:01.761615 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Dec 13 01:54:01.761663 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Dec 13 01:54:01.761712 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 01:54:01.761762 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Dec 13 01:54:01.761811 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Dec 13 01:54:01.761863 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 01:54:01.761913 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Dec 13 01:54:01.761966 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Dec 13 01:54:01.762014 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 13 01:54:01.762064 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Dec 13 01:54:01.762112 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Dec 13 01:54:01.762160 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Dec 13 01:54:01.762208 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 01:54:01.762257 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Dec 13 01:54:01.762306 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Dec 13 01:54:01.762375 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Dec 13 01:54:01.762423 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 01:54:01.762473 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Dec 13 01:54:01.762521 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Dec 13 01:54:01.762569 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 01:54:01.762618 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Dec 13 01:54:01.762667 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Dec 13 01:54:01.762715 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 13 01:54:01.762767 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Dec 13 01:54:01.762816 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Dec 13 01:54:01.762900 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Dec 13 01:54:01.762950 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Dec 13 01:54:01.762999 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Dec 13 01:54:01.763047 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 01:54:01.763096 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Dec 13 01:54:01.763145 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Dec 13 01:54:01.763196 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 01:54:01.763245 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Dec 13 01:54:01.763294 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Dec 13 01:54:01.763360 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 01:54:01.763369 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Dec 13 01:54:01.763375 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Dec 13 01:54:01.763381 kernel: ACPI: PCI: Interrupt link LNKB disabled Dec 13 01:54:01.763387 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:54:01.763395 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Dec 13 01:54:01.763401 kernel: iommu: Default domain type: Translated Dec 13 01:54:01.763407 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:54:01.763413 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:54:01.763419 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:54:01.763425 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Dec 13 01:54:01.763431 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Dec 13 01:54:01.763481 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Dec 13 01:54:01.763530 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Dec 13 01:54:01.763581 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 01:54:01.763590 kernel: vgaarb: loaded Dec 13 01:54:01.763596 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Dec 13 01:54:01.763602 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Dec 13 01:54:01.763608 kernel: clocksource: Switched to clocksource tsc-early Dec 13 01:54:01.763614 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:54:01.763620 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:54:01.763625 kernel: pnp: PnP ACPI init Dec 13 01:54:01.763676 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Dec 13 01:54:01.763724 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Dec 13 01:54:01.763768 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Dec 13 01:54:01.763816 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Dec 13 01:54:01.763864 kernel: pnp 00:06: [dma 2] Dec 13 01:54:01.763912 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Dec 13 01:54:01.763957 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Dec 13 01:54:01.764004 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Dec 13 01:54:01.764012 kernel: pnp: PnP ACPI: found 8 devices Dec 13 01:54:01.764018 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:54:01.764024 kernel: NET: Registered PF_INET protocol family Dec 13 01:54:01.764030 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:54:01.764036 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 01:54:01.764042 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:54:01.764048 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:54:01.764054 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 01:54:01.764061 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 01:54:01.764067 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:54:01.764073 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:54:01.764079 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:54:01.764085 kernel: NET: Registered PF_XDP protocol family Dec 13 01:54:01.764134 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 01:54:01.764184 kernel: pci 
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 13 01:54:01.764235 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 13 01:54:01.764285 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 13 01:54:01.764363 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 13 01:54:01.764414 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Dec 13 01:54:01.764463 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Dec 13 01:54:01.764533 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Dec 13 01:54:01.764587 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Dec 13 01:54:01.764636 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Dec 13 01:54:01.764684 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Dec 13 01:54:01.764733 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Dec 13 01:54:01.764781 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Dec 13 01:54:01.764834 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Dec 13 01:54:01.764887 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Dec 13 01:54:01.764935 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Dec 13 01:54:01.764984 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Dec 13 01:54:01.765032 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Dec 13 01:54:01.765080 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Dec 13 01:54:01.765129 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Dec 13 01:54:01.765180 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Dec 13 01:54:01.765228 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Dec 13 01:54:01.765278 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Dec 13 01:54:01.765326 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 01:54:01.765415 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 01:54:01.765464 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.765516 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.765565 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.765612 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.765660 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.765708 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.765756 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.765803 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.765877 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.765944 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.765993 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] Dec 13 01:54:01.766041 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766089 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766137 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766186 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766234 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766283 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766371 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766423 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766472 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766520 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766568 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766616 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766663 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766711 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766762 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766810 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766857 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.766906 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.766956 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767006 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767054 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767103 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767152 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767203 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767252 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767300 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767378 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767428 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767476 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767525 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767572 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767625 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767672 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767720 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767767 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767814 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767867 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.767916 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.767964 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768012 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] Dec 13 01:54:01.768062 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768110 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768158 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768205 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768254 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768302 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768374 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768425 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768474 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768525 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768573 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768622 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768670 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768719 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768768 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768820 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.768909 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.768958 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769006 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769058 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769107 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769155 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769205 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769254 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769302 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769386 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769436 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769485 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769536 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769583 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769631 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769679 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Dec 13 01:54:01.769727 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Dec 13 01:54:01.769777 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 01:54:01.769826 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Dec 13 01:54:01.769874 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Dec 13 01:54:01.769922 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Dec 13 01:54:01.769970 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 01:54:01.770025 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Dec 13 01:54:01.770075 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Dec 
13 01:54:01.770123 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Dec 13 01:54:01.770172 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Dec 13 01:54:01.770220 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 01:54:01.770269 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Dec 13 01:54:01.770317 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Dec 13 01:54:01.770428 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Dec 13 01:54:01.770481 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 01:54:01.770531 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Dec 13 01:54:01.770578 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Dec 13 01:54:01.770626 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Dec 13 01:54:01.770674 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 01:54:01.770722 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Dec 13 01:54:01.770770 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Dec 13 01:54:01.770818 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 01:54:01.770888 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Dec 13 01:54:01.770940 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Dec 13 01:54:01.772036 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 01:54:01.772098 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Dec 13 01:54:01.772152 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Dec 13 01:54:01.772204 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 01:54:01.772255 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Dec 13 01:54:01.772308 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Dec 13 01:54:01.773390 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 01:54:01.773448 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Dec 13 01:54:01.773500 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Dec 13 01:54:01.773551 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 01:54:01.773604 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Dec 13 01:54:01.773656 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Dec 13 01:54:01.773706 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Dec 13 01:54:01.773756 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Dec 13 01:54:01.773809 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 01:54:01.773859 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Dec 13 01:54:01.773909 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Dec 13 01:54:01.773958 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Dec 13 01:54:01.774008 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 01:54:01.774057 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Dec 13 01:54:01.774106 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Dec 13 01:54:01.774155 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Dec 13 01:54:01.774205 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 01:54:01.774254 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Dec 13 01:54:01.774306 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] Dec 13 01:54:01.774746 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 01:54:01.774823 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Dec 13 01:54:01.774913 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Dec 13 01:54:01.774963 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 01:54:01.775012 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Dec 13 01:54:01.775061 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Dec 13 01:54:01.775110 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 01:54:01.775159 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Dec 13 01:54:01.775212 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Dec 13 01:54:01.775260 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 01:54:01.775309 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Dec 13 01:54:01.776395 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Dec 13 01:54:01.776453 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 01:54:01.776525 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Dec 13 01:54:01.776577 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Dec 13 01:54:01.776626 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Dec 13 01:54:01.776675 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 01:54:01.776726 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Dec 13 01:54:01.776779 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Dec 13 01:54:01.776828 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Dec 13 01:54:01.776877 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 01:54:01.776928 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Dec 13 01:54:01.776977 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Dec 13 01:54:01.777026 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Dec 13 01:54:01.777077 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 01:54:01.777126 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Dec 13 01:54:01.777176 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Dec 13 01:54:01.777228 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 01:54:01.777277 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Dec 13 01:54:01.777333 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Dec 13 01:54:01.777385 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 01:54:01.777435 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Dec 13 01:54:01.777485 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Dec 13 01:54:01.777535 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 01:54:01.777584 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Dec 13 01:54:01.777634 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Dec 13 01:54:01.777684 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 01:54:01.777737 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Dec 13 01:54:01.777785 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Dec 13 01:54:01.777839 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Dec 13 01:54:01.777905 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Dec 13 01:54:01.777973 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Dec 13 01:54:01.778022 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Dec 13 01:54:01.778071 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 01:54:01.778120 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Dec 13 01:54:01.778169 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Dec 13 01:54:01.778221 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Dec 13 01:54:01.778270 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 01:54:01.778319 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Dec 13 01:54:01.778652 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Dec 13 01:54:01.778705 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 01:54:01.778757 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Dec 13 01:54:01.778808 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Dec 13 01:54:01.778864 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 13 01:54:01.778931 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Dec 13 01:54:01.778980 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Dec 13 01:54:01.779032 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Dec 13 01:54:01.779080 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Dec 13 01:54:01.779129 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Dec 13 01:54:01.779178 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 01:54:01.779227 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Dec 13 01:54:01.779275 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Dec 13 01:54:01.779325 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 01:54:01.779410 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Dec 13 01:54:01.779459 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Dec 13 01:54:01.779511 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 01:54:01.779561 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Dec 13 01:54:01.779643 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Dec 13 01:54:01.779697 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Dec 13 01:54:01.779742 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Dec 13 01:54:01.779786 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Dec 13 01:54:01.779839 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Dec 13 01:54:01.779885 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Dec 13 01:54:01.779972 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 13 01:54:01.780016 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Dec 13 01:54:01.780060 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Dec 13 01:54:01.780104 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Dec 13 01:54:01.780167 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Dec 13 01:54:01.780212 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Dec 13 01:54:01.780262 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] Dec 13 01:54:01.780311 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Dec 13 01:54:01.780370 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Dec 13 01:54:01.780420 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Dec 13 01:54:01.780466 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Dec 13 01:54:01.780511 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Dec 13 01:54:01.780560 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Dec 13 01:54:01.780605 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Dec 13 01:54:01.780653 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Dec 13 01:54:01.780702 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Dec 13 01:54:01.780748 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Dec 13 01:54:01.780796 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Dec 13 01:54:01.780842 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 13 01:54:01.780890 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Dec 13 01:54:01.780939 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Dec 13 01:54:01.780988 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Dec 13 01:54:01.781035 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Dec 13 01:54:01.781087 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Dec 13 01:54:01.781141 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Dec 13 01:54:01.781196 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Dec 13 01:54:01.781244 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Dec 13 01:54:01.781289 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Dec 13 01:54:01.781564 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Dec 13 01:54:01.781618 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Dec 13 01:54:01.781665 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Dec 13 01:54:01.781716 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Dec 13 01:54:01.781766 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Dec 13 01:54:01.781814 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Dec 13 01:54:01.781865 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Dec 13 01:54:01.781911 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 13 01:54:01.781960 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Dec 13 01:54:01.782005 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 13 01:54:01.782055 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Dec 13 01:54:01.782103 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Dec 13 01:54:01.782151 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Dec 13 01:54:01.782196 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Dec 13 01:54:01.782245 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Dec 13 01:54:01.782290 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 13 01:54:01.782554 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Dec 13 01:54:01.782609 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Dec 13 01:54:01.782655 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 13 01:54:01.782710 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Dec 13 01:54:01.782756 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Dec 13 01:54:01.782800 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Dec 13 01:54:01.782849 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Dec 13 01:54:01.782894 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Dec 13 01:54:01.782945 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Dec 13 01:54:01.782994 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Dec 13 01:54:01.783040 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 13 01:54:01.783090 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Dec 13 01:54:01.783141 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 13 01:54:01.783197 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Dec 13 01:54:01.783717 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Dec 13 01:54:01.783803 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Dec 13 01:54:01.783856 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Dec 13 01:54:01.783909 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Dec 13 01:54:01.783971 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 13 01:54:01.784039 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Dec 13 01:54:01.784086 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Dec 13 01:54:01.784131 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Dec 13 01:54:01.784182 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Dec 13 01:54:01.784227 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Dec 13 01:54:01.784271 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Dec 13 01:54:01.784319 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Dec 13 01:54:01.784374 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Dec 13 01:54:01.784442 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Dec 13 01:54:01.784488 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 13 01:54:01.784537 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Dec 13 01:54:01.784583 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Dec 13 01:54:01.784632 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Dec 13 01:54:01.784694 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Dec 13 01:54:01.784745 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Dec 13 01:54:01.784791 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Dec 13 01:54:01.784857 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Dec 13 01:54:01.784918 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 13 01:54:01.784972 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 01:54:01.784981 kernel: PCI: CLS 32 bytes, default 64 Dec 13 01:54:01.784990 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:54:01.784996 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Dec 13 
01:54:01.785002 kernel: clocksource: Switched to clocksource tsc Dec 13 01:54:01.785008 kernel: Initialise system trusted keyrings Dec 13 01:54:01.785015 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 01:54:01.785021 kernel: Key type asymmetric registered Dec 13 01:54:01.785027 kernel: Asymmetric key parser 'x509' registered Dec 13 01:54:01.785033 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:54:01.785039 kernel: io scheduler mq-deadline registered Dec 13 01:54:01.785047 kernel: io scheduler kyber registered Dec 13 01:54:01.785053 kernel: io scheduler bfq registered Dec 13 01:54:01.785105 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Dec 13 01:54:01.785155 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.785206 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Dec 13 01:54:01.785256 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.785306 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Dec 13 01:54:01.785363 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.785675 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Dec 13 01:54:01.785747 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.785799 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Dec 13 01:54:01.785849 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.785899 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Dec 13 01:54:01.785954 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786007 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Dec 13 01:54:01.786058 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786107 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Dec 13 01:54:01.786156 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786207 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Dec 13 01:54:01.786259 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786308 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Dec 13 01:54:01.786370 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786422 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Dec 13 01:54:01.786471 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786520 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Dec 13 01:54:01.786569 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786622 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Dec 13 01:54:01.786671 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786720 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Dec 13 01:54:01.786769 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786824 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Dec 13 01:54:01.786877 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.786926 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Dec 13 01:54:01.786975 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.787025 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Dec 13 01:54:01.787075 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.787125 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Dec 13 01:54:01.787176 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.787226 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Dec 13 01:54:01.787275 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.787324 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Dec 13 01:54:01.787380 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.787429 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Dec 13 01:54:01.787482 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.787531 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Dec 13 01:54:01.787596 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.787897 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Dec 13 01:54:01.787958 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788011 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Dec 13 01:54:01.788067 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788119 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Dec 13 01:54:01.788169 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788218 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Dec 13 01:54:01.788268 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788318 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Dec 13 01:54:01.788404 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788454 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Dec 13 01:54:01.788503 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788553 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Dec 13 01:54:01.788614 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788669 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Dec 13 01:54:01.788719 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788769 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Dec 13 01:54:01.788823 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788874 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Dec 13 01:54:01.788945 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 13 01:54:01.788957 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:54:01.788963 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:54:01.788970 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:54:01.788977 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Dec 13 01:54:01.788983 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:54:01.788991 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:54:01.789040 kernel: rtc_cmos 00:01: registered as rtc0 Dec 13 01:54:01.789089 kernel: rtc_cmos 00:01: setting system clock to 2024-12-13T01:54:01 UTC (1734054841) Dec 13 01:54:01.789134 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Dec 13 01:54:01.789143 kernel: intel_pstate: CPU model not supported Dec 13 01:54:01.789149 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:54:01.789155 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:54:01.789161 kernel: Segment Routing with IPv6 Dec 13 01:54:01.789167 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:54:01.789173 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:54:01.789180 kernel: Key type dns_resolver registered Dec 13 01:54:01.789188 kernel: IPI shorthand broadcast: enabled Dec 13 01:54:01.789194 kernel: sched_clock: Marking stable (921003688, 224926980)->(1158550255, -12619587) Dec 13 01:54:01.789200 kernel: registered taskstats version 1 Dec 13 01:54:01.789207 kernel: Loading compiled-in X.509 certificates Dec 13 01:54:01.789213 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:54:01.789219 kernel: Key type .fscrypt registered Dec 13 01:54:01.789226 kernel: Key type fscrypt-provisioning registered Dec 13 01:54:01.789232 
kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 01:54:01.789239 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:54:01.789245 kernel: ima: No architecture policies found Dec 13 01:54:01.789251 kernel: clk: Disabling unused clocks Dec 13 01:54:01.789258 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:54:01.789264 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:54:01.789270 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:54:01.789276 kernel: Run /init as init process Dec 13 01:54:01.789282 kernel: with arguments: Dec 13 01:54:01.789289 kernel: /init Dec 13 01:54:01.789295 kernel: with environment: Dec 13 01:54:01.789302 kernel: HOME=/ Dec 13 01:54:01.789886 kernel: TERM=linux Dec 13 01:54:01.789895 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:54:01.789904 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:54:01.789912 systemd[1]: Detected virtualization vmware. Dec 13 01:54:01.789919 systemd[1]: Detected architecture x86-64. Dec 13 01:54:01.789925 systemd[1]: Running in initrd. Dec 13 01:54:01.789931 systemd[1]: No hostname configured, using default hostname. Dec 13 01:54:01.789940 systemd[1]: Hostname set to <localhost>. Dec 13 01:54:01.789946 systemd[1]: Initializing machine ID from random generator. Dec 13 01:54:01.789953 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:54:01.789959 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:54:01.789966 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:54:01.789973 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:54:01.789980 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:54:01.789986 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:54:01.789995 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:54:01.790002 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:54:01.790009 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:54:01.790015 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:54:01.790021 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:54:01.790028 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:54:01.790035 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:54:01.790042 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:54:01.790048 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:54:01.790054 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:54:01.790061 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:54:01.790067 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:54:01.790073 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:54:01.790080 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:54:01.790086 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:54:01.790094 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:54:01.790101 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:54:01.790107 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:54:01.790113 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:54:01.790120 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:54:01.790126 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:54:01.790133 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:54:01.790139 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:54:01.790145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:54:01.790153 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:54:01.790172 systemd-journald[215]: Collecting audit messages is disabled. Dec 13 01:54:01.790188 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:54:01.790195 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:54:01.790203 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:54:01.790210 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:54:01.790216 kernel: Bridge firewalling registered Dec 13 01:54:01.790223 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:54:01.790231 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:54:01.790237 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:01.790244 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:54:01.790251 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:54:01.790257 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:54:01.790264 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:54:01.790270 systemd-journald[215]: Journal started Dec 13 01:54:01.790286 systemd-journald[215]: Runtime Journal (/run/log/journal/0b3c11c25a7f45ec8c36c712cc2d7939) is 4.8M, max 38.6M, 33.8M free. Dec 13 01:54:01.736250 systemd-modules-load[216]: Inserted module 'overlay' Dec 13 01:54:01.757694 systemd-modules-load[216]: Inserted module 'br_netfilter' Dec 13 01:54:01.791500 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:54:01.791804 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:01.792004 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:54:01.796493 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:54:01.798314 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 13 01:54:01.802801 dracut-cmdline[245]: dracut-dracut-053 Dec 13 01:54:01.804109 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:54:01.805135 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:54:01.809438 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:54:01.825289 systemd-resolved[264]: Positive Trust Anchors: Dec 13 01:54:01.825300 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:54:01.825322 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:54:01.827452 systemd-resolved[264]: Defaulting to hostname 'linux'. Dec 13 01:54:01.828537 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:54:01.828670 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:54:01.848342 kernel: SCSI subsystem initialized Dec 13 01:54:01.854337 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:54:01.860344 kernel: iscsi: registered transport (tcp) Dec 13 01:54:01.873353 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:54:01.873370 kernel: QLogic iSCSI HBA Driver Dec 13 01:54:01.892470 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:54:01.897561 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:54:01.911859 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:54:01.911887 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:54:01.912951 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:54:01.943367 kernel: raid6: avx2x4 gen() 52690 MB/s Dec 13 01:54:01.960340 kernel: raid6: avx2x2 gen() 52602 MB/s Dec 13 01:54:01.977577 kernel: raid6: avx2x1 gen() 45146 MB/s Dec 13 01:54:01.977595 kernel: raid6: using algorithm avx2x4 gen() 52690 MB/s Dec 13 01:54:01.995591 kernel: raid6: .... xor() 21829 MB/s, rmw enabled Dec 13 01:54:01.995629 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:54:02.008341 kernel: xor: automatically using best checksumming function avx Dec 13 01:54:02.106349 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:54:02.111397 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:54:02.116430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:54:02.123711 systemd-udevd[432]: Using default interface naming scheme 'v255'. 
Dec 13 01:54:02.126177 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:54:02.135570 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:54:02.142050 dracut-pre-trigger[437]: rd.md=0: removing MD RAID activation Dec 13 01:54:02.156763 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:54:02.161495 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:54:02.230384 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:54:02.234423 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:54:02.240595 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:54:02.241082 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:54:02.241625 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:54:02.241943 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:54:02.242867 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:54:02.253171 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:54:02.296360 kernel: VMware PVSCSI driver - version 1.0.7.0-k Dec 13 01:54:02.303064 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Dec 13 01:54:02.309341 kernel: vmw_pvscsi: using 64bit dma Dec 13 01:54:02.312978 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Dec 13 01:54:02.323702 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Dec 13 01:54:02.323793 kernel: vmw_pvscsi: max_id: 16 Dec 13 01:54:02.323802 kernel: vmw_pvscsi: setting ring_pages to 8 Dec 13 01:54:02.323810 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:54:02.323817 kernel: vmw_pvscsi: enabling reqCallThreshold Dec 13 01:54:02.323824 kernel: vmw_pvscsi: driver-based request coalescing enabled Dec 13 01:54:02.323832 kernel: vmw_pvscsi: using MSI-X Dec 13 01:54:02.323839 kernel: libata version 3.00 loaded. Dec 13 01:54:02.325851 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Dec 13 01:54:02.325877 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Dec 13 01:54:02.328593 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:54:02.330531 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Dec 13 01:54:02.330619 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Dec 13 01:54:02.328685 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:02.332861 kernel: ata_piix 0000:00:07.1: version 2.13 Dec 13 01:54:02.341433 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:54:02.341445 kernel: AES CTR mode by8 optimization enabled Dec 13 01:54:02.341453 kernel: scsi host1: ata_piix Dec 13 01:54:02.341528 kernel: scsi host2: ata_piix Dec 13 01:54:02.341589 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Dec 13 01:54:02.341598 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Dec 13 01:54:02.331063 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:54:02.331166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:54:02.331238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 01:54:02.331381 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:54:02.336512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:54:02.352721 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:02.356554 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:54:02.364690 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:02.510344 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Dec 13 01:54:02.516340 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Dec 13 01:54:02.527432 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Dec 13 01:54:02.534151 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:54:02.534221 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Dec 13 01:54:02.534283 kernel: sd 0:0:0:0: [sda] Cache data unavailable Dec 13 01:54:02.534361 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Dec 13 01:54:02.534422 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Dec 13 01:54:02.542193 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:54:02.542203 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:54:02.542210 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:54:02.542285 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:54:02.597346 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (493) Dec 13 01:54:02.604356 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (485) Dec 13 01:54:02.604925 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Dec 13 01:54:02.607938 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Dec 13 01:54:02.611467 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Dec 13 01:54:02.613675 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Dec 13 01:54:02.613939 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Dec 13 01:54:02.618460 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:54:02.645355 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:54:02.653430 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:54:03.652380 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:54:03.652753 disk-uuid[589]: The operation has completed successfully. Dec 13 01:54:03.689606 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:54:03.689667 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:54:03.694433 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:54:03.696126 sh[605]: Success Dec 13 01:54:03.704345 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:54:03.758905 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:54:03.760415 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:54:03.760790 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 01:54:03.779920 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:54:03.779952 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:54:03.779963 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:54:03.781381 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:54:03.783066 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:54:03.789343 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:54:03.790314 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:54:03.799540 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Dec 13 01:54:03.800660 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:54:03.825364 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:54:03.825393 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:54:03.825405 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:54:03.841343 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:54:03.848289 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:54:03.849342 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:54:03.851853 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:54:03.856297 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:54:03.869535 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Dec 13 01:54:03.876508 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:54:03.929049 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:54:03.933466 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:54:03.945180 systemd-networkd[795]: lo: Link UP Dec 13 01:54:03.945188 systemd-networkd[795]: lo: Gained carrier Dec 13 01:54:03.945924 systemd-networkd[795]: Enumeration completed Dec 13 01:54:03.946102 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:54:03.946198 systemd-networkd[795]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Dec 13 01:54:03.946272 systemd[1]: Reached target network.target - Network. 
Dec 13 01:54:03.948388 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Dec 13 01:54:03.948514 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Dec 13 01:54:03.949519 systemd-networkd[795]: ens192: Link UP Dec 13 01:54:03.949526 systemd-networkd[795]: ens192: Gained carrier Dec 13 01:54:04.102623 ignition[666]: Ignition 2.19.0 Dec 13 01:54:04.102630 ignition[666]: Stage: fetch-offline Dec 13 01:54:04.102672 ignition[666]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:04.102682 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:54:04.102744 ignition[666]: parsed url from cmdline: "" Dec 13 01:54:04.102746 ignition[666]: no config URL provided Dec 13 01:54:04.102749 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:54:04.102753 ignition[666]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:54:04.103153 ignition[666]: config successfully fetched Dec 13 01:54:04.103169 ignition[666]: parsing config with SHA512: 48bef6767cada34c8ef0038ef90b37de95817c2f66967f5df83d0a75bbcc506eebea54509a3af115996732a380e42de9e888b96353df975b4aab7f6fdb14c00e Dec 13 01:54:04.105569 unknown[666]: fetched base config from "system" Dec 13 01:54:04.105574 unknown[666]: fetched user config from "vmware" Dec 13 01:54:04.105843 ignition[666]: fetch-offline: fetch-offline passed Dec 13 01:54:04.106500 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:54:04.105881 ignition[666]: Ignition finished successfully Dec 13 01:54:04.106882 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:54:04.111460 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:54:04.118805 ignition[803]: Ignition 2.19.0 Dec 13 01:54:04.118812 ignition[803]: Stage: kargs Dec 13 01:54:04.118923 ignition[803]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:04.118929 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:54:04.119445 ignition[803]: kargs: kargs passed Dec 13 01:54:04.119472 ignition[803]: Ignition finished successfully Dec 13 01:54:04.120741 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:54:04.127413 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:54:04.134359 ignition[810]: Ignition 2.19.0 Dec 13 01:54:04.134369 ignition[810]: Stage: disks Dec 13 01:54:04.134470 ignition[810]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:04.134476 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:54:04.134996 ignition[810]: disks: disks passed Dec 13 01:54:04.135023 ignition[810]: Ignition finished successfully Dec 13 01:54:04.135818 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:54:04.135988 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:54:04.136114 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:54:04.136304 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:54:04.136495 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:54:04.136666 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:54:04.141409 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 13 01:54:04.436872 systemd-fsck[818]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 01:54:04.450829 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:54:04.454405 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:54:04.592993 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:54:04.593388 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:54:04.593357 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:54:04.597428 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:54:04.598672 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:54:04.598942 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:54:04.598967 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:54:04.598981 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:54:04.602057 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:54:04.606426 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (826) Dec 13 01:54:04.609273 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:54:04.609291 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:54:04.609299 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:54:04.612919 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:54:04.611946 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:54:04.613873 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:54:04.636759 initrd-setup-root[850]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:54:04.639075 initrd-setup-root[857]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:54:04.641166 initrd-setup-root[864]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:54:04.643599 initrd-setup-root[871]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:54:04.695082 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:54:04.699391 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:54:04.700785 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:54:04.704455 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:54:04.715800 ignition[938]: INFO : Ignition 2.19.0 Dec 13 01:54:04.715800 ignition[938]: INFO : Stage: mount Dec 13 01:54:04.715800 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:04.715800 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:54:04.716316 ignition[938]: INFO : mount: mount passed Dec 13 01:54:04.716316 ignition[938]: INFO : Ignition finished successfully Dec 13 01:54:04.716660 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:54:04.720441 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:54:04.721774 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:54:04.777606 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Dec 13 01:54:04.782460 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:54:04.790343 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (950) Dec 13 01:54:04.793288 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:54:04.793308 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:54:04.793319 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:54:04.797344 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:54:04.798671 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:54:04.815449 ignition[967]: INFO : Ignition 2.19.0 Dec 13 01:54:04.815969 ignition[967]: INFO : Stage: files Dec 13 01:54:04.816255 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:04.817250 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 13 01:54:04.817250 ignition[967]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:54:04.818097 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:54:04.818097 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:54:04.820362 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:54:04.820672 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:54:04.821055 unknown[967]: wrote ssh authorized keys file for user: core Dec 13 01:54:04.821370 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:54:04.823244 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:54:04.823244 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:54:04.880943 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:54:04.992381 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:54:04.994311 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" 
Dec 13 01:54:04.994311 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:54:04.994311 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:54:04.994311 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:54:04.994311 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:54:04.994311 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:54:04.994311 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 01:54:05.129649 systemd-networkd[795]: ens192: Gained IPv6LL Dec 13 01:54:05.474576 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:54:05.914163 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:54:05.914163 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Dec 13 01:54:05.914971 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 13 01:54:05.914971 ignition[967]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:54:05.951514 ignition[967]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:54:05.953722 ignition[967]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:54:05.953722 ignition[967]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:54:05.953722 ignition[967]: INFO : files: 
op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:54:05.953722 ignition[967]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:54:05.954256 ignition[967]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:54:05.954256 ignition[967]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:54:05.954256 ignition[967]: INFO : files: files passed Dec 13 01:54:05.954256 ignition[967]: INFO : Ignition finished successfully Dec 13 01:54:05.955079 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:54:05.957436 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:54:05.959232 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:54:05.960344 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:54:05.960393 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:54:05.964764 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:54:05.964764 initrd-setup-root-after-ignition[998]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:54:05.965836 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:54:05.966593 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:54:05.967046 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:54:05.970438 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:54:05.984153 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:54:05.984213 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:54:05.984515 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:54:05.984635 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:54:05.984848 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:54:05.985287 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:54:05.994002 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:54:06.000512 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:54:06.006980 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:54:06.007297 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:54:06.007497 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:54:06.007648 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:54:06.007729 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:54:06.008050 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:54:06.008229 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:54:06.008464 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:54:06.008712 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Dec 13 01:54:06.008987 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:54:06.009226 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:54:06.009520 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:54:06.010005 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:54:06.010286 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:54:06.010552 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:54:06.010751 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:54:06.010820 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:54:06.011146 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:54:06.011315 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:54:06.011523 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:54:06.011568 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:54:06.011718 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:54:06.011781 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:54:06.012040 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:54:06.012106 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:54:06.012392 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:54:06.012561 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:54:06.016346 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:54:06.016511 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:54:06.016708 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:54:06.016891 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:54:06.016957 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:54:06.017165 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:54:06.017209 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:54:06.017414 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:54:06.017493 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:54:06.017706 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:54:06.017781 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:54:06.022420 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:54:06.022510 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:54:06.022570 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:54:06.024448 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:54:06.024641 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:54:06.024729 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:54:06.025002 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:54:06.025079 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:54:06.027615 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:54:06.027672 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:54:06.032184 ignition[1023]: INFO : Ignition 2.19.0
Dec 13 01:54:06.032184 ignition[1023]: INFO : Stage: umount
Dec 13 01:54:06.032184 ignition[1023]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:06.032184 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 01:54:06.033674 ignition[1023]: INFO : umount: umount passed
Dec 13 01:54:06.034143 ignition[1023]: INFO : Ignition finished successfully
Dec 13 01:54:06.034385 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:54:06.035088 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:54:06.035526 systemd[1]: Stopped target network.target - Network.
Dec 13 01:54:06.035628 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:54:06.035658 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:54:06.035805 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:54:06.035827 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:54:06.035964 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:54:06.035983 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:54:06.036262 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:54:06.036283 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:54:06.037141 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:54:06.037646 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:54:06.038485 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:54:06.045289 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:54:06.045503 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:54:06.046506 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:54:06.046684 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:54:06.047257 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:54:06.047282 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:54:06.050403 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:54:06.050503 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:54:06.050529 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:54:06.050660 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Dec 13 01:54:06.050681 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Dec 13 01:54:06.050804 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:54:06.050825 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:54:06.050937 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:54:06.050958 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:54:06.051070 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:54:06.051091 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:54:06.051249 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:54:06.057239 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:54:06.057302 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:54:06.062770 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:54:06.062855 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:54:06.063139 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:54:06.063166 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:54:06.063378 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:54:06.063395 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:54:06.063550 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:54:06.063573 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:54:06.063835 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:54:06.063857 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:54:06.064071 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:54:06.064091 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:54:06.068487 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:54:06.068579 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:54:06.068606 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:54:06.068720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:54:06.068741 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:06.071472 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:54:06.071535 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:54:06.106298 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:54:06.106402 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:54:06.106891 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:54:06.107049 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:54:06.107088 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:54:06.111439 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:54:06.123414 systemd[1]: Switching root.
Dec 13 01:54:06.158125 systemd-journald[215]: Journal stopped
Dec 13 01:54:07.120222 systemd-journald[215]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:54:07.120244 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:54:07.120252 kernel: SELinux: policy capability open_perms=1
Dec 13 01:54:07.120258 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:54:07.120263 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:54:07.120268 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:54:07.120276 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:54:07.120281 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:54:07.120287 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:54:07.120292 kernel: audit: type=1403 audit(1734054846.650:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:54:07.120299 systemd[1]: Successfully loaded SELinux policy in 36.142ms.
Dec 13 01:54:07.120306 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.823ms.
Dec 13 01:54:07.120313 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:54:07.120320 systemd[1]: Detected virtualization vmware.
Dec 13 01:54:07.120369 systemd[1]: Detected architecture x86-64.
Dec 13 01:54:07.120381 systemd[1]: Detected first boot.
Dec 13 01:54:07.120388 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:54:07.120397 zram_generator::config[1065]: No configuration found.
Dec 13 01:54:07.120404 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:54:07.120411 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Dec 13 01:54:07.120419 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Dec 13 01:54:07.120425 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:54:07.120432 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:54:07.120438 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:54:07.120447 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:54:07.120454 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:54:07.120461 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:54:07.120467 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:54:07.120474 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:54:07.120481 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:54:07.120487 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:54:07.120495 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:54:07.120502 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:54:07.120509 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
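The two "Ignoring unknown escape sequences" entries above are systemd complaining about backslash sequences (\K, \d) inside the shell command embedded in coreos-metadata.service; they are not valid unit-file escapes, so systemd passes them through and warns. For clarity, the address extraction that pipeline performs is roughly equivalent to the following sketch; the interface name ens192 and the 10.* private-address heuristic come from the logged command, the rest is an approximation:

    import re
    import subprocess

    # Rough equivalent of the pipeline quoted in the warning:
    #   ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+"
    out = subprocess.run(["ip", "addr", "show", "ens192"],
                         capture_output=True, text=True, check=True).stdout
    addrs = re.findall(r"inet ([\d.]+)", out)
    private = [a for a in addrs if a.startswith("10.")]
    public = [a for a in addrs if not a.startswith("10.")]
    print(f"COREOS_CUSTOM_PRIVATE_IPV4={private[0] if private else ''}")
    print(f"COREOS_CUSTOM_PUBLIC_IPV4={public[0] if public else ''}")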
Dec 13 01:54:07.120516 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:54:07.120522 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:54:07.120529 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:54:07.120535 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:54:07.120542 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:54:07.120550 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:54:07.120558 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:54:07.120566 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:54:07.120573 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:54:07.120581 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:54:07.120587 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:54:07.120594 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:54:07.120601 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:54:07.120617 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:54:07.120625 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:54:07.120632 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:54:07.120639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:54:07.120646 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:54:07.120654 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:54:07.120661 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:54:07.120668 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:54:07.120675 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:54:07.120682 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:54:07.120689 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:54:07.120696 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:54:07.120703 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:54:07.120711 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:54:07.120719 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:54:07.120726 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:54:07.120733 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:54:07.120740 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Dec 13 01:54:07.120747 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:54:07.120754 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:54:07.120762 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:54:07.120770 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:54:07.120777 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:54:07.120784 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:54:07.120791 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:54:07.120798 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:54:07.120805 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:54:07.120812 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:54:07.120819 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:54:07.120826 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:54:07.120834 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:54:07.120842 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:54:07.120849 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:54:07.120856 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:54:07.120863 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:54:07.120870 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:54:07.120877 systemd[1]: Stopped verity-setup.service.
Dec 13 01:54:07.120884 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:54:07.120893 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:54:07.120900 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:54:07.120907 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:54:07.120918 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:54:07.120926 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:54:07.120933 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:54:07.120940 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:54:07.120947 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:54:07.120955 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:54:07.120963 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:54:07.120971 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:54:07.120978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:54:07.120985 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:54:07.120992 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:54:07.120999 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:54:07.121006 kernel: fuse: init (API version 7.39)
Dec 13 01:54:07.121012 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:54:07.121031 systemd-journald[1148]: Collecting audit messages is disabled.
Dec 13 01:54:07.121048 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:54:07.121055 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:54:07.121062 kernel: loop: module loaded
Dec 13 01:54:07.121069 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:54:07.121076 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:54:07.121083 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:54:07.121091 systemd-journald[1148]: Journal started
Dec 13 01:54:07.121105 systemd-journald[1148]: Runtime Journal (/run/log/journal/ca69e760501c4c85972a7ff275ded8fc) is 4.8M, max 38.6M, 33.8M free.
Dec 13 01:54:06.947495 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:54:06.968663 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 01:54:06.968906 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:54:07.122089 jq[1132]: true
Dec 13 01:54:07.123017 jq[1153]: true
Dec 13 01:54:07.133341 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:54:07.133371 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:54:07.133386 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:54:07.144440 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:54:07.144478 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:54:07.148907 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:54:07.151343 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:54:07.155185 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:54:07.161272 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:54:07.161301 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:54:07.171460 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:54:07.171516 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:54:07.199631 kernel: ACPI: bus type drm_connector registered
Dec 13 01:54:07.201391 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:54:07.207489 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:54:07.207519 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:54:07.209555 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:54:07.209765 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:54:07.210159 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:54:07.210553 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:54:07.211036 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:54:07.222742 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:54:07.239403 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:54:07.239821 ignition[1154]: Ignition 2.19.0
Dec 13 01:54:07.240046 ignition[1154]: deleting config from guestinfo properties
Dec 13 01:54:07.252086 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:54:07.255562 ignition[1154]: Successfully deleted config
Dec 13 01:54:07.260062 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:54:07.260583 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:54:07.261645 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Dec 13 01:54:07.264365 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:54:07.267497 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:54:07.282116 systemd-journald[1148]: Time spent on flushing to /var/log/journal/ca69e760501c4c85972a7ff275ded8fc is 21.439ms for 1841 entries.
Dec 13 01:54:07.282116 systemd-journald[1148]: System Journal (/var/log/journal/ca69e760501c4c85972a7ff275ded8fc) is 8.0M, max 584.8M, 576.8M free.
Dec 13 01:54:07.351082 systemd-journald[1148]: Received client request to flush runtime journal.
Dec 13 01:54:07.351114 kernel: loop0: detected capacity change from 0 to 142488
Dec 13 01:54:07.314921 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:54:07.322418 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:54:07.332887 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:54:07.350377 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:54:07.353470 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:54:07.354762 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:54:07.366426 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:54:07.373284 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:54:07.372187 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:54:07.398351 kernel: loop1: detected capacity change from 0 to 210664
Dec 13 01:54:07.401012 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Dec 13 01:54:07.401023 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Dec 13 01:54:07.407433 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:54:07.432349 kernel: loop2: detected capacity change from 0 to 140768
Dec 13 01:54:07.486420 kernel: loop3: detected capacity change from 0 to 2976
Dec 13 01:54:07.521667 kernel: loop4: detected capacity change from 0 to 142488
Dec 13 01:54:07.542344 kernel: loop5: detected capacity change from 0 to 210664
Dec 13 01:54:07.570345 kernel: loop6: detected capacity change from 0 to 140768
Dec 13 01:54:07.599346 kernel: loop7: detected capacity change from 0 to 2976
Dec 13 01:54:07.617198 (sd-merge)[1233]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Dec 13 01:54:07.618087 (sd-merge)[1233]: Merged extensions into '/usr'.
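The (sd-merge) entries show systemd-sysext overlaying the four extension images, among them the kubernetes.raw symlink Ignition wrote earlier, onto /usr; the loop0 through loop7 capacity changes above are those images being attached. A minimal sketch of the discovery half of that step, assuming the standard search directories; it only lists candidate images and mounts nothing:

    from pathlib import Path

    # Directories systemd-sysext scans for *.raw extension images (sketch only;
    # the real search path includes further vendor directories as well).
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    candidates = []
    for d in SEARCH_DIRS:
        base = Path(d)
        if base.is_dir():
            # Images may be symlinks, as kubernetes.raw is here.
            candidates += sorted(base.glob("*.raw"))
    print("extensions to merge:", [p.stem for p in candidates])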
Dec 13 01:54:07.624292 systemd[1]: Reloading requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:54:07.624300 systemd[1]: Reloading...
Dec 13 01:54:07.684390 zram_generator::config[1255]: No configuration found.
Dec 13 01:54:07.754000 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Dec 13 01:54:07.770371 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:54:07.799412 systemd[1]: Reloading finished in 174 ms.
Dec 13 01:54:07.800912 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:54:07.823442 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:54:07.823778 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:54:07.824023 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:54:07.830600 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:54:07.831673 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:54:07.834443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:54:07.840904 systemd[1]: Reloading requested from client PID 1317 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:54:07.840913 systemd[1]: Reloading...
Dec 13 01:54:07.853097 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:54:07.853320 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:54:07.855554 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:54:07.855728 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Dec 13 01:54:07.855768 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Dec 13 01:54:07.856429 systemd-udevd[1319]: Using default interface naming scheme 'v255'.
Dec 13 01:54:07.860836 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:54:07.860841 systemd-tmpfiles[1318]: Skipping /boot
Dec 13 01:54:07.873433 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:54:07.873440 systemd-tmpfiles[1318]: Skipping /boot
Dec 13 01:54:07.896352 zram_generator::config[1346]: No configuration found.
Dec 13 01:54:07.969011 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1350)
Dec 13 01:54:07.969060 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1350)
Dec 13 01:54:07.975970 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 01:54:07.981398 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:54:07.994349 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1352)
Dec 13 01:54:08.015259 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Dec 13 01:54:08.035988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:54:08.070346 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Dec 13 01:54:08.072307 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:54:08.072377 systemd[1]: Reloading finished in 231 ms.
Dec 13 01:54:08.083743 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:54:08.088407 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Dec 13 01:54:08.102887 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 01:54:08.102903 kernel: Guest personality initialized and is active
Dec 13 01:54:08.088610 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:54:08.103359 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Dec 13 01:54:08.104351 kernel: Initialized host personality
Dec 13 01:54:08.107911 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Dec 13 01:54:08.108289 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:54:08.110845 (udev-worker)[1352]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Dec 13 01:54:08.115715 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:54:08.118457 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:54:08.120907 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:54:08.122506 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:54:08.124080 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:54:08.124246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:54:08.126469 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:54:08.129131 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:54:08.136606 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:54:08.141804 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:54:08.142343 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:54:08.142897 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:54:08.143043 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:54:08.144041 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:54:08.145363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:54:08.148621 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:54:08.148739 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:54:08.154686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:54:08.154801 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:54:08.156740 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:54:08.157692 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:54:08.162557 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:54:08.165409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:54:08.165543 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:54:08.167852 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:54:08.169646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:54:08.169751 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:54:08.171561 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:54:08.172030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:54:08.172120 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:54:08.173695 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:54:08.177217 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:54:08.185501 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:54:08.189442 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:54:08.190438 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:54:08.190600 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:54:08.190641 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:54:08.191355 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:54:08.192577 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:54:08.192813 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:54:08.193055 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:54:08.193131 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:54:08.193399 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:54:08.193469 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:54:08.203486 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:54:08.203620 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:54:08.205827 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 01:54:08.209573 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:54:08.209798 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:54:08.210092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:54:08.210205 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:54:08.210708 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:54:08.220944 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:54:08.221057 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:54:08.223696 augenrules[1487]: No rules
Dec 13 01:54:08.227498 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:54:08.227784 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:54:08.242387 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:54:08.252627 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:54:08.252842 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:54:08.260373 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:54:08.267592 lvm[1501]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:54:08.269350 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:54:08.269562 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:54:08.273973 systemd-networkd[1448]: lo: Link UP
Dec 13 01:54:08.273977 systemd-networkd[1448]: lo: Gained carrier
Dec 13 01:54:08.275714 systemd-networkd[1448]: Enumeration completed
Dec 13 01:54:08.275761 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:54:08.276897 systemd-networkd[1448]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Dec 13 01:54:08.281006 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Dec 13 01:54:08.281143 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Dec 13 01:54:08.280815 systemd-networkd[1448]: ens192: Link UP
Dec 13 01:54:08.280899 systemd-networkd[1448]: ens192: Gained carrier
Dec 13 01:54:08.282767 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:54:08.291195 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 01:54:08.291354 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:54:08.293569 systemd-resolved[1449]: Positive Trust Anchors:
Dec 13 01:54:08.293577 systemd-resolved[1449]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:54:08.293599 systemd-resolved[1449]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:54:08.295470 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:54:08.296418 systemd-resolved[1449]: Defaulting to hostname 'linux'.
Dec 13 01:54:08.297990 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:54:08.298116 systemd[1]: Reached target network.target - Network.
Dec 13 01:54:08.298194 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:54:08.309017 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:08.309229 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:54:08.309395 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:54:08.309523 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:54:08.309716 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:54:08.309856 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:54:08.309969 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:54:08.310074 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:54:08.310092 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:54:08.310175 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:54:08.311146 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:54:08.312081 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:54:08.316352 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:54:08.316735 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:54:08.316879 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:54:08.316971 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:54:08.317077 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:54:08.317094 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:54:08.317779 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:54:08.320494 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:54:08.321434 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:54:08.322969 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
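The positive trust anchor systemd-resolved logs above is the DNS root zone's built-in DNSSEC DS record. Its fields decode as key tag 20326, algorithm 8 (RSASHA256), and digest type 2 (SHA-256), as this small parsing sketch shows:

    # Decode the DS-record trust anchor quoted in the log above.
    record = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, cls, rtype, key_tag, algorithm, digest_type, digest = record.split()
    assert (cls, rtype) == ("IN", "DS")
    print(f"owner={owner} key_tag={key_tag} alg={algorithm} "
          f"digest_type={digest_type} sha256_digest={digest}")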
Dec 13 01:54:08.323067 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:54:08.326511 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:54:08.328749 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:54:08.330744 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:54:08.334444 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:54:08.337845 extend-filesystems[1515]: Found loop4
Dec 13 01:54:08.340749 extend-filesystems[1515]: Found loop5
Dec 13 01:54:08.340749 extend-filesystems[1515]: Found loop6
Dec 13 01:54:08.340749 extend-filesystems[1515]: Found loop7
Dec 13 01:54:08.340749 extend-filesystems[1515]: Found sda
Dec 13 01:54:08.340749 extend-filesystems[1515]: Found sda1
Dec 13 01:54:08.340749 extend-filesystems[1515]: Found sda2
Dec 13 01:54:08.340749 extend-filesystems[1515]: Found sda3
Dec 13 01:54:08.340749 extend-filesystems[1515]: Found usr
Dec 13 01:54:08.340749 extend-filesystems[1515]: Found sda4
Dec 13 01:54:08.340749 extend-filesystems[1515]: Found sda6
Dec 13 01:54:08.340749 extend-filesystems[1515]: Found sda7
Dec 13 01:54:08.340749 extend-filesystems[1515]: Found sda9
Dec 13 01:54:08.340749 extend-filesystems[1515]: Checking size of /dev/sda9
Dec 13 01:54:08.340435 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:54:08.351235 jq[1514]: false
Dec 13 01:54:08.351410 extend-filesystems[1515]: Old size kept for /dev/sda9
Dec 13 01:54:08.351410 extend-filesystems[1515]: Found sr0
Dec 13 01:54:08.341014 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:54:08.341429 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:54:08.342409 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:54:08.355427 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:54:08.356993 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Dec 13 01:54:08.361234 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:54:08.361748 jq[1529]: true
Dec 13 01:54:08.361355 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:54:08.361508 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:54:08.361595 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:54:08.367639 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:54:08.367738 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:54:08.385294 (ntainerd)[1546]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:54:08.386090 update_engine[1524]: I20241213 01:54:08.386045 1524 main.cc:92] Flatcar Update Engine starting
Dec 13 01:54:08.390477 tar[1535]: linux-amd64/helm
Dec 13 01:54:08.392246 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:54:08.392128 dbus-daemon[1513]: [system] SELinux support is enabled
Dec 13 01:54:08.393695 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:54:08.393714 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:54:08.393839 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:54:08.393850 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:55:18.067120 jq[1537]: true
Dec 13 01:55:18.066923 systemd-timesyncd[1483]: Contacted time server 23.186.168.1:123 (0.flatcar.pool.ntp.org).
Dec 13 01:55:18.066949 systemd-timesyncd[1483]: Initial clock synchronization to Fri 2024-12-13 01:55:18.065436 UTC.
Dec 13 01:55:18.067010 systemd-resolved[1449]: Clock change detected. Flushing caches.
Dec 13 01:55:18.075173 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Dec 13 01:55:18.079310 update_engine[1524]: I20241213 01:55:18.078372 1524 update_check_scheduler.cc:74] Next update check in 9m41s
Dec 13 01:55:18.077657 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Dec 13 01:55:18.078223 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:55:18.085808 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:55:18.086789 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:55:18.086898 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:55:18.106453 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1361)
Dec 13 01:55:18.122559 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Dec 13 01:55:18.123538 systemd-logind[1521]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 01:55:18.125095 systemd-logind[1521]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 01:55:18.125821 systemd-logind[1521]: New seat seat0.
Dec 13 01:55:18.128237 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:55:18.153884 unknown[1554]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Dec 13 01:55:18.159083 unknown[1554]: Core dump limit set to -1
Dec 13 01:55:18.191472 locksmithd[1557]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:55:18.202061 kernel: NET: Registered PF_VSOCK protocol family
Dec 13 01:55:18.203958 bash[1574]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:55:18.206096 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:55:18.207017 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 01:55:18.394429 containerd[1546]: time="2024-12-13T01:55:18.393653891Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:55:18.406195 sshd_keygen[1533]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:55:18.426744 containerd[1546]: time="2024-12-13T01:55:18.426716539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:18.428080 containerd[1546]: time="2024-12-13T01:55:18.428061999Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:55:18.428124 containerd[1546]: time="2024-12-13T01:55:18.428116978Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:55:18.428156 containerd[1546]: time="2024-12-13T01:55:18.428149704Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:55:18.428277 containerd[1546]: time="2024-12-13T01:55:18.428268747Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:55:18.428421 containerd[1546]: time="2024-12-13T01:55:18.428309784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:18.428421 containerd[1546]: time="2024-12-13T01:55:18.428347760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:55:18.428421 containerd[1546]: time="2024-12-13T01:55:18.428356079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:18.428536 containerd[1546]: time="2024-12-13T01:55:18.428526158Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:55:18.428570 containerd[1546]: time="2024-12-13T01:55:18.428563427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:18.428600 containerd[1546]: time="2024-12-13T01:55:18.428593483Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:55:18.428630 containerd[1546]: time="2024-12-13T01:55:18.428624288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:18.428700 containerd[1546]: time="2024-12-13T01:55:18.428692270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:18.428841 containerd[1546]: time="2024-12-13T01:55:18.428833087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:18.428979 containerd[1546]: time="2024-12-13T01:55:18.428969009Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:55:18.429010 containerd[1546]: time="2024-12-13T01:55:18.429004187Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:55:18.429168 containerd[1546]: time="2024-12-13T01:55:18.429069175Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:55:18.429168 containerd[1546]: time="2024-12-13T01:55:18.429097509Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:55:18.433242 containerd[1546]: time="2024-12-13T01:55:18.432558206Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:55:18.433242 containerd[1546]: time="2024-12-13T01:55:18.432587541Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:55:18.433242 containerd[1546]: time="2024-12-13T01:55:18.432601396Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:55:18.433242 containerd[1546]: time="2024-12-13T01:55:18.432611074Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:55:18.433242 containerd[1546]: time="2024-12-13T01:55:18.432619204Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:55:18.433242 containerd[1546]: time="2024-12-13T01:55:18.432688621Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:55:18.433242 containerd[1546]: time="2024-12-13T01:55:18.432845649Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:55:18.433242 containerd[1546]: time="2024-12-13T01:55:18.432902974Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:55:18.433242 containerd[1546]: time="2024-12-13T01:55:18.432912779Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:55:18.433242 containerd[1546]: time="2024-12-13T01:55:18.432919956Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:55:18.433242 containerd[1546]: time="2024-12-13T01:55:18.432927707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:55:18.433242 containerd[1546]: time="2024-12-13T01:55:18.432935541Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:55:18.433242 containerd[1546]: time="2024-12-13T01:55:18.432942185Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:55:18.433242 containerd[1546]: time="2024-12-13T01:55:18.432949678Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:55:18.433477 containerd[1546]: time="2024-12-13T01:55:18.432958295Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:55:18.433477 containerd[1546]: time="2024-12-13T01:55:18.432965305Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:55:18.433477 containerd[1546]: time="2024-12-13T01:55:18.432972954Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:55:18.433477 containerd[1546]: time="2024-12-13T01:55:18.432980452Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:55:18.433477 containerd[1546]: time="2024-12-13T01:55:18.432994915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433477 containerd[1546]: time="2024-12-13T01:55:18.433003336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433477 containerd[1546]: time="2024-12-13T01:55:18.433009933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433477 containerd[1546]: time="2024-12-13T01:55:18.433016995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433477 containerd[1546]: time="2024-12-13T01:55:18.433024750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433477 containerd[1546]: time="2024-12-13T01:55:18.433031632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433477 containerd[1546]: time="2024-12-13T01:55:18.433039154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433477 containerd[1546]: time="2024-12-13T01:55:18.433046667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433477 containerd[1546]: time="2024-12-13T01:55:18.433053508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433477 containerd[1546]: time="2024-12-13T01:55:18.433062061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433652 containerd[1546]: time="2024-12-13T01:55:18.433068637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433652 containerd[1546]: time="2024-12-13T01:55:18.433076547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433652 containerd[1546]: time="2024-12-13T01:55:18.433083092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433652 containerd[1546]: time="2024-12-13T01:55:18.433094109Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:55:18.433652 containerd[1546]: time="2024-12-13T01:55:18.433106534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433652 containerd[1546]: time="2024-12-13T01:55:18.433113387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433652 containerd[1546]: time="2024-12-13T01:55:18.433118926Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:55:18.433652 containerd[1546]: time="2024-12-13T01:55:18.433145142Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:55:18.433652 containerd[1546]: time="2024-12-13T01:55:18.433157758Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:55:18.433652 containerd[1546]: time="2024-12-13T01:55:18.433164433Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:55:18.433652 containerd[1546]: time="2024-12-13T01:55:18.433170894Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:55:18.433652 containerd[1546]: time="2024-12-13T01:55:18.433175976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433652 containerd[1546]: time="2024-12-13T01:55:18.433211308Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:55:18.433652 containerd[1546]: time="2024-12-13T01:55:18.433251783Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:55:18.433837 containerd[1546]: time="2024-12-13T01:55:18.433262571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:55:18.433852 containerd[1546]: time="2024-12-13T01:55:18.433466986Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:55:18.433852 containerd[1546]: time="2024-12-13T01:55:18.433506698Z" level=info msg="Connect containerd service" Dec 13 01:55:18.433852 containerd[1546]: time="2024-12-13T01:55:18.433527429Z" level=info msg="using legacy CRI server" Dec 13 01:55:18.433852 containerd[1546]: time="2024-12-13T01:55:18.433532385Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:55:18.433852 containerd[1546]: time="2024-12-13T01:55:18.433590630Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:55:18.433979 containerd[1546]: time="2024-12-13T01:55:18.433956796Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:55:18.434110 containerd[1546]: time="2024-12-13T01:55:18.434082382Z" level=info msg="Start subscribing containerd event" Dec 13 01:55:18.434159 containerd[1546]: time="2024-12-13T01:55:18.434151822Z" level=info msg="Start recovering state" Dec 13 01:55:18.434215 containerd[1546]: time="2024-12-13T01:55:18.434208508Z" level=info msg="Start event monitor" Dec 13 01:55:18.434248 containerd[1546]: time="2024-12-13T01:55:18.434242878Z" level=info msg="Start snapshots syncer" Dec 13 01:55:18.434275 containerd[1546]: time="2024-12-13T01:55:18.434269979Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:55:18.434305 containerd[1546]: time="2024-12-13T01:55:18.434299881Z" level=info msg="Start streaming server" Dec 13 01:55:18.434379 containerd[1546]: time="2024-12-13T01:55:18.434112366Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:55:18.434446 containerd[1546]: time="2024-12-13T01:55:18.434438487Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:55:18.434553 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:55:18.435891 containerd[1546]: time="2024-12-13T01:55:18.434997317Z" level=info msg="containerd successfully booted in 0.043721s" Dec 13 01:55:18.442459 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:55:18.448558 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:55:18.455624 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:55:18.455864 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:55:18.462994 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:55:18.468207 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:55:18.470561 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:55:18.472542 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:55:18.472736 systemd[1]: Reached target getty.target - Login Prompts. 
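[Annotation] The containerd record above ends with an expected error: the CRI plugin starts, but "no network config found in /etc/cni/net.d" means pod networking stays uninitialized until a CNI plugin drops a config file there. Below is a minimal sketch of that probe; the directory comes from the logged CniConfig (NetworkPluginConfDir:/etc/cni/net.d), while the accepted file extensions are an assumption about the default CNI config loader, not something this log confirms.

```python
from pathlib import Path

# Sketch: mirror containerd's CNI-config probe. The directory is taken
# from the logged CniConfig; the extension filter is an assumption.
conf_dir = Path("/etc/cni/net.d")
configs = []
if conf_dir.is_dir():
    configs = sorted(p for p in conf_dir.iterdir()
                     if p.suffix in {".conf", ".conflist", ".json"})
if not configs:
    print(f"no network config found in {conf_dir}")  # matches the logged error
else:
    for p in configs:
        print("found CNI config:", p.name)
```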
Dec 13 01:55:18.557634 tar[1535]: linux-amd64/LICENSE Dec 13 01:55:18.557757 tar[1535]: linux-amd64/README.md Dec 13 01:55:18.565537 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:55:19.343561 systemd-networkd[1448]: ens192: Gained IPv6LL Dec 13 01:55:19.344794 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:55:19.345748 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:55:19.350615 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Dec 13 01:55:19.352118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:19.353647 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:55:19.382307 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:55:19.383225 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:55:19.383348 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Dec 13 01:55:19.383928 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:55:20.078363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:20.078828 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:55:20.078992 systemd[1]: Startup finished in 1.001s (kernel) + 5.022s (initrd) + 3.793s (userspace) = 9.817s. Dec 13 01:55:20.083137 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:55:20.103675 login[1656]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 01:55:20.106182 login[1657]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 01:55:20.109880 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:55:20.115537 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:55:20.118307 systemd-logind[1521]: New session 2 of user core. Dec 13 01:55:20.121946 systemd-logind[1521]: New session 1 of user core. Dec 13 01:55:20.125430 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:55:20.132746 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:55:20.134573 (systemd)[1698]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:20.196930 systemd[1698]: Queued start job for default target default.target. Dec 13 01:55:20.207332 systemd[1698]: Created slice app.slice - User Application Slice. Dec 13 01:55:20.207456 systemd[1698]: Reached target paths.target - Paths. Dec 13 01:55:20.207515 systemd[1698]: Reached target timers.target - Timers. Dec 13 01:55:20.208408 systemd[1698]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:55:20.215428 systemd[1698]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:55:20.215785 systemd[1698]: Reached target sockets.target - Sockets. Dec 13 01:55:20.215798 systemd[1698]: Reached target basic.target - Basic System. Dec 13 01:55:20.215820 systemd[1698]: Reached target default.target - Main User Target. Dec 13 01:55:20.215836 systemd[1698]: Startup finished in 78ms. Dec 13 01:55:20.216324 systemd[1]: Started user@500.service - User Manager for UID 500. 
Dec 13 01:55:20.217229 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:55:20.217814 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:55:20.986160 kubelet[1691]: E1213 01:55:20.986111 1691 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:55:20.988058 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:55:20.988173 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:55:31.238366 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:55:31.246534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:31.302003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:31.304758 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:55:31.330137 kubelet[1743]: E1213 01:55:31.330105 1743 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:55:31.332267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:55:31.332417 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:55:41.582701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:55:41.593575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:41.688379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:41.691116 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:55:41.737239 kubelet[1759]: E1213 01:55:41.737203 1759 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:55:41.738541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:55:41.738621 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:55:51.880089 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:55:51.887504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:52.226524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
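[Annotation] Every kubelet attempt above fails identically: /var/lib/kubelet/config.yaml does not exist, the process exits with status 1, and systemd schedules another restart. That file is normally written by `kubeadm init`/`kubeadm join`, which evidently has not run yet at this point in the boot. A minimal sketch of the same pre-flight condition, with the path taken verbatim from the logged error:

```python
from pathlib import Path

# Sketch: the check kubelet keeps failing on. Path is verbatim from the
# log; kubeadm normally writes this file during init/join.
config = Path("/var/lib/kubelet/config.yaml")
if not config.is_file():
    raise SystemExit(f"failed to load Kubelet config file {config}: "
                     "no such file or directory")
print("kubelet config present:", config)
```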
Dec 13 01:55:52.229016 (kubelet)[1776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:55:52.270544 kubelet[1776]: E1213 01:55:52.270511 1776 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:55:52.271570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:55:52.271647 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:55:58.278852 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:55:58.286724 systemd[1]: Started sshd@0-139.178.70.106:22-139.178.89.65:37888.service - OpenSSH per-connection server daemon (139.178.89.65:37888). Dec 13 01:55:58.318667 sshd[1786]: Accepted publickey for core from 139.178.89.65 port 37888 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:55:58.319604 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:58.322307 systemd-logind[1521]: New session 3 of user core. Dec 13 01:55:58.328510 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:55:58.380897 systemd[1]: Started sshd@1-139.178.70.106:22-139.178.89.65:37904.service - OpenSSH per-connection server daemon (139.178.89.65:37904). Dec 13 01:55:58.412230 sshd[1791]: Accepted publickey for core from 139.178.89.65 port 37904 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:55:58.413532 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:58.417151 systemd-logind[1521]: New session 4 of user core. Dec 13 01:55:58.423623 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:55:58.473577 sshd[1791]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:58.481144 systemd[1]: sshd@1-139.178.70.106:22-139.178.89.65:37904.service: Deactivated successfully. Dec 13 01:55:58.482324 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:55:58.482841 systemd-logind[1521]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:55:58.487648 systemd[1]: Started sshd@2-139.178.70.106:22-139.178.89.65:37914.service - OpenSSH per-connection server daemon (139.178.89.65:37914). Dec 13 01:55:58.488601 systemd-logind[1521]: Removed session 4. Dec 13 01:55:58.513092 sshd[1798]: Accepted publickey for core from 139.178.89.65 port 37914 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:55:58.514059 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:58.516307 systemd-logind[1521]: New session 5 of user core. Dec 13 01:55:58.525501 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:55:58.572140 sshd[1798]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:58.581031 systemd[1]: sshd@2-139.178.70.106:22-139.178.89.65:37914.service: Deactivated successfully. Dec 13 01:55:58.582582 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:55:58.584338 systemd-logind[1521]: Session 5 logged out. Waiting for processes to exit. 
Dec 13 01:55:58.588653 systemd[1]: Started sshd@3-139.178.70.106:22-139.178.89.65:37930.service - OpenSSH per-connection server daemon (139.178.89.65:37930). Dec 13 01:55:58.590598 systemd-logind[1521]: Removed session 5. Dec 13 01:55:58.615074 sshd[1805]: Accepted publickey for core from 139.178.89.65 port 37930 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:55:58.615963 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:58.618459 systemd-logind[1521]: New session 6 of user core. Dec 13 01:55:58.622522 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:55:58.671114 sshd[1805]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:58.685686 systemd[1]: sshd@3-139.178.70.106:22-139.178.89.65:37930.service: Deactivated successfully. Dec 13 01:55:58.686471 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:55:58.687242 systemd-logind[1521]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:55:58.687958 systemd[1]: Started sshd@4-139.178.70.106:22-139.178.89.65:37932.service - OpenSSH per-connection server daemon (139.178.89.65:37932). Dec 13 01:55:58.689682 systemd-logind[1521]: Removed session 6. Dec 13 01:55:58.715630 sshd[1812]: Accepted publickey for core from 139.178.89.65 port 37932 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:55:58.716378 sshd[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:58.718680 systemd-logind[1521]: New session 7 of user core. Dec 13 01:55:58.726493 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:55:58.782923 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:55:58.783301 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:58.791674 sudo[1815]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:58.793099 sshd[1812]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:58.805867 systemd[1]: sshd@4-139.178.70.106:22-139.178.89.65:37932.service: Deactivated successfully. Dec 13 01:55:58.806731 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:55:58.807525 systemd-logind[1521]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:55:58.808283 systemd[1]: Started sshd@5-139.178.70.106:22-139.178.89.65:37944.service - OpenSSH per-connection server daemon (139.178.89.65:37944). Dec 13 01:55:58.810702 systemd-logind[1521]: Removed session 7. Dec 13 01:55:58.837341 sshd[1820]: Accepted publickey for core from 139.178.89.65 port 37944 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:55:58.838187 sshd[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:58.840750 systemd-logind[1521]: New session 8 of user core. Dec 13 01:55:58.847489 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 13 01:55:58.894779 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:55:58.894935 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:58.896696 sudo[1824]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:58.899419 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:55:58.899612 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:58.920625 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:55:58.921413 auditctl[1827]: No rules Dec 13 01:55:58.921666 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:55:58.921773 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:55:58.923521 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:55:58.939851 augenrules[1845]: No rules Dec 13 01:55:58.940569 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:55:58.941313 sudo[1823]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:58.942129 sshd[1820]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:58.953357 systemd[1]: sshd@5-139.178.70.106:22-139.178.89.65:37944.service: Deactivated successfully. Dec 13 01:55:58.954444 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:55:58.954878 systemd-logind[1521]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:55:58.960677 systemd[1]: Started sshd@6-139.178.70.106:22-139.178.89.65:37946.service - OpenSSH per-connection server daemon (139.178.89.65:37946). Dec 13 01:55:58.961740 systemd-logind[1521]: Removed session 8. Dec 13 01:55:58.986152 sshd[1853]: Accepted publickey for core from 139.178.89.65 port 37946 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:55:58.986968 sshd[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:58.989382 systemd-logind[1521]: New session 9 of user core. Dec 13 01:55:58.996502 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:55:59.043648 sudo[1856]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:55:59.043810 sudo[1856]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:59.325533 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:55:59.325610 (dockerd)[1872]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:55:59.577724 dockerd[1872]: time="2024-12-13T01:55:59.577650417Z" level=info msg="Starting up" Dec 13 01:55:59.652052 dockerd[1872]: time="2024-12-13T01:55:59.652031244Z" level=info msg="Loading containers: start." Dec 13 01:55:59.715463 kernel: Initializing XFRM netlink socket Dec 13 01:55:59.761426 systemd-networkd[1448]: docker0: Link UP Dec 13 01:55:59.773312 dockerd[1872]: time="2024-12-13T01:55:59.773243231Z" level=info msg="Loading containers: done." Dec 13 01:55:59.782713 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck501892258-merged.mount: Deactivated successfully. 
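[Annotation] dockerd is now starting and will shortly log "API listen on /run/docker.sock". The Engine API served on that UNIX socket can be exercised with nothing but the standard library; the sketch below issues a plain HTTP/1.0 request so the daemon closes the connection after responding. `GET /version` is a real Engine API route, but treat the exact response shape shown in the comment as an assumption.

```python
import json
import socket

# Sketch: talk to the Docker Engine API over the UNIX socket the daemon
# logs "API listen on" for. HTTP/1.0 => server closes after the response.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/run/docker.sock")
sock.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
raw = b""
while chunk := sock.recv(4096):
    raw += chunk
sock.close()
_header, _, body = raw.partition(b"\r\n\r\n")
print(json.loads(body))  # e.g. {"Version": "26.1.0", ...} (shape assumed)
```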
Dec 13 01:55:59.783235 dockerd[1872]: time="2024-12-13T01:55:59.783210062Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:55:59.783300 dockerd[1872]: time="2024-12-13T01:55:59.783281032Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:55:59.783355 dockerd[1872]: time="2024-12-13T01:55:59.783341540Z" level=info msg="Daemon has completed initialization" Dec 13 01:55:59.799012 dockerd[1872]: time="2024-12-13T01:55:59.798976409Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:55:59.799577 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:56:01.051654 containerd[1546]: time="2024-12-13T01:56:01.051628400Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 01:56:01.749022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2502822415.mount: Deactivated successfully. Dec 13 01:56:02.381244 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:56:02.388045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:56:02.459617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:56:02.462311 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:56:02.493465 kubelet[2079]: E1213 01:56:02.493362 2079 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:56:02.494951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:56:02.495043 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:56:02.933732 containerd[1546]: time="2024-12-13T01:56:02.933458335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:02.934103 containerd[1546]: time="2024-12-13T01:56:02.934085914Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:02.934186 containerd[1546]: time="2024-12-13T01:56:02.934163006Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Dec 13 01:56:02.935891 containerd[1546]: time="2024-12-13T01:56:02.935856132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:02.936627 containerd[1546]: time="2024-12-13T01:56:02.936524499Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 1.88487165s" Dec 13 01:56:02.936627 containerd[1546]: time="2024-12-13T01:56:02.936543888Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 01:56:02.950158 containerd[1546]: time="2024-12-13T01:56:02.950133690Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 01:56:03.265855 update_engine[1524]: I20241213 01:56:03.265460 1524 update_attempter.cc:509] Updating boot flags... 
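[Annotation] The reported pull duration above ("in 1.88487165s" for kube-apiserver:v1.30.8) can be cross-checked against the log's own timestamps: PullImage was requested at 01:56:01.051628400Z and the "Pulled image" record carries 01:56:02.936524499Z. The sketch below does that arithmetic, trimming the nanosecond fractions to microseconds so `datetime.fromisoformat` accepts them:

```python
from datetime import datetime

# Sketch: verify containerd's reported pull duration from its own
# timestamps (nanoseconds trimmed to microseconds for datetime).
def parse(ts: str) -> datetime:
    head, frac = ts.rstrip("Z").split(".")
    return datetime.fromisoformat(f"{head}.{frac[:6]}")

start = parse("2024-12-13T01:56:01.051628400Z")
done  = parse("2024-12-13T01:56:02.936524499Z")
print(done - start)  # ~0:00:01.884896, agreeing with the logged 1.88487165s
```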
Dec 13 01:56:03.289457 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2101) Dec 13 01:56:03.323404 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2103) Dec 13 01:56:04.175033 containerd[1546]: time="2024-12-13T01:56:04.175005091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:04.175644 containerd[1546]: time="2024-12-13T01:56:04.175602786Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:04.175644 containerd[1546]: time="2024-12-13T01:56:04.175626216Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Dec 13 01:56:04.177208 containerd[1546]: time="2024-12-13T01:56:04.177178619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:04.177860 containerd[1546]: time="2024-12-13T01:56:04.177787065Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.227629856s" Dec 13 01:56:04.177860 containerd[1546]: time="2024-12-13T01:56:04.177806818Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 01:56:04.191699 containerd[1546]: time="2024-12-13T01:56:04.191672707Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 01:56:05.533233 containerd[1546]: time="2024-12-13T01:56:05.532766102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:05.534022 containerd[1546]: time="2024-12-13T01:56:05.534003473Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Dec 13 01:56:05.534937 containerd[1546]: time="2024-12-13T01:56:05.534917977Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:05.537509 containerd[1546]: time="2024-12-13T01:56:05.537494202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:05.538149 containerd[1546]: time="2024-12-13T01:56:05.537891432Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.346194644s" Dec 13 01:56:05.538420 
containerd[1546]: time="2024-12-13T01:56:05.538409481Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 01:56:05.552209 containerd[1546]: time="2024-12-13T01:56:05.552179223Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:56:06.535040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2678392761.mount: Deactivated successfully. Dec 13 01:56:06.977377 containerd[1546]: time="2024-12-13T01:56:06.977341832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:06.982605 containerd[1546]: time="2024-12-13T01:56:06.982564535Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Dec 13 01:56:06.990588 containerd[1546]: time="2024-12-13T01:56:06.990538518Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:06.996760 containerd[1546]: time="2024-12-13T01:56:06.996727212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:06.997308 containerd[1546]: time="2024-12-13T01:56:06.997026967Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.444817663s" Dec 13 01:56:06.997308 containerd[1546]: time="2024-12-13T01:56:06.997048919Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 01:56:07.010080 containerd[1546]: time="2024-12-13T01:56:07.010054342Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:56:07.807462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3201934896.mount: Deactivated successfully. 
Dec 13 01:56:08.829413 containerd[1546]: time="2024-12-13T01:56:08.828403717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:08.832965 containerd[1546]: time="2024-12-13T01:56:08.832754536Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:56:08.840425 containerd[1546]: time="2024-12-13T01:56:08.840370901Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:08.845535 containerd[1546]: time="2024-12-13T01:56:08.845507167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:08.846219 containerd[1546]: time="2024-12-13T01:56:08.846102739Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.836024208s" Dec 13 01:56:08.846219 containerd[1546]: time="2024-12-13T01:56:08.846125713Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:56:08.859840 containerd[1546]: time="2024-12-13T01:56:08.859813301Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:56:09.510116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2695431800.mount: Deactivated successfully. 
Dec 13 01:56:09.511875 containerd[1546]: time="2024-12-13T01:56:09.511841938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:09.512674 containerd[1546]: time="2024-12-13T01:56:09.512598174Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 01:56:09.513469 containerd[1546]: time="2024-12-13T01:56:09.512919145Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:09.514041 containerd[1546]: time="2024-12-13T01:56:09.514019085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:09.514951 containerd[1546]: time="2024-12-13T01:56:09.514558293Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 654.72175ms" Dec 13 01:56:09.514951 containerd[1546]: time="2024-12-13T01:56:09.514576176Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:56:09.528539 containerd[1546]: time="2024-12-13T01:56:09.528351930Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 01:56:10.036985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1550876069.mount: Deactivated successfully. Dec 13 01:56:12.435295 containerd[1546]: time="2024-12-13T01:56:12.435261267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:12.435917 containerd[1546]: time="2024-12-13T01:56:12.435893364Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Dec 13 01:56:12.436416 containerd[1546]: time="2024-12-13T01:56:12.435999498Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:12.437792 containerd[1546]: time="2024-12-13T01:56:12.437773769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:12.438474 containerd[1546]: time="2024-12-13T01:56:12.438460281Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.910087154s" Dec 13 01:56:12.438521 containerd[1546]: time="2024-12-13T01:56:12.438513607Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 01:56:12.630120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
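[Annotation] That is the fifth scheduled restart of kubelet.service. The counter-1 through counter-5 records above land at 01:55:31.238366, 01:55:41.582701, 01:55:51.880089, 01:56:02.381244, and 01:56:12.630120, i.e. roughly 10.3 s apart. That cadence would be consistent with a unit using Restart=on-failure and RestartSec=10, though the unit file itself is not shown in this log, so those settings are an inference. A quick check of the spacing:

```python
from datetime import datetime

# Sketch: spacing of the kubelet restart jobs recorded in this log.
stamps = ["01:55:31.238366", "01:55:41.582701", "01:55:51.880089",
          "01:56:02.381244", "01:56:12.630120"]
times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
for a, b in zip(times, times[1:]):
    # ~10.2-10.5 s each; consistent with RestartSec=10 (assumed setting)
    print(b - a)
```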
Dec 13 01:56:12.638553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:56:13.023382 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:56:13.026501 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:56:13.140794 kubelet[2267]: E1213 01:56:13.140765 2267 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:56:13.142223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:56:13.142341 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:56:14.492009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:56:14.501632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:56:14.513975 systemd[1]: Reloading requested from client PID 2321 ('systemctl') (unit session-9.scope)... Dec 13 01:56:14.514076 systemd[1]: Reloading... Dec 13 01:56:14.576407 zram_generator::config[2358]: No configuration found. Dec 13 01:56:14.631750 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Dec 13 01:56:14.647132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:56:14.690277 systemd[1]: Reloading finished in 175 ms. Dec 13 01:56:14.712116 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:56:14.712166 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:56:14.712329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:56:14.716613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:56:15.011929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:56:15.014765 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:56:15.055676 kubelet[2426]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:56:15.055676 kubelet[2426]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:56:15.055676 kubelet[2426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:56:15.060668 kubelet[2426]: I1213 01:56:15.060642 2426 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:56:15.538341 kubelet[2426]: I1213 01:56:15.538026 2426 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:56:15.538341 kubelet[2426]: I1213 01:56:15.538052 2426 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:56:15.538341 kubelet[2426]: I1213 01:56:15.538214 2426 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:56:15.760135 kubelet[2426]: I1213 01:56:15.759924 2426 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:56:15.763074 kubelet[2426]: E1213 01:56:15.762062 2426 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:15.771419 kubelet[2426]: I1213 01:56:15.771177 2426 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:56:15.772578 kubelet[2426]: I1213 01:56:15.772339 2426 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:56:15.773941 kubelet[2426]: I1213 01:56:15.772368 2426 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:56:15.774406 kubelet[2426]: I1213 01:56:15.774286 2426 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:56:15.774406 kubelet[2426]: I1213 01:56:15.774302 2426 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:56:15.774486 kubelet[2426]: I1213 01:56:15.774477 2426 state_mem.go:36] "Initialized new in-memory state store" 
Dec 13 01:56:15.775944 kubelet[2426]: I1213 01:56:15.775867 2426 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:56:15.775944 kubelet[2426]: I1213 01:56:15.775883 2426 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:56:15.776531 kubelet[2426]: W1213 01:56:15.776210 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:15.776531 kubelet[2426]: E1213 01:56:15.776252 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:15.776531 kubelet[2426]: I1213 01:56:15.776495 2426 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:56:15.778073 kubelet[2426]: I1213 01:56:15.777998 2426 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:56:15.782872 kubelet[2426]: W1213 01:56:15.782578 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:15.782872 kubelet[2426]: E1213 01:56:15.782606 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:15.782872 kubelet[2426]: I1213 01:56:15.782819 2426 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:56:15.784173 kubelet[2426]: I1213 01:56:15.784061 2426 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:56:15.785792 kubelet[2426]: W1213 01:56:15.785583 2426 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
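[Annotation] From here on, kubelet[2426]'s client-go reflectors and certificate manager all fail the same way: "dial tcp 139.178.70.106:6443: connect: connection refused". Nothing is listening on the apiserver port yet; the kube-apiserver static pod has not started, so these retries are the normal bootstrap chicken-and-egg. A small probe sketch for that endpoint; host and port come from the logged URLs, while the timeout and retry pacing are arbitrary choices:

```python
import socket
import time

# Sketch: poll the apiserver endpoint the kubelet keeps dialing above.
# Address is from the logged URLs; retry pacing is arbitrary.
ADDR = ("139.178.70.106", 6443)

def wait_for_apiserver(timeout: float = 60.0, interval: float = 2.0) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection(ADDR, timeout=interval):
                return True          # TCP accepted: something is listening
        except OSError:              # connection refused / timed out
            time.sleep(interval)
    return False

print("apiserver up:", wait_for_apiserver())
```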
Dec 13 01:56:15.786953 kubelet[2426]: I1213 01:56:15.786876 2426 server.go:1264] "Started kubelet" Dec 13 01:56:15.788656 kubelet[2426]: I1213 01:56:15.788603 2426 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:56:15.792796 kubelet[2426]: I1213 01:56:15.792516 2426 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:56:15.792796 kubelet[2426]: I1213 01:56:15.792694 2426 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:56:15.793145 kubelet[2426]: I1213 01:56:15.793132 2426 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:56:15.799246 kubelet[2426]: E1213 01:56:15.799034 2426 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.106:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181099d024c64f30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:56:15.786864432 +0000 UTC m=+0.769994150,LastTimestamp:2024-12-13 01:56:15.786864432 +0000 UTC m=+0.769994150,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:56:15.799246 kubelet[2426]: I1213 01:56:15.799244 2426 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:56:15.800903 kubelet[2426]: E1213 01:56:15.800888 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:56:15.800940 kubelet[2426]: I1213 01:56:15.800913 2426 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:56:15.807776 kubelet[2426]: I1213 01:56:15.807759 2426 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:56:15.807813 kubelet[2426]: I1213 01:56:15.807796 2426 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:56:15.808000 kubelet[2426]: W1213 01:56:15.807972 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:15.808023 kubelet[2426]: E1213 01:56:15.808004 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:15.812773 kubelet[2426]: E1213 01:56:15.812682 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="200ms" Dec 13 01:56:15.815371 kubelet[2426]: I1213 01:56:15.815361 2426 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:56:15.815686 kubelet[2426]: I1213 01:56:15.815474 2426 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:56:15.834573 kubelet[2426]: E1213 01:56:15.834556 2426 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:56:15.835322 kubelet[2426]: I1213 01:56:15.834749 2426 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:56:15.851493 kubelet[2426]: I1213 01:56:15.851471 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:56:15.852418 kubelet[2426]: I1213 01:56:15.852337 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:56:15.852418 kubelet[2426]: I1213 01:56:15.852368 2426 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:56:15.852418 kubelet[2426]: I1213 01:56:15.852384 2426 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:56:15.852504 kubelet[2426]: E1213 01:56:15.852427 2426 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:56:15.859966 kubelet[2426]: W1213 01:56:15.859783 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:15.859966 kubelet[2426]: E1213 01:56:15.859809 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:15.872472 kubelet[2426]: I1213 01:56:15.872426 2426 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:56:15.872472 kubelet[2426]: I1213 01:56:15.872491 2426 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:56:15.872472 kubelet[2426]: I1213 01:56:15.872507 2426 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:56:15.894479 kubelet[2426]: I1213 01:56:15.894462 2426 policy_none.go:49] "None policy: Start" Dec 13 01:56:15.895130 kubelet[2426]: I1213 01:56:15.895104 2426 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:56:15.895173 kubelet[2426]: I1213 01:56:15.895134 2426 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:56:15.902423 kubelet[2426]: I1213 01:56:15.902377 2426 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:56:15.902633 kubelet[2426]: E1213 01:56:15.902620 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Dec 13 01:56:15.933079 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:56:15.939964 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:56:15.942768 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 01:56:15.951275 kubelet[2426]: I1213 01:56:15.950943 2426 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:56:15.951275 kubelet[2426]: I1213 01:56:15.951065 2426 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:56:15.951275 kubelet[2426]: I1213 01:56:15.951153 2426 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:56:15.953047 kubelet[2426]: I1213 01:56:15.952652 2426 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:56:15.953047 kubelet[2426]: E1213 01:56:15.952880 2426 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:56:15.953375 kubelet[2426]: I1213 01:56:15.953362 2426 topology_manager.go:215] "Topology Admit Handler" podUID="2b7a4d29a8a1420c13deb9a8402266f5" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:56:15.954753 kubelet[2426]: I1213 01:56:15.954020 2426 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:56:15.959858 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. Dec 13 01:56:15.978985 systemd[1]: Created slice kubepods-burstable-pod2b7a4d29a8a1420c13deb9a8402266f5.slice - libcontainer container kubepods-burstable-pod2b7a4d29a8a1420c13deb9a8402266f5.slice. Dec 13 01:56:15.982082 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. 
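Each pod admitted by the Topology Admit Handler above gets a dedicated cgroup under a QoS-class slice, and the slice names are derived mechanically: kubepods-<qos>-pod<uid>.slice, with dashes in the UID rewritten to underscores to satisfy systemd unit-name escaping. A small Go sketch of that naming rule, inferred from the slice names in this log (hypothetical helper, not kubelet source):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice reproduces the slice names created above, e.g.
    // kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice.
    // Guaranteed pods sit directly under kubepods.slice.
    func podSlice(qos, uid string) string {
        escaped := strings.ReplaceAll(uid, "-", "_")
        if qos == "guaranteed" {
            return fmt.Sprintf("kubepods-pod%s.slice", escaped)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
    }

    func main() {
        fmt.Println(podSlice("burstable", "b107a98bcf27297d642d248711a3fc70"))
        fmt.Println(podSlice("besteffort", "51371025-969b-45f8-8500-77d3025cfc19"))
    }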
Dec 13 01:56:16.013637 kubelet[2426]: E1213 01:56:16.013606 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="400ms" Dec 13 01:56:16.103781 kubelet[2426]: I1213 01:56:16.103709 2426 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:56:16.104006 kubelet[2426]: E1213 01:56:16.103919 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Dec 13 01:56:16.109421 kubelet[2426]: I1213 01:56:16.109398 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b7a4d29a8a1420c13deb9a8402266f5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2b7a4d29a8a1420c13deb9a8402266f5\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:56:16.109421 kubelet[2426]: I1213 01:56:16.109422 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:56:16.109511 kubelet[2426]: I1213 01:56:16.109444 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:56:16.109511 kubelet[2426]: I1213 01:56:16.109455 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:56:16.109511 kubelet[2426]: I1213 01:56:16.109472 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b7a4d29a8a1420c13deb9a8402266f5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2b7a4d29a8a1420c13deb9a8402266f5\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:56:16.109511 kubelet[2426]: I1213 01:56:16.109480 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b7a4d29a8a1420c13deb9a8402266f5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2b7a4d29a8a1420c13deb9a8402266f5\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:56:16.109511 kubelet[2426]: I1213 01:56:16.109489 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:56:16.109620 
kubelet[2426]: I1213 01:56:16.109498 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:56:16.109620 kubelet[2426]: I1213 01:56:16.109508 2426 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:56:16.277932 containerd[1546]: time="2024-12-13T01:56:16.277869020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:16.281411 containerd[1546]: time="2024-12-13T01:56:16.281293034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2b7a4d29a8a1420c13deb9a8402266f5,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:16.284004 containerd[1546]: time="2024-12-13T01:56:16.283983693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:16.415048 kubelet[2426]: E1213 01:56:16.415015 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="800ms" Dec 13 01:56:16.504998 kubelet[2426]: I1213 01:56:16.504972 2426 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:56:16.505280 kubelet[2426]: E1213 01:56:16.505257 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Dec 13 01:56:16.614789 kubelet[2426]: W1213 01:56:16.614736 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:16.614789 kubelet[2426]: E1213 01:56:16.614793 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:16.770508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242141713.mount: Deactivated successfully. 
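The RunPodSandbox messages above are containerd's side of CRI gRPC calls from the kubelet: one sandbox (the pause container plus its namespaces) per control-plane pod. The same call can be made directly against containerd's CRI socket with the published k8s.io/cri-api client; a sketch, assuming the default socket path and omitting the log directory, DNS, and Linux security fields a real kubelet request carries:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // Mirrors the metadata logged above for kube-scheduler-localhost.
        resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "kube-scheduler-localhost",
                    Uid:       "b107a98bcf27297d642d248711a3fc70",
                    Namespace: "kube-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("sandbox id:", resp.PodSandboxId)
    }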
Dec 13 01:56:16.771474 containerd[1546]: time="2024-12-13T01:56:16.770556285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:56:16.771734 containerd[1546]: time="2024-12-13T01:56:16.771715708Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:56:16.772705 containerd[1546]: time="2024-12-13T01:56:16.772687589Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:56:16.773179 containerd[1546]: time="2024-12-13T01:56:16.773160093Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:56:16.773467 containerd[1546]: time="2024-12-13T01:56:16.773447938Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:56:16.773936 containerd[1546]: time="2024-12-13T01:56:16.773920947Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:56:16.774461 containerd[1546]: time="2024-12-13T01:56:16.774447241Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 496.520822ms" Dec 13 01:56:16.775972 containerd[1546]: time="2024-12-13T01:56:16.775940035Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:56:16.776676 containerd[1546]: time="2024-12-13T01:56:16.776372435Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 495.040084ms" Dec 13 01:56:16.777367 containerd[1546]: time="2024-12-13T01:56:16.777326368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:56:16.778137 containerd[1546]: time="2024-12-13T01:56:16.778119533Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 494.085678ms" Dec 13 01:56:16.868037 containerd[1546]: time="2024-12-13T01:56:16.867686212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:16.868037 containerd[1546]: time="2024-12-13T01:56:16.867715518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:16.868037 containerd[1546]: time="2024-12-13T01:56:16.867734981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:16.868037 containerd[1546]: time="2024-12-13T01:56:16.867798095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:16.872616 containerd[1546]: time="2024-12-13T01:56:16.872521776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:16.872616 containerd[1546]: time="2024-12-13T01:56:16.872557084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:16.872616 containerd[1546]: time="2024-12-13T01:56:16.872567410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:16.873002 containerd[1546]: time="2024-12-13T01:56:16.872648368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:16.874244 containerd[1546]: time="2024-12-13T01:56:16.874203287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:16.874428 containerd[1546]: time="2024-12-13T01:56:16.874229711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:16.874428 containerd[1546]: time="2024-12-13T01:56:16.874416714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:16.874543 containerd[1546]: time="2024-12-13T01:56:16.874520507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:16.888161 systemd[1]: Started cri-containerd-e2fe457ad7d4df44521cfac271043cf5bb6abd5bcc1f9738ffb15f3e07a2f20a.scope - libcontainer container e2fe457ad7d4df44521cfac271043cf5bb6abd5bcc1f9738ffb15f3e07a2f20a. Dec 13 01:56:16.890941 systemd[1]: Started cri-containerd-009edc67d63ff37a75becb47d443b40919f21106d3f39e1e5fe2530bed42ce24.scope - libcontainer container 009edc67d63ff37a75becb47d443b40919f21106d3f39e1e5fe2530bed42ce24. Dec 13 01:56:16.900530 systemd[1]: Started cri-containerd-9da275ab5dee60047076e425643b361958e31b8e396b9d49813c5555901712de.scope - libcontainer container 9da275ab5dee60047076e425643b361958e31b8e396b9d49813c5555901712de. 
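Each sandbox started above runs the pause image whose pull is logged earlier; all three "Pulled image" lines resolve to the same image ID and repo digest, so one download serves the three pods. For reference, a short sketch pulling the same reference through containerd's Go client; CRI-managed images live in the "k8s.io" namespace, and error handling is reduced to panics:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pulling by tag resolves to the repo digest recorded in the log.
        image, err := client.Pull(ctx, "registry.k8s.io/pause:3.8", containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        fmt.Println(image.Name(), image.Target().Digest)
    }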
Dec 13 01:56:16.932649 containerd[1546]: time="2024-12-13T01:56:16.932624217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2b7a4d29a8a1420c13deb9a8402266f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2fe457ad7d4df44521cfac271043cf5bb6abd5bcc1f9738ffb15f3e07a2f20a\"" Dec 13 01:56:16.939906 containerd[1546]: time="2024-12-13T01:56:16.939811972Z" level=info msg="CreateContainer within sandbox \"e2fe457ad7d4df44521cfac271043cf5bb6abd5bcc1f9738ffb15f3e07a2f20a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:56:16.946466 containerd[1546]: time="2024-12-13T01:56:16.946208381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"009edc67d63ff37a75becb47d443b40919f21106d3f39e1e5fe2530bed42ce24\"" Dec 13 01:56:16.948498 containerd[1546]: time="2024-12-13T01:56:16.948481939Z" level=info msg="CreateContainer within sandbox \"009edc67d63ff37a75becb47d443b40919f21106d3f39e1e5fe2530bed42ce24\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:56:16.949134 containerd[1546]: time="2024-12-13T01:56:16.949122292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"9da275ab5dee60047076e425643b361958e31b8e396b9d49813c5555901712de\"" Dec 13 01:56:16.950329 containerd[1546]: time="2024-12-13T01:56:16.950317444Z" level=info msg="CreateContainer within sandbox \"9da275ab5dee60047076e425643b361958e31b8e396b9d49813c5555901712de\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:56:16.970720 kubelet[2426]: W1213 01:56:16.970684 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:16.970831 kubelet[2426]: E1213 01:56:16.970822 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:16.989349 containerd[1546]: time="2024-12-13T01:56:16.989314812Z" level=info msg="CreateContainer within sandbox \"e2fe457ad7d4df44521cfac271043cf5bb6abd5bcc1f9738ffb15f3e07a2f20a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aabafb41d9afc3b2d9d14a7b5b6d749261bfe13765680633d660db16f0c00426\"" Dec 13 01:56:16.989722 containerd[1546]: time="2024-12-13T01:56:16.989699281Z" level=info msg="CreateContainer within sandbox \"009edc67d63ff37a75becb47d443b40919f21106d3f39e1e5fe2530bed42ce24\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e299dabfe33b14a8e96485a70240ad9b888af33d08e3391ae5ef457d3d48a101\"" Dec 13 01:56:16.990446 containerd[1546]: time="2024-12-13T01:56:16.990023562Z" level=info msg="StartContainer for \"e299dabfe33b14a8e96485a70240ad9b888af33d08e3391ae5ef457d3d48a101\"" Dec 13 01:56:16.990958 containerd[1546]: time="2024-12-13T01:56:16.990946056Z" level=info msg="CreateContainer within sandbox \"9da275ab5dee60047076e425643b361958e31b8e396b9d49813c5555901712de\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bf636d60c73aa95f34087cda81215f45c71e6006c08dbeb4afdd368ed460b528\"" Dec 13 01:56:16.991061 containerd[1546]: time="2024-12-13T01:56:16.991051470Z" level=info msg="StartContainer for \"aabafb41d9afc3b2d9d14a7b5b6d749261bfe13765680633d660db16f0c00426\"" Dec 13 01:56:16.997076 containerd[1546]: time="2024-12-13T01:56:16.997051724Z" level=info msg="StartContainer for \"bf636d60c73aa95f34087cda81215f45c71e6006c08dbeb4afdd368ed460b528\"" Dec 13 01:56:17.008490 systemd[1]: Started cri-containerd-aabafb41d9afc3b2d9d14a7b5b6d749261bfe13765680633d660db16f0c00426.scope - libcontainer container aabafb41d9afc3b2d9d14a7b5b6d749261bfe13765680633d660db16f0c00426. Dec 13 01:56:17.012696 systemd[1]: Started cri-containerd-e299dabfe33b14a8e96485a70240ad9b888af33d08e3391ae5ef457d3d48a101.scope - libcontainer container e299dabfe33b14a8e96485a70240ad9b888af33d08e3391ae5ef457d3d48a101. Dec 13 01:56:17.026491 systemd[1]: Started cri-containerd-bf636d60c73aa95f34087cda81215f45c71e6006c08dbeb4afdd368ed460b528.scope - libcontainer container bf636d60c73aa95f34087cda81215f45c71e6006c08dbeb4afdd368ed460b528. Dec 13 01:56:17.045167 containerd[1546]: time="2024-12-13T01:56:17.045147569Z" level=info msg="StartContainer for \"aabafb41d9afc3b2d9d14a7b5b6d749261bfe13765680633d660db16f0c00426\" returns successfully" Dec 13 01:56:17.066743 containerd[1546]: time="2024-12-13T01:56:17.066721546Z" level=info msg="StartContainer for \"bf636d60c73aa95f34087cda81215f45c71e6006c08dbeb4afdd368ed460b528\" returns successfully" Dec 13 01:56:17.070008 containerd[1546]: time="2024-12-13T01:56:17.069991361Z" level=info msg="StartContainer for \"e299dabfe33b14a8e96485a70240ad9b888af33d08e3391ae5ef457d3d48a101\" returns successfully" Dec 13 01:56:17.216037 kubelet[2426]: E1213 01:56:17.215925 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="1.6s" Dec 13 01:56:17.221368 kubelet[2426]: W1213 01:56:17.221326 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:17.221368 kubelet[2426]: E1213 01:56:17.221358 2426 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:17.306872 kubelet[2426]: I1213 01:56:17.306684 2426 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:56:17.306999 kubelet[2426]: E1213 01:56:17.306987 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Dec 13 01:56:17.443007 kubelet[2426]: W1213 01:56:17.442953 2426 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:17.443007 kubelet[2426]: E1213 01:56:17.442994 2426 
reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Dec 13 01:56:18.880119 kubelet[2426]: E1213 01:56:18.880085 2426 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:56:18.908706 kubelet[2426]: I1213 01:56:18.908687 2426 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:56:18.920272 kubelet[2426]: I1213 01:56:18.920251 2426 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:56:18.924733 kubelet[2426]: E1213 01:56:18.924716 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:56:19.025314 kubelet[2426]: E1213 01:56:19.025287 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:56:19.126112 kubelet[2426]: E1213 01:56:19.126083 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:56:19.226896 kubelet[2426]: E1213 01:56:19.226811 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:56:19.327777 kubelet[2426]: E1213 01:56:19.327739 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:56:19.784964 kubelet[2426]: I1213 01:56:19.784937 2426 apiserver.go:52] "Watching apiserver" Dec 13 01:56:19.808160 kubelet[2426]: I1213 01:56:19.808134 2426 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:56:20.553598 systemd[1]: Reloading requested from client PID 2700 ('systemctl') (unit session-9.scope)... Dec 13 01:56:20.553609 systemd[1]: Reloading... Dec 13 01:56:20.602918 zram_generator::config[2740]: No configuration found. Dec 13 01:56:20.669293 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Dec 13 01:56:20.684311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:56:20.734430 systemd[1]: Reloading finished in 180 ms. Dec 13 01:56:20.760281 kubelet[2426]: E1213 01:56:20.760189 2426 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.181099d024c64f30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:56:15.786864432 +0000 UTC m=+0.769994150,LastTimestamp:2024-12-13 01:56:15.786864432 +0000 UTC m=+0.769994150,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:56:20.760543 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
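The reflector warnings recurring through this boot follow client-go's list-then-watch pattern: each reflector first LISTs the resource (the ?limit=500&resourceVersion=0 requests above) and then WATCHes from the returned resourceVersion, restarting with backoff when the connection fails. A compact sketch of standing up the same machinery through a shared informer factory; the kubeconfig path is an assumption for illustration:

    package main

    import (
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // One reflector per informer: LIST, then WATCH, retried on failure.
        factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
        factory.Core().V1().Services().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                fmt.Println("service added:", obj.(*v1.Service).Name)
            },
        })

        stop := make(chan struct{})
        factory.Start(stop)
        factory.WaitForCacheSync(stop)
        select {} // keep watching
    }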
Dec 13 01:56:20.767017 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:56:20.767186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:56:20.771606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:56:20.973608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:56:20.982896 (kubelet)[2805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:56:21.082143 kubelet[2805]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:56:21.082143 kubelet[2805]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:56:21.082143 kubelet[2805]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:56:21.083136 kubelet[2805]: I1213 01:56:21.083111 2805 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:56:21.085948 kubelet[2805]: I1213 01:56:21.085924 2805 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:56:21.085948 kubelet[2805]: I1213 01:56:21.085940 2805 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:56:21.086074 kubelet[2805]: I1213 01:56:21.086061 2805 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:56:21.086878 kubelet[2805]: I1213 01:56:21.086866 2805 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:56:21.087799 kubelet[2805]: I1213 01:56:21.087538 2805 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:56:21.094002 kubelet[2805]: I1213 01:56:21.093987 2805 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:56:21.094497 kubelet[2805]: I1213 01:56:21.094269 2805 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:56:21.094497 kubelet[2805]: I1213 01:56:21.094290 2805 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:56:21.094497 kubelet[2805]: I1213 01:56:21.094403 2805 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:56:21.094497 kubelet[2805]: I1213 01:56:21.094411 2805 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:56:21.094497 kubelet[2805]: I1213 01:56:21.094443 2805 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:56:21.094724 kubelet[2805]: I1213 01:56:21.094717 2805 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:56:21.094765 kubelet[2805]: I1213 01:56:21.094754 2805 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:56:21.095054 kubelet[2805]: I1213 01:56:21.095047 2805 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:56:21.095099 kubelet[2805]: I1213 01:56:21.095094 2805 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:56:21.098222 kubelet[2805]: I1213 01:56:21.098207 2805 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:56:21.099553 kubelet[2805]: I1213 01:56:21.098376 2805 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:56:21.099553 kubelet[2805]: I1213 01:56:21.098615 2805 server.go:1264] "Started kubelet" Dec 13 01:56:21.100196 kubelet[2805]: I1213 01:56:21.100186 2805 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:56:21.103985 kubelet[2805]: I1213 01:56:21.103961 2805 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:56:21.104648 kubelet[2805]: I1213 01:56:21.104639 2805 server.go:455] "Adding debug handlers to 
kubelet server" Dec 13 01:56:21.105147 kubelet[2805]: I1213 01:56:21.105122 2805 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:56:21.105278 kubelet[2805]: I1213 01:56:21.105271 2805 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:56:21.110717 kubelet[2805]: I1213 01:56:21.110702 2805 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:56:21.111627 kubelet[2805]: I1213 01:56:21.111541 2805 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:56:21.113068 kubelet[2805]: I1213 01:56:21.111720 2805 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:56:21.113068 kubelet[2805]: I1213 01:56:21.112528 2805 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:56:21.113068 kubelet[2805]: I1213 01:56:21.112578 2805 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:56:21.115340 kubelet[2805]: I1213 01:56:21.115320 2805 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:56:21.116525 kubelet[2805]: I1213 01:56:21.116441 2805 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:56:21.117962 kubelet[2805]: I1213 01:56:21.117914 2805 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:56:21.120461 kubelet[2805]: I1213 01:56:21.120428 2805 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:56:21.120461 kubelet[2805]: I1213 01:56:21.120457 2805 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:56:21.120563 kubelet[2805]: E1213 01:56:21.120498 2805 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:56:21.120685 kubelet[2805]: E1213 01:56:21.120670 2805 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:56:21.149400 kubelet[2805]: I1213 01:56:21.149194 2805 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:56:21.149400 kubelet[2805]: I1213 01:56:21.149204 2805 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:56:21.149400 kubelet[2805]: I1213 01:56:21.149215 2805 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:56:21.149400 kubelet[2805]: I1213 01:56:21.149331 2805 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:56:21.149400 kubelet[2805]: I1213 01:56:21.149339 2805 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:56:21.149400 kubelet[2805]: I1213 01:56:21.149349 2805 policy_none.go:49] "None policy: Start" Dec 13 01:56:21.149859 kubelet[2805]: I1213 01:56:21.149845 2805 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:56:21.149859 kubelet[2805]: I1213 01:56:21.149860 2805 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:56:21.149944 kubelet[2805]: I1213 01:56:21.149932 2805 state_mem.go:75] "Updated machine memory state" Dec 13 01:56:21.152484 kubelet[2805]: I1213 01:56:21.152463 2805 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:56:21.152718 kubelet[2805]: I1213 01:56:21.152563 2805 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:56:21.152718 kubelet[2805]: I1213 01:56:21.152621 2805 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:56:21.212588 kubelet[2805]: I1213 01:56:21.212573 2805 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:56:21.221447 kubelet[2805]: I1213 01:56:21.221214 2805 topology_manager.go:215] "Topology Admit Handler" podUID="2b7a4d29a8a1420c13deb9a8402266f5" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:56:21.221447 kubelet[2805]: I1213 01:56:21.221278 2805 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:56:21.221447 kubelet[2805]: I1213 01:56:21.221316 2805 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:56:21.233819 kubelet[2805]: I1213 01:56:21.233745 2805 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:56:21.233998 kubelet[2805]: I1213 01:56:21.233918 2805 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:56:21.255284 kubelet[2805]: E1213 01:56:21.255213 2805 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 01:56:21.412995 kubelet[2805]: I1213 01:56:21.412956 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b7a4d29a8a1420c13deb9a8402266f5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2b7a4d29a8a1420c13deb9a8402266f5\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:56:21.412995 kubelet[2805]: I1213 01:56:21.412997 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/2b7a4d29a8a1420c13deb9a8402266f5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2b7a4d29a8a1420c13deb9a8402266f5\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:56:21.413186 kubelet[2805]: I1213 01:56:21.413018 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:56:21.413186 kubelet[2805]: I1213 01:56:21.413035 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:56:21.413186 kubelet[2805]: I1213 01:56:21.413047 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:56:21.413186 kubelet[2805]: I1213 01:56:21.413058 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b7a4d29a8a1420c13deb9a8402266f5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2b7a4d29a8a1420c13deb9a8402266f5\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:56:21.413186 kubelet[2805]: I1213 01:56:21.413072 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:56:21.413284 kubelet[2805]: I1213 01:56:21.413085 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:56:21.413284 kubelet[2805]: I1213 01:56:21.413096 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:56:22.100501 kubelet[2805]: I1213 01:56:22.100478 2805 apiserver.go:52] "Watching apiserver" Dec 13 01:56:22.111823 kubelet[2805]: I1213 01:56:22.111781 2805 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:56:22.176412 kubelet[2805]: E1213 01:56:22.175133 2805 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:56:22.216850 kubelet[2805]: I1213 
01:56:22.215321 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.215308664 podStartE2EDuration="2.215308664s" podCreationTimestamp="2024-12-13 01:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:56:22.205365151 +0000 UTC m=+1.174687732" watchObservedRunningTime="2024-12-13 01:56:22.215308664 +0000 UTC m=+1.184631241" Dec 13 01:56:22.223235 kubelet[2805]: I1213 01:56:22.223197 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.223188667 podStartE2EDuration="1.223188667s" podCreationTimestamp="2024-12-13 01:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:56:22.219778497 +0000 UTC m=+1.189101078" watchObservedRunningTime="2024-12-13 01:56:22.223188667 +0000 UTC m=+1.192511239" Dec 13 01:56:22.235589 kubelet[2805]: I1213 01:56:22.235556 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.23554583 podStartE2EDuration="1.23554583s" podCreationTimestamp="2024-12-13 01:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:56:22.228834165 +0000 UTC m=+1.198156747" watchObservedRunningTime="2024-12-13 01:56:22.23554583 +0000 UTC m=+1.204868412" Dec 13 01:56:25.437995 sudo[1856]: pam_unix(sudo:session): session closed for user root Dec 13 01:56:25.439935 sshd[1853]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:25.441751 systemd-logind[1521]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:56:25.442347 systemd[1]: sshd@6-139.178.70.106:22-139.178.89.65:37946.service: Deactivated successfully. Dec 13 01:56:25.444102 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:56:25.444273 systemd[1]: session-9.scope: Consumed 2.859s CPU time, 190.8M memory peak, 0B memory swap peak. Dec 13 01:56:25.445716 systemd-logind[1521]: Removed session 9. Dec 13 01:56:36.942111 kubelet[2805]: I1213 01:56:36.942088 2805 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:56:36.947613 containerd[1546]: time="2024-12-13T01:56:36.947581634Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:56:36.947812 kubelet[2805]: I1213 01:56:36.947749 2805 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:56:37.150332 kubelet[2805]: I1213 01:56:37.149058 2805 topology_manager.go:215] "Topology Admit Handler" podUID="51371025-969b-45f8-8500-77d3025cfc19" podNamespace="kube-system" podName="kube-proxy-tck26" Dec 13 01:56:37.156502 systemd[1]: Created slice kubepods-besteffort-pod51371025_969b_45f8_8500_77d3025cfc19.slice - libcontainer container kubepods-besteffort-pod51371025_969b_45f8_8500_77d3025cfc19.slice. 
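The nodeConfig dump earlier in this restart lists the default hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%): each signal compares what is free against either an absolute quantity or a percentage of capacity. A simplified Go sketch of that comparison, with hypothetical types rather than the eviction manager's own:

    package main

    import "fmt"

    // threshold mirrors one HardEvictionThresholds entry: either an
    // absolute quantity in bytes or a fraction of capacity is set.
    type threshold struct {
        signal   string
        quantity int64   // absolute bytes; 0 when percentage-based
        percent  float64 // fraction of capacity; 0 when quantity-based
    }

    func (t threshold) crossed(available, capacity int64) bool {
        limit := t.quantity
        if t.percent > 0 {
            limit = int64(t.percent * float64(capacity))
        }
        return available < limit
    }

    func main() {
        memory := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
        nodefs := threshold{signal: "nodefs.available", percent: 0.1}        // 10%

        fmt.Println(memory.crossed(64<<20, 4<<30)) // true: only 64Mi free
        fmt.Println(nodefs.crossed(5<<30, 40<<30)) // false: 12.5% free
    }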
Dec 13 01:56:37.201287 kubelet[2805]: I1213 01:56:37.201189 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51371025-969b-45f8-8500-77d3025cfc19-xtables-lock\") pod \"kube-proxy-tck26\" (UID: \"51371025-969b-45f8-8500-77d3025cfc19\") " pod="kube-system/kube-proxy-tck26" Dec 13 01:56:37.201287 kubelet[2805]: I1213 01:56:37.201218 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51371025-969b-45f8-8500-77d3025cfc19-lib-modules\") pod \"kube-proxy-tck26\" (UID: \"51371025-969b-45f8-8500-77d3025cfc19\") " pod="kube-system/kube-proxy-tck26" Dec 13 01:56:37.201287 kubelet[2805]: I1213 01:56:37.201228 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51371025-969b-45f8-8500-77d3025cfc19-kube-proxy\") pod \"kube-proxy-tck26\" (UID: \"51371025-969b-45f8-8500-77d3025cfc19\") " pod="kube-system/kube-proxy-tck26" Dec 13 01:56:37.201287 kubelet[2805]: I1213 01:56:37.201237 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5fsx\" (UniqueName: \"kubernetes.io/projected/51371025-969b-45f8-8500-77d3025cfc19-kube-api-access-r5fsx\") pod \"kube-proxy-tck26\" (UID: \"51371025-969b-45f8-8500-77d3025cfc19\") " pod="kube-system/kube-proxy-tck26" Dec 13 01:56:37.462032 containerd[1546]: time="2024-12-13T01:56:37.461973303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tck26,Uid:51371025-969b-45f8-8500-77d3025cfc19,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:37.478255 containerd[1546]: time="2024-12-13T01:56:37.478188922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:37.478255 containerd[1546]: time="2024-12-13T01:56:37.478219782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:37.478255 containerd[1546]: time="2024-12-13T01:56:37.478226737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:37.478492 containerd[1546]: time="2024-12-13T01:56:37.478321529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:37.490829 systemd[1]: run-containerd-runc-k8s.io-8d098a49ae47fc90d6cb0ce0a5abdde76daf7692e4d9b016acf0ae7f53f95eb5-runc.UEDhCI.mount: Deactivated successfully. Dec 13 01:56:37.497505 systemd[1]: Started cri-containerd-8d098a49ae47fc90d6cb0ce0a5abdde76daf7692e4d9b016acf0ae7f53f95eb5.scope - libcontainer container 8d098a49ae47fc90d6cb0ce0a5abdde76daf7692e4d9b016acf0ae7f53f95eb5. 
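The volumes attached to kube-proxy-tck26 above are its standard set: the kube-proxy ConfigMap, the lib-modules and xtables-lock host paths, and a projected service-account token. The xtables-lock mount exists because every iptables writer on the node (kube-proxy, calico-node, and the kubelet itself) serializes through a single flock'd file; a sketch of that handshake, assuming the conventional /run/xtables.lock path:

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        f, err := os.OpenFile("/run/xtables.lock", os.O_CREATE, 0600)
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // LOCK_EX blocks until any concurrent iptables invocation releases it.
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            panic(err)
        }
        defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

        fmt.Println("holding xtables lock; safe to run iptables-restore")
    }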
Dec 13 01:56:37.510783 containerd[1546]: time="2024-12-13T01:56:37.510748162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tck26,Uid:51371025-969b-45f8-8500-77d3025cfc19,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d098a49ae47fc90d6cb0ce0a5abdde76daf7692e4d9b016acf0ae7f53f95eb5\"" Dec 13 01:56:37.513005 containerd[1546]: time="2024-12-13T01:56:37.512990556Z" level=info msg="CreateContainer within sandbox \"8d098a49ae47fc90d6cb0ce0a5abdde76daf7692e4d9b016acf0ae7f53f95eb5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:56:37.519745 containerd[1546]: time="2024-12-13T01:56:37.519710996Z" level=info msg="CreateContainer within sandbox \"8d098a49ae47fc90d6cb0ce0a5abdde76daf7692e4d9b016acf0ae7f53f95eb5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5e10caa50c7b44caa0a0a1306f66a9c89b7cc0af1cf382d50cad7ff251c3817d\"" Dec 13 01:56:37.520414 containerd[1546]: time="2024-12-13T01:56:37.520159651Z" level=info msg="StartContainer for \"5e10caa50c7b44caa0a0a1306f66a9c89b7cc0af1cf382d50cad7ff251c3817d\"" Dec 13 01:56:37.536509 systemd[1]: Started cri-containerd-5e10caa50c7b44caa0a0a1306f66a9c89b7cc0af1cf382d50cad7ff251c3817d.scope - libcontainer container 5e10caa50c7b44caa0a0a1306f66a9c89b7cc0af1cf382d50cad7ff251c3817d. Dec 13 01:56:37.553500 containerd[1546]: time="2024-12-13T01:56:37.553454781Z" level=info msg="StartContainer for \"5e10caa50c7b44caa0a0a1306f66a9c89b7cc0af1cf382d50cad7ff251c3817d\" returns successfully" Dec 13 01:56:37.958563 kubelet[2805]: I1213 01:56:37.958535 2805 topology_manager.go:215] "Topology Admit Handler" podUID="0ee0dc66-9886-4db6-83d9-3764976a6fee" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-gfh74" Dec 13 01:56:37.968583 systemd[1]: Created slice kubepods-besteffort-pod0ee0dc66_9886_4db6_83d9_3764976a6fee.slice - libcontainer container kubepods-besteffort-pod0ee0dc66_9886_4db6_83d9_3764976a6fee.slice. Dec 13 01:56:38.005936 kubelet[2805]: I1213 01:56:38.005867 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hg4z\" (UniqueName: \"kubernetes.io/projected/0ee0dc66-9886-4db6-83d9-3764976a6fee-kube-api-access-4hg4z\") pod \"tigera-operator-7bc55997bb-gfh74\" (UID: \"0ee0dc66-9886-4db6-83d9-3764976a6fee\") " pod="tigera-operator/tigera-operator-7bc55997bb-gfh74" Dec 13 01:56:38.005936 kubelet[2805]: I1213 01:56:38.005901 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0ee0dc66-9886-4db6-83d9-3764976a6fee-var-lib-calico\") pod \"tigera-operator-7bc55997bb-gfh74\" (UID: \"0ee0dc66-9886-4db6-83d9-3764976a6fee\") " pod="tigera-operator/tigera-operator-7bc55997bb-gfh74" Dec 13 01:56:38.275197 containerd[1546]: time="2024-12-13T01:56:38.275038529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-gfh74,Uid:0ee0dc66-9886-4db6-83d9-3764976a6fee,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:56:38.287841 containerd[1546]: time="2024-12-13T01:56:38.287725697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:38.287841 containerd[1546]: time="2024-12-13T01:56:38.287755930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:38.287841 containerd[1546]: time="2024-12-13T01:56:38.287763003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:38.287841 containerd[1546]: time="2024-12-13T01:56:38.287811223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:38.301499 systemd[1]: Started cri-containerd-259ee987127b5eef98cbcf6f18fda6e647e32967be195b55e995a93eb01cdeaa.scope - libcontainer container 259ee987127b5eef98cbcf6f18fda6e647e32967be195b55e995a93eb01cdeaa. Dec 13 01:56:38.329742 containerd[1546]: time="2024-12-13T01:56:38.329693972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-gfh74,Uid:0ee0dc66-9886-4db6-83d9-3764976a6fee,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"259ee987127b5eef98cbcf6f18fda6e647e32967be195b55e995a93eb01cdeaa\"" Dec 13 01:56:38.331149 containerd[1546]: time="2024-12-13T01:56:38.330948880Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:56:39.857963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1915598939.mount: Deactivated successfully. Dec 13 01:56:40.289006 containerd[1546]: time="2024-12-13T01:56:40.288432774Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:40.289815 containerd[1546]: time="2024-12-13T01:56:40.289722128Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21763693" Dec 13 01:56:40.290509 containerd[1546]: time="2024-12-13T01:56:40.290185968Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:40.297937 containerd[1546]: time="2024-12-13T01:56:40.297921605Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:40.299164 containerd[1546]: time="2024-12-13T01:56:40.299150658Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.968181219s" Dec 13 01:56:40.299234 containerd[1546]: time="2024-12-13T01:56:40.299223812Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:56:40.300717 containerd[1546]: time="2024-12-13T01:56:40.300700046Z" level=info msg="CreateContainer within sandbox \"259ee987127b5eef98cbcf6f18fda6e647e32967be195b55e995a93eb01cdeaa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:56:40.306404 containerd[1546]: time="2024-12-13T01:56:40.306350263Z" level=info msg="CreateContainer within sandbox \"259ee987127b5eef98cbcf6f18fda6e647e32967be195b55e995a93eb01cdeaa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"18dba0cec0737fac517f8d7dd6d56a30aba1cf54310501b2875cd740655663ad\"" Dec 13 01:56:40.307455 containerd[1546]: 
time="2024-12-13T01:56:40.307439394Z" level=info msg="StartContainer for \"18dba0cec0737fac517f8d7dd6d56a30aba1cf54310501b2875cd740655663ad\"" Dec 13 01:56:40.308460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1075926318.mount: Deactivated successfully. Dec 13 01:56:40.327482 systemd[1]: Started cri-containerd-18dba0cec0737fac517f8d7dd6d56a30aba1cf54310501b2875cd740655663ad.scope - libcontainer container 18dba0cec0737fac517f8d7dd6d56a30aba1cf54310501b2875cd740655663ad. Dec 13 01:56:40.342317 containerd[1546]: time="2024-12-13T01:56:40.342272001Z" level=info msg="StartContainer for \"18dba0cec0737fac517f8d7dd6d56a30aba1cf54310501b2875cd740655663ad\" returns successfully" Dec 13 01:56:41.184103 kubelet[2805]: I1213 01:56:41.182140 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tck26" podStartSLOduration=4.182130023 podStartE2EDuration="4.182130023s" podCreationTimestamp="2024-12-13 01:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:56:38.172571882 +0000 UTC m=+17.141894459" watchObservedRunningTime="2024-12-13 01:56:41.182130023 +0000 UTC m=+20.151452619" Dec 13 01:56:43.482522 kubelet[2805]: I1213 01:56:43.482481 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-gfh74" podStartSLOduration=4.513241408 podStartE2EDuration="6.482461272s" podCreationTimestamp="2024-12-13 01:56:37 +0000 UTC" firstStartedPulling="2024-12-13 01:56:38.330457103 +0000 UTC m=+17.299779673" lastFinishedPulling="2024-12-13 01:56:40.299676963 +0000 UTC m=+19.268999537" observedRunningTime="2024-12-13 01:56:41.182042962 +0000 UTC m=+20.151365542" watchObservedRunningTime="2024-12-13 01:56:43.482461272 +0000 UTC m=+22.451783848" Dec 13 01:56:43.482835 kubelet[2805]: I1213 01:56:43.482580 2805 topology_manager.go:215] "Topology Admit Handler" podUID="71070674-5131-42a1-ae6d-0c57ac8f1dad" podNamespace="calico-system" podName="calico-typha-65cf87b9c9-pgwcx" Dec 13 01:56:43.490306 systemd[1]: Created slice kubepods-besteffort-pod71070674_5131_42a1_ae6d_0c57ac8f1dad.slice - libcontainer container kubepods-besteffort-pod71070674_5131_42a1_ae6d_0c57ac8f1dad.slice. Dec 13 01:56:43.528340 kubelet[2805]: I1213 01:56:43.528315 2805 topology_manager.go:215] "Topology Admit Handler" podUID="e10f5629-676a-46ef-8679-29a4870409ab" podNamespace="calico-system" podName="calico-node-6f9sx" Dec 13 01:56:43.536039 systemd[1]: Created slice kubepods-besteffort-pode10f5629_676a_46ef_8679_29a4870409ab.slice - libcontainer container kubepods-besteffort-pode10f5629_676a_46ef_8679_29a4870409ab.slice. 
Dec 13 01:56:43.633911 kubelet[2805]: I1213 01:56:43.633883 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e10f5629-676a-46ef-8679-29a4870409ab-lib-modules\") pod \"calico-node-6f9sx\" (UID: \"e10f5629-676a-46ef-8679-29a4870409ab\") " pod="calico-system/calico-node-6f9sx"
Dec 13 01:56:43.633911 kubelet[2805]: I1213 01:56:43.633910 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e10f5629-676a-46ef-8679-29a4870409ab-policysync\") pod \"calico-node-6f9sx\" (UID: \"e10f5629-676a-46ef-8679-29a4870409ab\") " pod="calico-system/calico-node-6f9sx"
Dec 13 01:56:43.634019 kubelet[2805]: I1213 01:56:43.633922 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e10f5629-676a-46ef-8679-29a4870409ab-flexvol-driver-host\") pod \"calico-node-6f9sx\" (UID: \"e10f5629-676a-46ef-8679-29a4870409ab\") " pod="calico-system/calico-node-6f9sx"
Dec 13 01:56:43.634019 kubelet[2805]: I1213 01:56:43.633933 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w66kp\" (UniqueName: \"kubernetes.io/projected/e10f5629-676a-46ef-8679-29a4870409ab-kube-api-access-w66kp\") pod \"calico-node-6f9sx\" (UID: \"e10f5629-676a-46ef-8679-29a4870409ab\") " pod="calico-system/calico-node-6f9sx"
Dec 13 01:56:43.634019 kubelet[2805]: I1213 01:56:43.633942 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71070674-5131-42a1-ae6d-0c57ac8f1dad-tigera-ca-bundle\") pod \"calico-typha-65cf87b9c9-pgwcx\" (UID: \"71070674-5131-42a1-ae6d-0c57ac8f1dad\") " pod="calico-system/calico-typha-65cf87b9c9-pgwcx"
Dec 13 01:56:43.634019 kubelet[2805]: I1213 01:56:43.633951 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/71070674-5131-42a1-ae6d-0c57ac8f1dad-typha-certs\") pod \"calico-typha-65cf87b9c9-pgwcx\" (UID: \"71070674-5131-42a1-ae6d-0c57ac8f1dad\") " pod="calico-system/calico-typha-65cf87b9c9-pgwcx"
Dec 13 01:56:43.634019 kubelet[2805]: I1213 01:56:43.633960 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e10f5629-676a-46ef-8679-29a4870409ab-var-run-calico\") pod \"calico-node-6f9sx\" (UID: \"e10f5629-676a-46ef-8679-29a4870409ab\") " pod="calico-system/calico-node-6f9sx"
Dec 13 01:56:43.634107 kubelet[2805]: I1213 01:56:43.633969 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e10f5629-676a-46ef-8679-29a4870409ab-var-lib-calico\") pod \"calico-node-6f9sx\" (UID: \"e10f5629-676a-46ef-8679-29a4870409ab\") " pod="calico-system/calico-node-6f9sx"
Dec 13 01:56:43.634107 kubelet[2805]: I1213 01:56:43.633977 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e10f5629-676a-46ef-8679-29a4870409ab-cni-net-dir\") pod \"calico-node-6f9sx\" (UID: \"e10f5629-676a-46ef-8679-29a4870409ab\") " pod="calico-system/calico-node-6f9sx"
Dec 13 01:56:43.634107 kubelet[2805]: I1213 01:56:43.633987 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e10f5629-676a-46ef-8679-29a4870409ab-tigera-ca-bundle\") pod \"calico-node-6f9sx\" (UID: \"e10f5629-676a-46ef-8679-29a4870409ab\") " pod="calico-system/calico-node-6f9sx"
Dec 13 01:56:43.634107 kubelet[2805]: I1213 01:56:43.633997 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e10f5629-676a-46ef-8679-29a4870409ab-cni-bin-dir\") pod \"calico-node-6f9sx\" (UID: \"e10f5629-676a-46ef-8679-29a4870409ab\") " pod="calico-system/calico-node-6f9sx"
Dec 13 01:56:43.634107 kubelet[2805]: I1213 01:56:43.634005 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e10f5629-676a-46ef-8679-29a4870409ab-cni-log-dir\") pod \"calico-node-6f9sx\" (UID: \"e10f5629-676a-46ef-8679-29a4870409ab\") " pod="calico-system/calico-node-6f9sx"
Dec 13 01:56:43.634208 kubelet[2805]: I1213 01:56:43.634014 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzhr5\" (UniqueName: \"kubernetes.io/projected/71070674-5131-42a1-ae6d-0c57ac8f1dad-kube-api-access-jzhr5\") pod \"calico-typha-65cf87b9c9-pgwcx\" (UID: \"71070674-5131-42a1-ae6d-0c57ac8f1dad\") " pod="calico-system/calico-typha-65cf87b9c9-pgwcx"
Dec 13 01:56:43.634208 kubelet[2805]: I1213 01:56:43.634022 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e10f5629-676a-46ef-8679-29a4870409ab-xtables-lock\") pod \"calico-node-6f9sx\" (UID: \"e10f5629-676a-46ef-8679-29a4870409ab\") " pod="calico-system/calico-node-6f9sx"
Dec 13 01:56:43.634208 kubelet[2805]: I1213 01:56:43.634033 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e10f5629-676a-46ef-8679-29a4870409ab-node-certs\") pod \"calico-node-6f9sx\" (UID: \"e10f5629-676a-46ef-8679-29a4870409ab\") " pod="calico-system/calico-node-6f9sx"
Dec 13 01:56:43.651508 kubelet[2805]: I1213 01:56:43.651462 2805 topology_manager.go:215] "Topology Admit Handler" podUID="45fb1911-1ccb-4174-8fae-ff2967d97276" podNamespace="calico-system" podName="csi-node-driver-pdngg"
Dec 13 01:56:43.652030 kubelet[2805]: E1213 01:56:43.651875 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pdngg" podUID="45fb1911-1ccb-4174-8fae-ff2967d97276"
Dec 13 01:56:43.740102 kubelet[2805]: E1213 01:56:43.739418 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:56:43.740102 kubelet[2805]: W1213 01:56:43.739436 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:56:43.740526 kubelet[2805]: E1213 01:56:43.740494 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:56:43.795772 containerd[1546]: time="2024-12-13T01:56:43.795747915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65cf87b9c9-pgwcx,Uid:71070674-5131-42a1-ae6d-0c57ac8f1dad,Namespace:calico-system,Attempt:0,}"
Dec 13 01:56:43.809411 containerd[1546]: time="2024-12-13T01:56:43.809335129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:56:43.809706 containerd[1546]: time="2024-12-13T01:56:43.809650178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:56:43.809706 containerd[1546]: time="2024-12-13T01:56:43.809664518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:56:43.810474 containerd[1546]: time="2024-12-13T01:56:43.809731094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:56:43.833495 systemd[1]: Started cri-containerd-9d9799f04472142620277a1999004edffc6027bacff3af654dbbf4ca1929aeea.scope - libcontainer container 9d9799f04472142620277a1999004edffc6027bacff3af654dbbf4ca1929aeea.
Dec 13 01:56:43.835450 kubelet[2805]: I1213 01:56:43.835440 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/45fb1911-1ccb-4174-8fae-ff2967d97276-socket-dir\") pod \"csi-node-driver-pdngg\" (UID: \"45fb1911-1ccb-4174-8fae-ff2967d97276\") " pod="calico-system/csi-node-driver-pdngg"
Dec 13 01:56:43.835798 kubelet[2805]: I1213 01:56:43.835756 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpl69\" (UniqueName: \"kubernetes.io/projected/45fb1911-1ccb-4174-8fae-ff2967d97276-kube-api-access-gpl69\") pod \"csi-node-driver-pdngg\" (UID: \"45fb1911-1ccb-4174-8fae-ff2967d97276\") " pod="calico-system/csi-node-driver-pdngg"
Dec 13 01:56:43.836140 kubelet[2805]: I1213 01:56:43.836106 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/45fb1911-1ccb-4174-8fae-ff2967d97276-varrun\") pod \"csi-node-driver-pdngg\" (UID: \"45fb1911-1ccb-4174-8fae-ff2967d97276\") " pod="calico-system/csi-node-driver-pdngg"
Dec 13 01:56:43.836583 kubelet[2805]: I1213 01:56:43.836361 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45fb1911-1ccb-4174-8fae-ff2967d97276-kubelet-dir\") pod \"csi-node-driver-pdngg\" (UID: \"45fb1911-1ccb-4174-8fae-ff2967d97276\") " pod="calico-system/csi-node-driver-pdngg"
Dec 13 01:56:43.837075 kubelet[2805]: I1213 01:56:43.837043 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/45fb1911-1ccb-4174-8fae-ff2967d97276-registration-dir\") pod \"csi-node-driver-pdngg\" (UID: \"45fb1911-1ccb-4174-8fae-ff2967d97276\") " pod="calico-system/csi-node-driver-pdngg"
Dec 13 01:56:43.841613 containerd[1546]: time="2024-12-13T01:56:43.841517859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6f9sx,Uid:e10f5629-676a-46ef-8679-29a4870409ab,Namespace:calico-system,Attempt:0,}"
Dec 13 01:56:43.856437 containerd[1546]: time="2024-12-13T01:56:43.855694595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:56:43.856437 containerd[1546]: time="2024-12-13T01:56:43.855737200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:56:43.856437 containerd[1546]: time="2024-12-13T01:56:43.855747437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:56:43.856437 containerd[1546]: time="2024-12-13T01:56:43.855800197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:56:43.869506 systemd[1]: Started cri-containerd-25376105af5301b70e33c7985d65e46f6d3f63704f528b674b8cd2d9c1011e58.scope - libcontainer container 25376105af5301b70e33c7985d65e46f6d3f63704f528b674b8cd2d9c1011e58.
Dec 13 01:56:43.873369 containerd[1546]: time="2024-12-13T01:56:43.873347097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65cf87b9c9-pgwcx,Uid:71070674-5131-42a1-ae6d-0c57ac8f1dad,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d9799f04472142620277a1999004edffc6027bacff3af654dbbf4ca1929aeea\""
Dec 13 01:56:43.874641 containerd[1546]: time="2024-12-13T01:56:43.874580052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 01:56:43.887499 containerd[1546]: time="2024-12-13T01:56:43.887454736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6f9sx,Uid:e10f5629-676a-46ef-8679-29a4870409ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"25376105af5301b70e33c7985d65e46f6d3f63704f528b674b8cd2d9c1011e58\""
Dec 13 01:56:45.158269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974098336.mount: Deactivated successfully.
Dec 13 01:56:45.650002 containerd[1546]: time="2024-12-13T01:56:45.649909575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:56:45.651459 containerd[1546]: time="2024-12-13T01:56:45.651409846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Dec 13 01:56:45.653413 containerd[1546]: time="2024-12-13T01:56:45.653205728Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:56:45.656850 containerd[1546]: time="2024-12-13T01:56:45.656809924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:56:45.657165 containerd[1546]: time="2024-12-13T01:56:45.657147966Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.782094572s"
Dec 13 01:56:45.657196 containerd[1546]: time="2024-12-13T01:56:45.657167692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 13 01:56:45.658702 containerd[1546]: time="2024-12-13T01:56:45.658527706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 01:56:45.666594 containerd[1546]: time="2024-12-13T01:56:45.666571226Z" level=info msg="CreateContainer within sandbox \"9d9799f04472142620277a1999004edffc6027bacff3af654dbbf4ca1929aeea\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 01:56:45.670701 containerd[1546]: time="2024-12-13T01:56:45.670675747Z" level=info msg="CreateContainer within sandbox \"9d9799f04472142620277a1999004edffc6027bacff3af654dbbf4ca1929aeea\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2dedc833d357b462be1cb1c4071c0c5e02374fef6b47fdd4ef1463f0b2e0f397\""
Dec 13 01:56:45.671254 containerd[1546]: time="2024-12-13T01:56:45.671233638Z" level=info msg="StartContainer for \"2dedc833d357b462be1cb1c4071c0c5e02374fef6b47fdd4ef1463f0b2e0f397\""
Dec 13 01:56:45.692526 systemd[1]: Started cri-containerd-2dedc833d357b462be1cb1c4071c0c5e02374fef6b47fdd4ef1463f0b2e0f397.scope - libcontainer container 2dedc833d357b462be1cb1c4071c0c5e02374fef6b47fdd4ef1463f0b2e0f397.
Dec 13 01:56:45.763525 containerd[1546]: time="2024-12-13T01:56:45.763185428Z" level=info msg="StartContainer for \"2dedc833d357b462be1cb1c4071c0c5e02374fef6b47fdd4ef1463f0b2e0f397\" returns successfully"
Dec 13 01:56:46.121438 kubelet[2805]: E1213 01:56:46.121347 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pdngg" podUID="45fb1911-1ccb-4174-8fae-ff2967d97276"
Dec 13 01:56:46.249997 kubelet[2805]: E1213 01:56:46.249971 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:56:46.249997 kubelet[2805]: W1213 01:56:46.249991 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:56:46.250131 kubelet[2805]: E1213 01:56:46.250007 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:56:46.272020 kubelet[2805]: E1213 01:56:46.257342 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:56:46.272020 kubelet[2805]: W1213 01:56:46.257348 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:56:46.272020 kubelet[2805]: E1213 01:56:46.257353 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 13 01:56:46.926431 containerd[1546]: time="2024-12-13T01:56:46.926235062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:46.926985 containerd[1546]: time="2024-12-13T01:56:46.926625298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 13 01:56:46.926985 containerd[1546]: time="2024-12-13T01:56:46.926964821Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:46.927992 containerd[1546]: time="2024-12-13T01:56:46.927969733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:46.928593 containerd[1546]: time="2024-12-13T01:56:46.928337420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.269792998s" Dec 13 01:56:46.928593 containerd[1546]: time="2024-12-13T01:56:46.928355522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 01:56:46.930353 containerd[1546]: time="2024-12-13T01:56:46.930313983Z" level=info msg="CreateContainer within sandbox \"25376105af5301b70e33c7985d65e46f6d3f63704f528b674b8cd2d9c1011e58\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:56:46.946194 containerd[1546]: time="2024-12-13T01:56:46.946170625Z" level=info msg="CreateContainer within sandbox \"25376105af5301b70e33c7985d65e46f6d3f63704f528b674b8cd2d9c1011e58\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a7925324cc708212e4b16cbd703fa459fc094d5d93d569faa38f7da813d1da64\"" Dec 13 01:56:46.946500 containerd[1546]: time="2024-12-13T01:56:46.946477736Z" level=info msg="StartContainer for \"a7925324cc708212e4b16cbd703fa459fc094d5d93d569faa38f7da813d1da64\"" Dec 13 01:56:46.965518 systemd[1]: Started cri-containerd-a7925324cc708212e4b16cbd703fa459fc094d5d93d569faa38f7da813d1da64.scope - libcontainer container a7925324cc708212e4b16cbd703fa459fc094d5d93d569faa38f7da813d1da64. Dec 13 01:56:46.980605 containerd[1546]: time="2024-12-13T01:56:46.980498410Z" level=info msg="StartContainer for \"a7925324cc708212e4b16cbd703fa459fc094d5d93d569faa38f7da813d1da64\" returns successfully" Dec 13 01:56:46.989236 systemd[1]: cri-containerd-a7925324cc708212e4b16cbd703fa459fc094d5d93d569faa38f7da813d1da64.scope: Deactivated successfully. Dec 13 01:56:47.005930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7925324cc708212e4b16cbd703fa459fc094d5d93d569faa38f7da813d1da64-rootfs.mount: Deactivated successfully. 
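The driver-call.go / plugins.go burst above is kubelet's FlexVolume prober walking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/: the nodeagent~uds directory exists, but its uds executable has not been installed yet, so every init call produces empty output that fails JSON unmarshalling. The flexvol-driver container started just above (from the pod2daemon-flexvol image) is what ships that binary. For context, a FlexVolume driver only has to answer init with a JSON status object on stdout; the Go sketch below is a hypothetical stand-in illustrating that protocol, not the real uds driver:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus is the JSON shape kubelet's driver-call.go expects on stdout
// after every FlexVolume invocation; the empty output logged above is what
// fails to unmarshal.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(status DriverStatus, exitCode int) {
	out, _ := json.Marshal(status)
	fmt.Println(string(out))
	os.Exit(exitCode)
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// "attach": false: this driver has no separate attach/detach phase.
		reply(DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}, 0)
	}
	// Any call this sketch does not implement.
	reply(DriverStatus{Status: "Not supported"}, 1)
}

Once a driver directory contains a working executable answering init this way, the prober's unmarshal succeeds and the warnings stop.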
Dec 13 01:56:47.187419 kubelet[2805]: I1213 01:56:47.187297 2805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:56:47.221003 kubelet[2805]: I1213 01:56:47.220961 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65cf87b9c9-pgwcx" podStartSLOduration=2.43757806 podStartE2EDuration="4.22094754s" podCreationTimestamp="2024-12-13 01:56:43 +0000 UTC" firstStartedPulling="2024-12-13 01:56:43.874310039 +0000 UTC m=+22.843632611" lastFinishedPulling="2024-12-13 01:56:45.657679518 +0000 UTC m=+24.627002091" observedRunningTime="2024-12-13 01:56:46.190497357 +0000 UTC m=+25.159819938" watchObservedRunningTime="2024-12-13 01:56:47.22094754 +0000 UTC m=+26.190270129" Dec 13 01:56:47.296172 containerd[1546]: time="2024-12-13T01:56:47.281033868Z" level=info msg="shim disconnected" id=a7925324cc708212e4b16cbd703fa459fc094d5d93d569faa38f7da813d1da64 namespace=k8s.io Dec 13 01:56:47.296172 containerd[1546]: time="2024-12-13T01:56:47.296065263Z" level=warning msg="cleaning up after shim disconnected" id=a7925324cc708212e4b16cbd703fa459fc094d5d93d569faa38f7da813d1da64 namespace=k8s.io Dec 13 01:56:47.296172 containerd[1546]: time="2024-12-13T01:56:47.296075914Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:48.121502 kubelet[2805]: E1213 01:56:48.121428 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pdngg" podUID="45fb1911-1ccb-4174-8fae-ff2967d97276" Dec 13 01:56:48.203871 containerd[1546]: time="2024-12-13T01:56:48.203411494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:56:50.121112 kubelet[2805]: E1213 01:56:50.120795 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pdngg" podUID="45fb1911-1ccb-4174-8fae-ff2967d97276" Dec 13 01:56:51.713818 containerd[1546]: time="2024-12-13T01:56:51.713779606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:51.719920 containerd[1546]: time="2024-12-13T01:56:51.719887602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 01:56:51.725488 containerd[1546]: time="2024-12-13T01:56:51.725442242Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:51.733013 containerd[1546]: time="2024-12-13T01:56:51.732982283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:51.733758 containerd[1546]: time="2024-12-13T01:56:51.733439224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.53000463s" Dec 13 01:56:51.733758 containerd[1546]: time="2024-12-13T01:56:51.733462032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:56:51.734954 containerd[1546]: time="2024-12-13T01:56:51.734886615Z" level=info msg="CreateContainer within sandbox \"25376105af5301b70e33c7985d65e46f6d3f63704f528b674b8cd2d9c1011e58\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:56:51.870683 containerd[1546]: time="2024-12-13T01:56:51.870625793Z" level=info msg="CreateContainer within sandbox \"25376105af5301b70e33c7985d65e46f6d3f63704f528b674b8cd2d9c1011e58\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"79a288bc88f82f62f4bdd774fc3aed10ab3d7113a58c47d231317256a6392771\"" Dec 13 01:56:51.871908 containerd[1546]: time="2024-12-13T01:56:51.871498752Z" level=info msg="StartContainer for \"79a288bc88f82f62f4bdd774fc3aed10ab3d7113a58c47d231317256a6392771\"" Dec 13 01:56:51.942617 systemd[1]: Started cri-containerd-79a288bc88f82f62f4bdd774fc3aed10ab3d7113a58c47d231317256a6392771.scope - libcontainer container 79a288bc88f82f62f4bdd774fc3aed10ab3d7113a58c47d231317256a6392771. Dec 13 01:56:51.987803 containerd[1546]: time="2024-12-13T01:56:51.987735618Z" level=info msg="StartContainer for \"79a288bc88f82f62f4bdd774fc3aed10ab3d7113a58c47d231317256a6392771\" returns successfully" Dec 13 01:56:52.121258 kubelet[2805]: E1213 01:56:52.121230 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pdngg" podUID="45fb1911-1ccb-4174-8fae-ff2967d97276" Dec 13 01:56:53.113868 systemd[1]: cri-containerd-79a288bc88f82f62f4bdd774fc3aed10ab3d7113a58c47d231317256a6392771.scope: Deactivated successfully. Dec 13 01:56:53.148662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79a288bc88f82f62f4bdd774fc3aed10ab3d7113a58c47d231317256a6392771-rootfs.mount: Deactivated successfully. 
Dec 13 01:56:53.196407 kubelet[2805]: I1213 01:56:53.196365 2805 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:56:53.409313 kubelet[2805]: I1213 01:56:53.409215 2805 topology_manager.go:215] "Topology Admit Handler" podUID="a6e8c90c-6e1c-4e5f-a197-06bf87bcca01" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xpjth" Dec 13 01:56:53.490099 containerd[1546]: time="2024-12-13T01:56:53.490039031Z" level=info msg="shim disconnected" id=79a288bc88f82f62f4bdd774fc3aed10ab3d7113a58c47d231317256a6392771 namespace=k8s.io Dec 13 01:56:53.490099 containerd[1546]: time="2024-12-13T01:56:53.490092339Z" level=warning msg="cleaning up after shim disconnected" id=79a288bc88f82f62f4bdd774fc3aed10ab3d7113a58c47d231317256a6392771 namespace=k8s.io Dec 13 01:56:53.490099 containerd[1546]: time="2024-12-13T01:56:53.490100885Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:53.502301 containerd[1546]: time="2024-12-13T01:56:53.502134048Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:56:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:56:53.508284 kubelet[2805]: I1213 01:56:53.507874 2805 topology_manager.go:215] "Topology Admit Handler" podUID="5b46a7a9-9e2f-4592-85a2-7b01c18de070" podNamespace="kube-system" podName="coredns-7db6d8ff4d-httmm" Dec 13 01:56:53.508284 kubelet[2805]: I1213 01:56:53.507989 2805 topology_manager.go:215] "Topology Admit Handler" podUID="2a968e95-a7f1-4b6c-9c03-61456caad8ed" podNamespace="calico-system" podName="calico-kube-controllers-745cd949dc-994lc" Dec 13 01:56:53.508284 kubelet[2805]: I1213 01:56:53.508049 2805 topology_manager.go:215] "Topology Admit Handler" podUID="f3155228-6a82-4b05-aa2f-1efe3f581565" podNamespace="calico-apiserver" podName="calico-apiserver-54b9d6d844-snjl9" Dec 13 01:56:53.508284 kubelet[2805]: I1213 01:56:53.508113 2805 topology_manager.go:215] "Topology Admit Handler" podUID="ad033622-c6e5-4ed2-aa0b-b2adc8bb3378" podNamespace="calico-apiserver" podName="calico-apiserver-54b9d6d844-bx8kz" Dec 13 01:56:53.554041 systemd[1]: Created slice kubepods-besteffort-podad033622_c6e5_4ed2_aa0b_b2adc8bb3378.slice - libcontainer container kubepods-besteffort-podad033622_c6e5_4ed2_aa0b_b2adc8bb3378.slice. Dec 13 01:56:53.563216 systemd[1]: Created slice kubepods-burstable-pod5b46a7a9_9e2f_4592_85a2_7b01c18de070.slice - libcontainer container kubepods-burstable-pod5b46a7a9_9e2f_4592_85a2_7b01c18de070.slice. Dec 13 01:56:53.566816 systemd[1]: Created slice kubepods-burstable-poda6e8c90c_6e1c_4e5f_a197_06bf87bcca01.slice - libcontainer container kubepods-burstable-poda6e8c90c_6e1c_4e5f_a197_06bf87bcca01.slice. Dec 13 01:56:53.571763 systemd[1]: Created slice kubepods-besteffort-pod2a968e95_a7f1_4b6c_9c03_61456caad8ed.slice - libcontainer container kubepods-besteffort-pod2a968e95_a7f1_4b6c_9c03_61456caad8ed.slice. Dec 13 01:56:53.575353 systemd[1]: Created slice kubepods-besteffort-podf3155228_6a82_4b05_aa2f_1efe3f581565.slice - libcontainer container kubepods-besteffort-podf3155228_6a82_4b05_aa2f_1efe3f581565.slice. 
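The Created slice entries above follow a visible naming rule: kubepods-<qosClass>-pod<uid>.slice, with the pod UID's dashes rewritten as underscores, since systemd reserves "-" as the slice hierarchy separator. A small Go sketch of that mapping, inferred from these entries rather than taken from kubelet source:

package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the pattern in the "Created slice" entries:
// kubepods-<qosClass>-pod<uid>.slice, with "-" in the UID rewritten to "_".
func sliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID of calico-kube-controllers-745cd949dc-994lc from the log.
	fmt.Println(sliceName("besteffort", "2a968e95-a7f1-4b6c-9c03-61456caad8ed"))
	// Output: kubepods-besteffort-pod2a968e95_a7f1_4b6c_9c03_61456caad8ed.slice
}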
Dec 13 01:56:53.657903 kubelet[2805]: I1213 01:56:53.657882 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b46a7a9-9e2f-4592-85a2-7b01c18de070-config-volume\") pod \"coredns-7db6d8ff4d-httmm\" (UID: \"5b46a7a9-9e2f-4592-85a2-7b01c18de070\") " pod="kube-system/coredns-7db6d8ff4d-httmm" Dec 13 01:56:53.658196 kubelet[2805]: I1213 01:56:53.658025 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9d65\" (UniqueName: \"kubernetes.io/projected/5b46a7a9-9e2f-4592-85a2-7b01c18de070-kube-api-access-n9d65\") pod \"coredns-7db6d8ff4d-httmm\" (UID: \"5b46a7a9-9e2f-4592-85a2-7b01c18de070\") " pod="kube-system/coredns-7db6d8ff4d-httmm" Dec 13 01:56:53.658196 kubelet[2805]: I1213 01:56:53.658050 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ad033622-c6e5-4ed2-aa0b-b2adc8bb3378-calico-apiserver-certs\") pod \"calico-apiserver-54b9d6d844-bx8kz\" (UID: \"ad033622-c6e5-4ed2-aa0b-b2adc8bb3378\") " pod="calico-apiserver/calico-apiserver-54b9d6d844-bx8kz" Dec 13 01:56:53.658196 kubelet[2805]: I1213 01:56:53.658066 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a968e95-a7f1-4b6c-9c03-61456caad8ed-tigera-ca-bundle\") pod \"calico-kube-controllers-745cd949dc-994lc\" (UID: \"2a968e95-a7f1-4b6c-9c03-61456caad8ed\") " pod="calico-system/calico-kube-controllers-745cd949dc-994lc" Dec 13 01:56:53.658196 kubelet[2805]: I1213 01:56:53.658080 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmrgj\" (UniqueName: \"kubernetes.io/projected/2a968e95-a7f1-4b6c-9c03-61456caad8ed-kube-api-access-mmrgj\") pod \"calico-kube-controllers-745cd949dc-994lc\" (UID: \"2a968e95-a7f1-4b6c-9c03-61456caad8ed\") " pod="calico-system/calico-kube-controllers-745cd949dc-994lc" Dec 13 01:56:53.658196 kubelet[2805]: I1213 01:56:53.658096 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj7m2\" (UniqueName: \"kubernetes.io/projected/f3155228-6a82-4b05-aa2f-1efe3f581565-kube-api-access-wj7m2\") pod \"calico-apiserver-54b9d6d844-snjl9\" (UID: \"f3155228-6a82-4b05-aa2f-1efe3f581565\") " pod="calico-apiserver/calico-apiserver-54b9d6d844-snjl9" Dec 13 01:56:53.658716 kubelet[2805]: I1213 01:56:53.658110 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fhpf\" (UniqueName: \"kubernetes.io/projected/a6e8c90c-6e1c-4e5f-a197-06bf87bcca01-kube-api-access-5fhpf\") pod \"coredns-7db6d8ff4d-xpjth\" (UID: \"a6e8c90c-6e1c-4e5f-a197-06bf87bcca01\") " pod="kube-system/coredns-7db6d8ff4d-xpjth" Dec 13 01:56:53.658716 kubelet[2805]: I1213 01:56:53.658129 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f3155228-6a82-4b05-aa2f-1efe3f581565-calico-apiserver-certs\") pod \"calico-apiserver-54b9d6d844-snjl9\" (UID: \"f3155228-6a82-4b05-aa2f-1efe3f581565\") " pod="calico-apiserver/calico-apiserver-54b9d6d844-snjl9" Dec 13 01:56:53.658716 kubelet[2805]: I1213 01:56:53.658145 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6e8c90c-6e1c-4e5f-a197-06bf87bcca01-config-volume\") pod \"coredns-7db6d8ff4d-xpjth\" (UID: \"a6e8c90c-6e1c-4e5f-a197-06bf87bcca01\") " pod="kube-system/coredns-7db6d8ff4d-xpjth" Dec 13 01:56:53.658716 kubelet[2805]: I1213 01:56:53.658159 2805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhcxm\" (UniqueName: \"kubernetes.io/projected/ad033622-c6e5-4ed2-aa0b-b2adc8bb3378-kube-api-access-nhcxm\") pod \"calico-apiserver-54b9d6d844-bx8kz\" (UID: \"ad033622-c6e5-4ed2-aa0b-b2adc8bb3378\") " pod="calico-apiserver/calico-apiserver-54b9d6d844-bx8kz" Dec 13 01:56:53.860187 containerd[1546]: time="2024-12-13T01:56:53.860112445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b9d6d844-bx8kz,Uid:ad033622-c6e5-4ed2-aa0b-b2adc8bb3378,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:56:53.894325 containerd[1546]: time="2024-12-13T01:56:53.894076781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b9d6d844-snjl9,Uid:f3155228-6a82-4b05-aa2f-1efe3f581565,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:56:53.896592 containerd[1546]: time="2024-12-13T01:56:53.896576490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xpjth,Uid:a6e8c90c-6e1c-4e5f-a197-06bf87bcca01,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:53.896785 containerd[1546]: time="2024-12-13T01:56:53.896774236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-httmm,Uid:5b46a7a9-9e2f-4592-85a2-7b01c18de070,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:53.896929 containerd[1546]: time="2024-12-13T01:56:53.896917928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745cd949dc-994lc,Uid:2a968e95-a7f1-4b6c-9c03-61456caad8ed,Namespace:calico-system,Attempt:0,}" Dec 13 01:56:54.125775 systemd[1]: Created slice kubepods-besteffort-pod45fb1911_1ccb_4174_8fae_ff2967d97276.slice - libcontainer container kubepods-besteffort-pod45fb1911_1ccb_4174_8fae_ff2967d97276.slice. Dec 13 01:56:54.128105 containerd[1546]: time="2024-12-13T01:56:54.127817837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pdngg,Uid:45fb1911-1ccb-4174-8fae-ff2967d97276,Namespace:calico-system,Attempt:0,}" Dec 13 01:56:54.235605 containerd[1546]: time="2024-12-13T01:56:54.235563170Z" level=error msg="Failed to destroy network for sandbox \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.238317 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823-shm.mount: Deactivated successfully. 
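Each VerifyControllerAttachedVolume entry above carries a UniqueName of the form <plugin>/<podUID>-<volumeName>. A hypothetical helper showing that composition, inferred from these log lines rather than from kubelet internals:

package main

import "fmt"

// uniqueVolumeName mirrors the UniqueName fields logged above for
// pod-scoped volumes: "<plugin>/<podUID>-<volumeName>".
func uniqueVolumeName(plugin, podUID, volumeName string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volumeName)
}

func main() {
	fmt.Println(uniqueVolumeName("kubernetes.io/configmap",
		"5b46a7a9-9e2f-4592-85a2-7b01c18de070", "config-volume"))
	// kubernetes.io/configmap/5b46a7a9-9e2f-4592-85a2-7b01c18de070-config-volume
}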
Dec 13 01:56:54.261426 containerd[1546]: time="2024-12-13T01:56:54.261373942Z" level=error msg="encountered an error cleaning up failed sandbox \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.261780 containerd[1546]: time="2024-12-13T01:56:54.261762822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-httmm,Uid:5b46a7a9-9e2f-4592-85a2-7b01c18de070,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.267318 containerd[1546]: time="2024-12-13T01:56:54.267292302Z" level=error msg="Failed to destroy network for sandbox \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.278640 containerd[1546]: time="2024-12-13T01:56:54.278608757Z" level=error msg="encountered an error cleaning up failed sandbox \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.279678 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9-shm.mount: Deactivated successfully. 
Dec 13 01:56:54.287357 containerd[1546]: time="2024-12-13T01:56:54.280974694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745cd949dc-994lc,Uid:2a968e95-a7f1-4b6c-9c03-61456caad8ed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.287357 containerd[1546]: time="2024-12-13T01:56:54.281377728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:56:54.287357 containerd[1546]: time="2024-12-13T01:56:54.284312289Z" level=error msg="Failed to destroy network for sandbox \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.287357 containerd[1546]: time="2024-12-13T01:56:54.285170495Z" level=error msg="encountered an error cleaning up failed sandbox \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.287357 containerd[1546]: time="2024-12-13T01:56:54.285198212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b9d6d844-bx8kz,Uid:ad033622-c6e5-4ed2-aa0b-b2adc8bb3378,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.287357 containerd[1546]: time="2024-12-13T01:56:54.286815675Z" level=error msg="Failed to destroy network for sandbox \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.287357 containerd[1546]: time="2024-12-13T01:56:54.287001416Z" level=error msg="encountered an error cleaning up failed sandbox \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.287357 containerd[1546]: time="2024-12-13T01:56:54.287027470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xpjth,Uid:a6e8c90c-6e1c-4e5f-a197-06bf87bcca01,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.294626 containerd[1546]: 
time="2024-12-13T01:56:54.288710583Z" level=error msg="Failed to destroy network for sandbox \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.294626 containerd[1546]: time="2024-12-13T01:56:54.288843991Z" level=error msg="Failed to destroy network for sandbox \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.294626 containerd[1546]: time="2024-12-13T01:56:54.288900533Z" level=error msg="encountered an error cleaning up failed sandbox \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.294626 containerd[1546]: time="2024-12-13T01:56:54.288924935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pdngg,Uid:45fb1911-1ccb-4174-8fae-ff2967d97276,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.294626 containerd[1546]: time="2024-12-13T01:56:54.289020682Z" level=error msg="encountered an error cleaning up failed sandbox \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.294626 containerd[1546]: time="2024-12-13T01:56:54.289052251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b9d6d844-snjl9,Uid:f3155228-6a82-4b05-aa2f-1efe3f581565,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.311547 kubelet[2805]: E1213 01:56:54.283655 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.312295 kubelet[2805]: E1213 01:56:54.303789 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.312295 kubelet[2805]: E1213 01:56:54.311533 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-httmm" Dec 13 01:56:54.312295 kubelet[2805]: E1213 01:56:54.311998 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54b9d6d844-snjl9" Dec 13 01:56:54.312295 kubelet[2805]: E1213 01:56:54.312002 2805 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-httmm" Dec 13 01:56:54.312486 kubelet[2805]: E1213 01:56:54.312013 2805 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54b9d6d844-snjl9" Dec 13 01:56:54.312486 kubelet[2805]: E1213 01:56:54.312036 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-httmm_kube-system(5b46a7a9-9e2f-4592-85a2-7b01c18de070)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-httmm_kube-system(5b46a7a9-9e2f-4592-85a2-7b01c18de070)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-httmm" podUID="5b46a7a9-9e2f-4592-85a2-7b01c18de070" Dec 13 01:56:54.312486 kubelet[2805]: E1213 01:56:54.312044 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54b9d6d844-snjl9_calico-apiserver(f3155228-6a82-4b05-aa2f-1efe3f581565)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54b9d6d844-snjl9_calico-apiserver(f3155228-6a82-4b05-aa2f-1efe3f581565)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54b9d6d844-snjl9" podUID="f3155228-6a82-4b05-aa2f-1efe3f581565" Dec 13 01:56:54.312618 kubelet[2805]: E1213 01:56:54.312086 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.312618 kubelet[2805]: E1213 01:56:54.312100 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54b9d6d844-bx8kz" Dec 13 01:56:54.312618 kubelet[2805]: E1213 01:56:54.312109 2805 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54b9d6d844-bx8kz" Dec 13 01:56:54.312682 kubelet[2805]: E1213 01:56:54.312132 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54b9d6d844-bx8kz_calico-apiserver(ad033622-c6e5-4ed2-aa0b-b2adc8bb3378)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54b9d6d844-bx8kz_calico-apiserver(ad033622-c6e5-4ed2-aa0b-b2adc8bb3378)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54b9d6d844-bx8kz" podUID="ad033622-c6e5-4ed2-aa0b-b2adc8bb3378" Dec 13 01:56:54.312682 kubelet[2805]: E1213 01:56:54.312153 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.312682 kubelet[2805]: E1213 01:56:54.312163 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xpjth" Dec 13 01:56:54.312875 kubelet[2805]: E1213 01:56:54.312170 2805 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xpjth" Dec 13 01:56:54.312875 kubelet[2805]: E1213 01:56:54.312183 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xpjth_kube-system(a6e8c90c-6e1c-4e5f-a197-06bf87bcca01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xpjth_kube-system(a6e8c90c-6e1c-4e5f-a197-06bf87bcca01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xpjth" podUID="a6e8c90c-6e1c-4e5f-a197-06bf87bcca01" Dec 13 01:56:54.312875 kubelet[2805]: E1213 01:56:54.312196 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.312984 kubelet[2805]: E1213 01:56:54.312206 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pdngg" Dec 13 01:56:54.312984 kubelet[2805]: E1213 01:56:54.281167 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:54.312984 kubelet[2805]: E1213 01:56:54.312227 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-745cd949dc-994lc" Dec 13 01:56:54.312984 kubelet[2805]: E1213 01:56:54.312239 2805 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-745cd949dc-994lc" Dec 13 01:56:54.313110 kubelet[2805]: E1213 01:56:54.312256 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-745cd949dc-994lc_calico-system(2a968e95-a7f1-4b6c-9c03-61456caad8ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-745cd949dc-994lc_calico-system(2a968e95-a7f1-4b6c-9c03-61456caad8ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-745cd949dc-994lc" podUID="2a968e95-a7f1-4b6c-9c03-61456caad8ed" Dec 13 01:56:54.313110 kubelet[2805]: E1213 01:56:54.312213 2805 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pdngg" Dec 13 01:56:54.313110 kubelet[2805]: E1213 01:56:54.312277 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pdngg_calico-system(45fb1911-1ccb-4174-8fae-ff2967d97276)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pdngg_calico-system(45fb1911-1ccb-4174-8fae-ff2967d97276)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pdngg" podUID="45fb1911-1ccb-4174-8fae-ff2967d97276" Dec 13 01:56:55.149308 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58-shm.mount: Deactivated successfully. Dec 13 01:56:55.149693 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8-shm.mount: Deactivated successfully. Dec 13 01:56:55.149739 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf-shm.mount: Deactivated successfully. Dec 13 01:56:55.149789 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17-shm.mount: Deactivated successfully. 
Dec 13 01:56:55.231143 kubelet[2805]: I1213 01:56:55.230843 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Dec 13 01:56:55.233986 kubelet[2805]: I1213 01:56:55.231820 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Dec 13 01:56:55.265858 kubelet[2805]: I1213 01:56:55.265578 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Dec 13 01:56:55.281551 kubelet[2805]: I1213 01:56:55.281534 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Dec 13 01:56:55.288714 containerd[1546]: time="2024-12-13T01:56:55.288147155Z" level=info msg="StopPodSandbox for \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\"" Dec 13 01:56:55.289317 containerd[1546]: time="2024-12-13T01:56:55.289119404Z" level=info msg="StopPodSandbox for \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\"" Dec 13 01:56:55.289584 containerd[1546]: time="2024-12-13T01:56:55.289339096Z" level=info msg="Ensure that sandbox 52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17 in task-service has been cleanup successfully" Dec 13 01:56:55.289584 containerd[1546]: time="2024-12-13T01:56:55.289380938Z" level=info msg="StopPodSandbox for \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\"" Dec 13 01:56:55.289584 containerd[1546]: time="2024-12-13T01:56:55.289458278Z" level=info msg="Ensure that sandbox 4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9 in task-service has been cleanup successfully" Dec 13 01:56:55.290218 containerd[1546]: time="2024-12-13T01:56:55.289344930Z" level=info msg="Ensure that sandbox 34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823 in task-service has been cleanup successfully" Dec 13 01:56:55.290313 containerd[1546]: time="2024-12-13T01:56:55.290302248Z" level=info msg="StopPodSandbox for \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\"" Dec 13 01:56:55.291962 containerd[1546]: time="2024-12-13T01:56:55.291943694Z" level=info msg="Ensure that sandbox 95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58 in task-service has been cleanup successfully" Dec 13 01:56:55.298623 kubelet[2805]: I1213 01:56:55.298604 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Dec 13 01:56:55.301560 containerd[1546]: time="2024-12-13T01:56:55.301459645Z" level=info msg="StopPodSandbox for \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\"" Dec 13 01:56:55.302072 containerd[1546]: time="2024-12-13T01:56:55.302017947Z" level=info msg="Ensure that sandbox 936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8 in task-service has been cleanup successfully" Dec 13 01:56:55.303766 kubelet[2805]: I1213 01:56:55.303029 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Dec 13 01:56:55.303851 containerd[1546]: time="2024-12-13T01:56:55.303415892Z" level=info msg="StopPodSandbox for \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\"" Dec 13 01:56:55.303851 
containerd[1546]: time="2024-12-13T01:56:55.303520969Z" level=info msg="Ensure that sandbox 55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf in task-service has been cleanup successfully" Dec 13 01:56:55.352408 containerd[1546]: time="2024-12-13T01:56:55.352365974Z" level=error msg="StopPodSandbox for \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\" failed" error="failed to destroy network for sandbox \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:55.352648 kubelet[2805]: E1213 01:56:55.352623 2805 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Dec 13 01:56:55.353568 containerd[1546]: time="2024-12-13T01:56:55.353510346Z" level=error msg="StopPodSandbox for \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\" failed" error="failed to destroy network for sandbox \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:55.353603 kubelet[2805]: E1213 01:56:55.353586 2805 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Dec 13 01:56:55.354055 containerd[1546]: time="2024-12-13T01:56:55.353957907Z" level=error msg="StopPodSandbox for \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\" failed" error="failed to destroy network for sandbox \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:55.359881 containerd[1546]: time="2024-12-13T01:56:55.359861364Z" level=error msg="StopPodSandbox for \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\" failed" error="failed to destroy network for sandbox \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:55.359937 containerd[1546]: time="2024-12-13T01:56:55.359924980Z" level=error msg="StopPodSandbox for \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\" failed" error="failed to destroy network for sandbox \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:55.361183 kubelet[2805]: E1213 01:56:55.352909 2805 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8"} Dec 13 01:56:55.361183 kubelet[2805]: E1213 01:56:55.361077 2805 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6e8c90c-6e1c-4e5f-a197-06bf87bcca01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:56:55.361183 kubelet[2805]: E1213 01:56:55.361077 2805 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Dec 13 01:56:55.361183 kubelet[2805]: E1213 01:56:55.361100 2805 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17"} Dec 13 01:56:55.361183 kubelet[2805]: E1213 01:56:55.361105 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a6e8c90c-6e1c-4e5f-a197-06bf87bcca01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xpjth" podUID="a6e8c90c-6e1c-4e5f-a197-06bf87bcca01" Dec 13 01:56:55.361331 kubelet[2805]: E1213 01:56:55.361116 2805 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad033622-c6e5-4ed2-aa0b-b2adc8bb3378\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:56:55.361331 kubelet[2805]: E1213 01:56:55.353604 2805 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58"} Dec 13 01:56:55.361331 kubelet[2805]: E1213 01:56:55.361131 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad033622-c6e5-4ed2-aa0b-b2adc8bb3378\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54b9d6d844-bx8kz" podUID="ad033622-c6e5-4ed2-aa0b-b2adc8bb3378" Dec 13 01:56:55.361331 kubelet[2805]: E1213 01:56:55.361133 2805 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"45fb1911-1ccb-4174-8fae-ff2967d97276\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:56:55.361460 kubelet[2805]: E1213 01:56:55.361148 2805 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Dec 13 01:56:55.361460 kubelet[2805]: E1213 01:56:55.361160 2805 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9"} Dec 13 01:56:55.361460 kubelet[2805]: E1213 01:56:55.361160 2805 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Dec 13 01:56:55.361460 kubelet[2805]: E1213 01:56:55.361170 2805 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823"} Dec 13 01:56:55.361460 kubelet[2805]: E1213 01:56:55.361148 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"45fb1911-1ccb-4174-8fae-ff2967d97276\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pdngg" podUID="45fb1911-1ccb-4174-8fae-ff2967d97276" Dec 13 01:56:55.361555 kubelet[2805]: E1213 01:56:55.361184 2805 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5b46a7a9-9e2f-4592-85a2-7b01c18de070\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Dec 13 01:56:55.361555 kubelet[2805]: E1213 01:56:55.361194 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5b46a7a9-9e2f-4592-85a2-7b01c18de070\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-httmm" podUID="5b46a7a9-9e2f-4592-85a2-7b01c18de070" Dec 13 01:56:55.361555 kubelet[2805]: E1213 01:56:55.361170 2805 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2a968e95-a7f1-4b6c-9c03-61456caad8ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:56:55.361640 kubelet[2805]: E1213 01:56:55.361208 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2a968e95-a7f1-4b6c-9c03-61456caad8ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-745cd949dc-994lc" podUID="2a968e95-a7f1-4b6c-9c03-61456caad8ed" Dec 13 01:56:55.365616 containerd[1546]: time="2024-12-13T01:56:55.365596165Z" level=error msg="StopPodSandbox for \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\" failed" error="failed to destroy network for sandbox \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:55.365699 kubelet[2805]: E1213 01:56:55.365682 2805 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Dec 13 01:56:55.365727 kubelet[2805]: E1213 01:56:55.365702 2805 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf"} Dec 13 01:56:55.365746 kubelet[2805]: E1213 01:56:55.365719 2805 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f3155228-6a82-4b05-aa2f-1efe3f581565\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:56:55.365782 kubelet[2805]: E1213 01:56:55.365747 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f3155228-6a82-4b05-aa2f-1efe3f581565\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54b9d6d844-snjl9" podUID="f3155228-6a82-4b05-aa2f-1efe3f581565" Dec 13 01:56:59.490233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2658651964.mount: Deactivated successfully. Dec 13 01:56:59.584631 containerd[1546]: time="2024-12-13T01:56:59.581281069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:56:59.585688 containerd[1546]: time="2024-12-13T01:56:59.579264050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:59.586140 containerd[1546]: time="2024-12-13T01:56:59.586125030Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:59.586548 containerd[1546]: time="2024-12-13T01:56:59.586532049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:59.586979 containerd[1546]: time="2024-12-13T01:56:59.586953900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.303792984s" Dec 13 01:56:59.587027 containerd[1546]: time="2024-12-13T01:56:59.586980678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:56:59.618128 containerd[1546]: time="2024-12-13T01:56:59.618094485Z" level=info msg="CreateContainer within sandbox \"25376105af5301b70e33c7985d65e46f6d3f63704f528b674b8cd2d9c1011e58\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:56:59.721586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1182698476.mount: Deactivated successfully. 
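Every KillPodSandbox failure in the burst above shares one root cause: the Calico CNI plugin reads the node's name from /var/lib/calico/nodename, a file the calico/node container writes when it starts, and that container's image was still being pulled (the ghcr.io/flatcar/calico/node:v3.29.1 pull completes just above, after roughly 5.3 s). Until the file exists, both CNI ADD and DEL fail with the exact stat error the kubelet keeps repeating. A minimal stdlib-only sketch of that check, illustrative rather than Calico's actual source:

```go
// nodename_check.go — sketch of the check behind
// "stat /var/lib/calico/nodename: no such file or directory".
package main

import (
	"fmt"
	"os"
	"strings"
)

// Path quoted verbatim in the log; calico/node writes it on startup.
const nodenameFile = "/var/lib/calico/nodename"

func detectNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Approximates the wording kubelet surfaces above.
		return "", fmt.Errorf("%w: check that the calico/node container "+
			"is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}
```

Once the calico-node container starts in the lines that follow, the same teardown calls begin to succeed.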
Dec 13 01:56:59.771181 containerd[1546]: time="2024-12-13T01:56:59.771071088Z" level=info msg="CreateContainer within sandbox \"25376105af5301b70e33c7985d65e46f6d3f63704f528b674b8cd2d9c1011e58\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9a74cdda4831310496da2884b1555e4d835686129b7fdfee31c8f108e55c5acd\"" Dec 13 01:56:59.789368 containerd[1546]: time="2024-12-13T01:56:59.789346441Z" level=info msg="StartContainer for \"9a74cdda4831310496da2884b1555e4d835686129b7fdfee31c8f108e55c5acd\"" Dec 13 01:56:59.946500 systemd[1]: Started cri-containerd-9a74cdda4831310496da2884b1555e4d835686129b7fdfee31c8f108e55c5acd.scope - libcontainer container 9a74cdda4831310496da2884b1555e4d835686129b7fdfee31c8f108e55c5acd. Dec 13 01:56:59.968465 containerd[1546]: time="2024-12-13T01:56:59.968421004Z" level=info msg="StartContainer for \"9a74cdda4831310496da2884b1555e4d835686129b7fdfee31c8f108e55c5acd\" returns successfully" Dec 13 01:57:00.431689 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:57:00.436937 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 01:57:01.369557 systemd[1]: run-containerd-runc-k8s.io-9a74cdda4831310496da2884b1555e4d835686129b7fdfee31c8f108e55c5acd-runc.XcPTOB.mount: Deactivated successfully. Dec 13 01:57:02.137407 kernel: bpftool[4051]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:57:02.306004 systemd-networkd[1448]: vxlan.calico: Link UP Dec 13 01:57:02.306008 systemd-networkd[1448]: vxlan.calico: Gained carrier Dec 13 01:57:03.983509 systemd-networkd[1448]: vxlan.calico: Gained IPv6LL Dec 13 01:57:06.121582 containerd[1546]: time="2024-12-13T01:57:06.121542604Z" level=info msg="StopPodSandbox for \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\"" Dec 13 01:57:06.182056 kubelet[2805]: I1213 01:57:06.180809 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6f9sx" podStartSLOduration=7.473914431 podStartE2EDuration="23.174702602s" podCreationTimestamp="2024-12-13 01:56:43 +0000 UTC" firstStartedPulling="2024-12-13 01:56:43.888735666 +0000 UTC m=+22.858058238" lastFinishedPulling="2024-12-13 01:56:59.589523837 +0000 UTC m=+38.558846409" observedRunningTime="2024-12-13 01:57:00.461369006 +0000 UTC m=+39.430691588" watchObservedRunningTime="2024-12-13 01:57:06.174702602 +0000 UTC m=+45.144025178" Dec 13 01:57:06.507974 containerd[1546]: 2024-12-13 01:57:06.174 [INFO][4156] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Dec 13 01:57:06.507974 containerd[1546]: 2024-12-13 01:57:06.174 [INFO][4156] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" iface="eth0" netns="/var/run/netns/cni-fd923151-1f4d-335a-462d-2398bc4ef3ee" Dec 13 01:57:06.507974 containerd[1546]: 2024-12-13 01:57:06.174 [INFO][4156] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" iface="eth0" netns="/var/run/netns/cni-fd923151-1f4d-335a-462d-2398bc4ef3ee" Dec 13 01:57:06.507974 containerd[1546]: 2024-12-13 01:57:06.177 [INFO][4156] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" iface="eth0" netns="/var/run/netns/cni-fd923151-1f4d-335a-462d-2398bc4ef3ee" Dec 13 01:57:06.507974 containerd[1546]: 2024-12-13 01:57:06.177 [INFO][4156] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Dec 13 01:57:06.507974 containerd[1546]: 2024-12-13 01:57:06.177 [INFO][4156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Dec 13 01:57:06.507974 containerd[1546]: 2024-12-13 01:57:06.494 [INFO][4162] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" HandleID="k8s-pod-network.52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Workload="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:06.507974 containerd[1546]: 2024-12-13 01:57:06.497 [INFO][4162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:06.507974 containerd[1546]: 2024-12-13 01:57:06.497 [INFO][4162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:06.507974 containerd[1546]: 2024-12-13 01:57:06.505 [WARNING][4162] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" HandleID="k8s-pod-network.52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Workload="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:06.507974 containerd[1546]: 2024-12-13 01:57:06.505 [INFO][4162] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" HandleID="k8s-pod-network.52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Workload="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:06.507974 containerd[1546]: 2024-12-13 01:57:06.505 [INFO][4162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:06.507974 containerd[1546]: 2024-12-13 01:57:06.506 [INFO][4156] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Dec 13 01:57:06.509793 systemd[1]: run-netns-cni\x2dfd923151\x2d1f4d\x2d335a\x2d462d\x2d2398bc4ef3ee.mount: Deactivated successfully. 
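The teardown above also shows why the DEL path is deliberately idempotent: the IPAM plugin takes the host-wide lock, finds no allocation behind the handle, logs "Asked to release address but it doesn't exist. Ignoring", and still reports success, so a sandbox can be torn down repeatedly without error. A toy in-memory version of that pattern (hypothetical store, not Calico's datastore-backed IPAM):

```go
// Idempotent release-by-handle under a host-wide lock, as the log shows.
package main

import (
	"fmt"
	"sync"
)

type ipam struct {
	mu     sync.Mutex        // stands in for the "host-wide IPAM lock"
	byHand map[string]string // handle -> assigned IP
}

func (p *ipam) releaseByHandle(handle string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	ip, ok := p.byHand[handle]
	if !ok {
		// DEL must be safe to repeat; warn and carry on.
		fmt.Printf("[WARNING] no allocation for handle %q, ignoring\n", handle)
		return
	}
	delete(p.byHand, handle)
	fmt.Printf("released %s (handle %s)\n", ip, handle)
}

func main() {
	// Handle shortened for illustration.
	p := &ipam{byHand: map[string]string{"k8s-pod-network.52bc80f8": "192.168.88.129"}}
	p.releaseByHandle("k8s-pod-network.52bc80f8") // releases the address
	p.releaseByHandle("k8s-pod-network.52bc80f8") // second DEL: warned, ignored
}
```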
Dec 13 01:57:06.512881 containerd[1546]: time="2024-12-13T01:57:06.512862354Z" level=info msg="TearDown network for sandbox \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\" successfully" Dec 13 01:57:06.512936 containerd[1546]: time="2024-12-13T01:57:06.512928175Z" level=info msg="StopPodSandbox for \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\" returns successfully" Dec 13 01:57:06.518848 containerd[1546]: time="2024-12-13T01:57:06.518836319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b9d6d844-bx8kz,Uid:ad033622-c6e5-4ed2-aa0b-b2adc8bb3378,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:57:06.602137 systemd-networkd[1448]: cali95fc02eb710: Link UP Dec 13 01:57:06.602741 systemd-networkd[1448]: cali95fc02eb710: Gained carrier Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.551 [INFO][4169] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0 calico-apiserver-54b9d6d844- calico-apiserver ad033622-c6e5-4ed2-aa0b-b2adc8bb3378 743 0 2024-12-13 01:56:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54b9d6d844 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54b9d6d844-bx8kz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali95fc02eb710 [] []}} ContainerID="db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-bx8kz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-" Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.551 [INFO][4169] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-bx8kz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.574 [INFO][4180] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" HandleID="k8s-pod-network.db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" Workload="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.581 [INFO][4180] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" HandleID="k8s-pod-network.db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" Workload="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54b9d6d844-bx8kz", "timestamp":"2024-12-13 01:57:06.574011753 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.581 [INFO][4180] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
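The pod_startup_latency_tracker line above is internally consistent: podStartE2EDuration (23.174702602s) is observedRunningTime minus podCreationTimestamp (01:57:06.174702602 − 01:56:43), and podStartSLOduration (7.473914431s) is that E2E figure minus the image-pull window (lastFinishedPulling − firstStartedPulling = 15.700788171s), i.e. the SLO metric excludes pull time. A quick stdlib check of the arithmetic, using the timestamps exactly as logged:

```go
// Recomputing kubelet's startup-latency figures from the logged timestamps.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches Go's default time.Time formatting used in the log.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-12-13 01:56:43 +0000 UTC")
	running := mustParse("2024-12-13 01:57:06.174702602 +0000 UTC")
	pullStart := mustParse("2024-12-13 01:56:43.888735666 +0000 UTC")
	pullEnd := mustParse("2024-12-13 01:56:59.589523837 +0000 UTC")

	e2e := running.Sub(created)         // 23.174702602s, as logged
	slo := e2e - pullEnd.Sub(pullStart) // 7.473914431s, as logged
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```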
Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.581 [INFO][4180] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.581 [INFO][4180] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.582 [INFO][4180] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" host="localhost" Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.587 [INFO][4180] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.589 [INFO][4180] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.590 [INFO][4180] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.591 [INFO][4180] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.591 [INFO][4180] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" host="localhost" Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.592 [INFO][4180] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.594 [INFO][4180] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" host="localhost" Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.597 [INFO][4180] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" host="localhost" Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.597 [INFO][4180] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" host="localhost" Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.597 [INFO][4180] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
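The assignment flow above is block-affine: the host "localhost" holds an affinity for the block 192.168.88.128/26, loads it, and claims the next free address from it, 192.168.88.129 here; the two sandboxes set up below receive .130 and .131 the same way. A toy version of next-free-in-block assignment (not Calico's datastore-backed implementation, which also handles block reservation and the broadcast address):

```go
// Sequential assignment from an affine /26 block, as the IPAM log describes.
package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	prefix netip.Prefix          // e.g. 192.168.88.128/26
	used   map[netip.Addr]string // addr -> handle
}

func (b *block) assign(handle string) (netip.Addr, bool) {
	// Skip the network address itself; walk the block for the first free IP.
	for a := b.prefix.Addr().Next(); b.prefix.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	b := &block{
		prefix: netip.MustParsePrefix("192.168.88.128/26"),
		used:   map[netip.Addr]string{},
	}
	for _, h := range []string{"bx8kz", "pdngg", "snjl9"} {
		ip, _ := b.assign(h)
		fmt.Printf("%s -> %s\n", h, ip) // .129, .130, .131 in order
	}
}
```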
Dec 13 01:57:06.612430 containerd[1546]: 2024-12-13 01:57:06.597 [INFO][4180] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" HandleID="k8s-pod-network.db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" Workload="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:06.612812 containerd[1546]: 2024-12-13 01:57:06.599 [INFO][4169] cni-plugin/k8s.go 386: Populated endpoint ContainerID="db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-bx8kz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0", GenerateName:"calico-apiserver-54b9d6d844-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad033622-c6e5-4ed2-aa0b-b2adc8bb3378", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54b9d6d844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54b9d6d844-bx8kz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95fc02eb710", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:06.612812 containerd[1546]: 2024-12-13 01:57:06.599 [INFO][4169] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-bx8kz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:06.612812 containerd[1546]: 2024-12-13 01:57:06.599 [INFO][4169] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95fc02eb710 ContainerID="db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-bx8kz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:06.612812 containerd[1546]: 2024-12-13 01:57:06.603 [INFO][4169] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-bx8kz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:06.612812 containerd[1546]: 2024-12-13 01:57:06.603 [INFO][4169] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-bx8kz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0", GenerateName:"calico-apiserver-54b9d6d844-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad033622-c6e5-4ed2-aa0b-b2adc8bb3378", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54b9d6d844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb", Pod:"calico-apiserver-54b9d6d844-bx8kz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95fc02eb710", MAC:"f2:20:2c:cb:21:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:06.612812 containerd[1546]: 2024-12-13 01:57:06.608 [INFO][4169] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-bx8kz" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:06.627896 containerd[1546]: time="2024-12-13T01:57:06.627782461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:06.628241 containerd[1546]: time="2024-12-13T01:57:06.628217736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:06.628404 containerd[1546]: time="2024-12-13T01:57:06.628337365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:06.629142 containerd[1546]: time="2024-12-13T01:57:06.629108481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:06.642471 systemd[1]: Started cri-containerd-db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb.scope - libcontainer container db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb. 
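By this point systemd-networkd has reported carrier on both the vxlan.calico overlay device and the pod's host-side veth cali95fc02eb710. A stdlib-only way to spot-check such links from Go; the interface names are taken from the log, everything else is illustrative:

```go
// Spot-check the overlay and veth devices systemd-networkd reported above.
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	for _, name := range []string{"vxlan.calico", "cali95fc02eb710"} {
		iface, err := net.InterfaceByName(name)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: mtu=%d up=%v\n",
			iface.Name, iface.MTU, iface.Flags&net.FlagUp != 0)
		if addrs, err := iface.Addrs(); err == nil {
			for _, a := range addrs {
				// The fe80:: entry appears once the log says "Gained IPv6LL".
				fmt.Println("  addr:", a)
			}
		}
	}
}
```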
Dec 13 01:57:06.651685 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:57:06.675946 containerd[1546]: time="2024-12-13T01:57:06.675702377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b9d6d844-bx8kz,Uid:ad033622-c6e5-4ed2-aa0b-b2adc8bb3378,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb\"" Dec 13 01:57:06.681323 containerd[1546]: time="2024-12-13T01:57:06.680466589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:57:07.122909 containerd[1546]: time="2024-12-13T01:57:07.122671491Z" level=info msg="StopPodSandbox for \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\"" Dec 13 01:57:07.122909 containerd[1546]: time="2024-12-13T01:57:07.122895549Z" level=info msg="StopPodSandbox for \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\"" Dec 13 01:57:07.196722 containerd[1546]: 2024-12-13 01:57:07.163 [INFO][4264] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Dec 13 01:57:07.196722 containerd[1546]: 2024-12-13 01:57:07.163 [INFO][4264] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" iface="eth0" netns="/var/run/netns/cni-6eeea60b-eb56-4caa-4525-0c9c560e7625" Dec 13 01:57:07.196722 containerd[1546]: 2024-12-13 01:57:07.164 [INFO][4264] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" iface="eth0" netns="/var/run/netns/cni-6eeea60b-eb56-4caa-4525-0c9c560e7625" Dec 13 01:57:07.196722 containerd[1546]: 2024-12-13 01:57:07.164 [INFO][4264] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" iface="eth0" netns="/var/run/netns/cni-6eeea60b-eb56-4caa-4525-0c9c560e7625" Dec 13 01:57:07.196722 containerd[1546]: 2024-12-13 01:57:07.164 [INFO][4264] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Dec 13 01:57:07.196722 containerd[1546]: 2024-12-13 01:57:07.164 [INFO][4264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Dec 13 01:57:07.196722 containerd[1546]: 2024-12-13 01:57:07.188 [INFO][4280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" HandleID="k8s-pod-network.95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Workload="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:07.196722 containerd[1546]: 2024-12-13 01:57:07.188 [INFO][4280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:07.196722 containerd[1546]: 2024-12-13 01:57:07.188 [INFO][4280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:07.196722 containerd[1546]: 2024-12-13 01:57:07.192 [WARNING][4280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" HandleID="k8s-pod-network.95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Workload="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:07.196722 containerd[1546]: 2024-12-13 01:57:07.192 [INFO][4280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" HandleID="k8s-pod-network.95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Workload="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:07.196722 containerd[1546]: 2024-12-13 01:57:07.193 [INFO][4280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:07.196722 containerd[1546]: 2024-12-13 01:57:07.195 [INFO][4264] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Dec 13 01:57:07.199708 containerd[1546]: time="2024-12-13T01:57:07.196821352Z" level=info msg="TearDown network for sandbox \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\" successfully" Dec 13 01:57:07.199708 containerd[1546]: time="2024-12-13T01:57:07.196868561Z" level=info msg="StopPodSandbox for \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\" returns successfully" Dec 13 01:57:07.199708 containerd[1546]: time="2024-12-13T01:57:07.198171313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pdngg,Uid:45fb1911-1ccb-4174-8fae-ff2967d97276,Namespace:calico-system,Attempt:1,}" Dec 13 01:57:07.205204 containerd[1546]: 2024-12-13 01:57:07.160 [INFO][4271] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Dec 13 01:57:07.205204 containerd[1546]: 2024-12-13 01:57:07.163 [INFO][4271] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" iface="eth0" netns="/var/run/netns/cni-837a2117-4c90-b009-8dcb-dcbf10201968" Dec 13 01:57:07.205204 containerd[1546]: 2024-12-13 01:57:07.163 [INFO][4271] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" iface="eth0" netns="/var/run/netns/cni-837a2117-4c90-b009-8dcb-dcbf10201968" Dec 13 01:57:07.205204 containerd[1546]: 2024-12-13 01:57:07.164 [INFO][4271] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" iface="eth0" netns="/var/run/netns/cni-837a2117-4c90-b009-8dcb-dcbf10201968" Dec 13 01:57:07.205204 containerd[1546]: 2024-12-13 01:57:07.164 [INFO][4271] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Dec 13 01:57:07.205204 containerd[1546]: 2024-12-13 01:57:07.164 [INFO][4271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Dec 13 01:57:07.205204 containerd[1546]: 2024-12-13 01:57:07.188 [INFO][4279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" HandleID="k8s-pod-network.55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Workload="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:07.205204 containerd[1546]: 2024-12-13 01:57:07.189 [INFO][4279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:07.205204 containerd[1546]: 2024-12-13 01:57:07.193 [INFO][4279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:07.205204 containerd[1546]: 2024-12-13 01:57:07.198 [WARNING][4279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" HandleID="k8s-pod-network.55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Workload="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:07.205204 containerd[1546]: 2024-12-13 01:57:07.198 [INFO][4279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" HandleID="k8s-pod-network.55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Workload="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:07.205204 containerd[1546]: 2024-12-13 01:57:07.202 [INFO][4279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:07.205204 containerd[1546]: 2024-12-13 01:57:07.204 [INFO][4271] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Dec 13 01:57:07.205560 containerd[1546]: time="2024-12-13T01:57:07.205425138Z" level=info msg="TearDown network for sandbox \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\" successfully" Dec 13 01:57:07.205560 containerd[1546]: time="2024-12-13T01:57:07.205439020Z" level=info msg="StopPodSandbox for \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\" returns successfully" Dec 13 01:57:07.209838 containerd[1546]: time="2024-12-13T01:57:07.209614529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b9d6d844-snjl9,Uid:f3155228-6a82-4b05-aa2f-1efe3f581565,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:57:07.291962 systemd-networkd[1448]: cali8d56795af7f: Link UP Dec 13 01:57:07.292892 systemd-networkd[1448]: cali8d56795af7f: Gained carrier Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.235 [INFO][4291] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pdngg-eth0 csi-node-driver- calico-system 45fb1911-1ccb-4174-8fae-ff2967d97276 754 0 2024-12-13 01:56:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pdngg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8d56795af7f [] []}} ContainerID="94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" Namespace="calico-system" Pod="csi-node-driver-pdngg" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdngg-" Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.235 [INFO][4291] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" Namespace="calico-system" Pod="csi-node-driver-pdngg" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.260 [INFO][4314] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" HandleID="k8s-pod-network.94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" Workload="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.268 [INFO][4314] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" HandleID="k8s-pod-network.94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" Workload="localhost-k8s-csi--node--driver--pdngg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pdngg", "timestamp":"2024-12-13 01:57:07.260862205 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.268 [INFO][4314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.269 [INFO][4314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.269 [INFO][4314] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.270 [INFO][4314] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" host="localhost" Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.273 [INFO][4314] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.277 [INFO][4314] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.278 [INFO][4314] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.279 [INFO][4314] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.279 [INFO][4314] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" host="localhost" Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.280 [INFO][4314] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.284 [INFO][4314] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" host="localhost" Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.288 [INFO][4314] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" host="localhost" Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.288 [INFO][4314] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" host="localhost" Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.288 [INFO][4314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:57:07.306555 containerd[1546]: 2024-12-13 01:57:07.288 [INFO][4314] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" HandleID="k8s-pod-network.94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" Workload="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:07.310081 containerd[1546]: 2024-12-13 01:57:07.289 [INFO][4291] cni-plugin/k8s.go 386: Populated endpoint ContainerID="94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" Namespace="calico-system" Pod="csi-node-driver-pdngg" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdngg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pdngg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45fb1911-1ccb-4174-8fae-ff2967d97276", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pdngg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d56795af7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:07.310081 containerd[1546]: 2024-12-13 01:57:07.289 [INFO][4291] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" Namespace="calico-system" Pod="csi-node-driver-pdngg" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:07.310081 containerd[1546]: 2024-12-13 01:57:07.289 [INFO][4291] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d56795af7f ContainerID="94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" Namespace="calico-system" Pod="csi-node-driver-pdngg" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:07.310081 containerd[1546]: 2024-12-13 01:57:07.292 [INFO][4291] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" Namespace="calico-system" Pod="csi-node-driver-pdngg" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:07.310081 containerd[1546]: 2024-12-13 01:57:07.293 [INFO][4291] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" Namespace="calico-system" Pod="csi-node-driver-pdngg" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdngg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pdngg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45fb1911-1ccb-4174-8fae-ff2967d97276", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e", Pod:"csi-node-driver-pdngg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d56795af7f", MAC:"22:0b:f7:95:e6:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:07.310081 containerd[1546]: 2024-12-13 01:57:07.304 [INFO][4291] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e" Namespace="calico-system" Pod="csi-node-driver-pdngg" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:07.317904 systemd-networkd[1448]: cali1692e8cd9b4: Link UP Dec 13 01:57:07.318507 systemd-networkd[1448]: cali1692e8cd9b4: Gained carrier Dec 13 01:57:07.337811 containerd[1546]: time="2024-12-13T01:57:07.337660773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:07.337811 containerd[1546]: time="2024-12-13T01:57:07.337709044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:07.337811 containerd[1546]: time="2024-12-13T01:57:07.337721436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:07.337986 containerd[1546]: time="2024-12-13T01:57:07.337777178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.248 [INFO][4305] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0 calico-apiserver-54b9d6d844- calico-apiserver f3155228-6a82-4b05-aa2f-1efe3f581565 753 0 2024-12-13 01:56:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54b9d6d844 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54b9d6d844-snjl9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1692e8cd9b4 [] []}} ContainerID="796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-snjl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-" Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.249 [INFO][4305] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-snjl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.274 [INFO][4319] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" HandleID="k8s-pod-network.796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" Workload="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.281 [INFO][4319] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" HandleID="k8s-pod-network.796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" Workload="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318e10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54b9d6d844-snjl9", "timestamp":"2024-12-13 01:57:07.274476657 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.281 [INFO][4319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.288 [INFO][4319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.288 [INFO][4319] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.289 [INFO][4319] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" host="localhost" Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.295 [INFO][4319] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.300 [INFO][4319] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.302 [INFO][4319] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.303 [INFO][4319] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.303 [INFO][4319] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" host="localhost" Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.305 [INFO][4319] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.310 [INFO][4319] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" host="localhost" Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.314 [INFO][4319] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" host="localhost" Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.314 [INFO][4319] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" host="localhost" Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.314 [INFO][4319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
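The Workload strings threaded through these records (e.g. "localhost-k8s-csi--node--driver--pdngg-eth0") follow a recognizable mangling: node name, the literal "-k8s-", the pod name with each "-" doubled so the separators stay unambiguous, then the interface. A tiny helper reproducing the pattern, inferred from these log lines rather than taken from Calico's source:

```go
// Reconstructing the WorkloadEndpoint naming pattern seen in the log.
package main

import (
	"fmt"
	"strings"
)

func workloadEndpointName(node, pod, iface string) string {
	// Doubling "-" in the pod name keeps the node/pod/iface separators parseable.
	return node + "-k8s-" + strings.ReplaceAll(pod, "-", "--") + "-" + iface
}

func main() {
	fmt.Println(workloadEndpointName("localhost", "csi-node-driver-pdngg", "eth0"))
	// -> localhost-k8s-csi--node--driver--pdngg-eth0
	fmt.Println(workloadEndpointName("localhost", "calico-apiserver-54b9d6d844-snjl9", "eth0"))
	// -> localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0
}
```

Both outputs match the HandleID/Workload fields logged above verbatim.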
Dec 13 01:57:07.338028 containerd[1546]: 2024-12-13 01:57:07.314 [INFO][4319] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" HandleID="k8s-pod-network.796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" Workload="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:07.339189 containerd[1546]: 2024-12-13 01:57:07.316 [INFO][4305] cni-plugin/k8s.go 386: Populated endpoint ContainerID="796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-snjl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0", GenerateName:"calico-apiserver-54b9d6d844-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3155228-6a82-4b05-aa2f-1efe3f581565", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54b9d6d844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54b9d6d844-snjl9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1692e8cd9b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:07.339189 containerd[1546]: 2024-12-13 01:57:07.316 [INFO][4305] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-snjl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:07.339189 containerd[1546]: 2024-12-13 01:57:07.316 [INFO][4305] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1692e8cd9b4 ContainerID="796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-snjl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:07.339189 containerd[1546]: 2024-12-13 01:57:07.319 [INFO][4305] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-snjl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:07.339189 containerd[1546]: 2024-12-13 01:57:07.320 [INFO][4305] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-snjl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0", GenerateName:"calico-apiserver-54b9d6d844-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3155228-6a82-4b05-aa2f-1efe3f581565", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54b9d6d844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b", Pod:"calico-apiserver-54b9d6d844-snjl9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1692e8cd9b4", MAC:"ee:66:b2:02:51:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:07.339189 containerd[1546]: 2024-12-13 01:57:07.334 [INFO][4305] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b" Namespace="calico-apiserver" Pod="calico-apiserver-54b9d6d844-snjl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:07.354863 systemd[1]: Started cri-containerd-94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e.scope - libcontainer container 94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e. Dec 13 01:57:07.359515 containerd[1546]: time="2024-12-13T01:57:07.359439894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:07.359515 containerd[1546]: time="2024-12-13T01:57:07.359489073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:07.359515 containerd[1546]: time="2024-12-13T01:57:07.359496956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:07.359719 containerd[1546]: time="2024-12-13T01:57:07.359683873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:07.366251 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:57:07.380510 systemd[1]: Started cri-containerd-796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b.scope - libcontainer container 796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b. 
Dec 13 01:57:07.382036 containerd[1546]: time="2024-12-13T01:57:07.381984403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pdngg,Uid:45fb1911-1ccb-4174-8fae-ff2967d97276,Namespace:calico-system,Attempt:1,} returns sandbox id \"94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e\"" Dec 13 01:57:07.390602 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:57:07.413167 containerd[1546]: time="2024-12-13T01:57:07.413105650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54b9d6d844-snjl9,Uid:f3155228-6a82-4b05-aa2f-1efe3f581565,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b\"" Dec 13 01:57:07.513118 systemd[1]: run-netns-cni\x2d6eeea60b\x2deb56\x2d4caa\x2d4525\x2d0c9c560e7625.mount: Deactivated successfully. Dec 13 01:57:07.513182 systemd[1]: run-netns-cni\x2d837a2117\x2d4c90\x2db009\x2d8dcb\x2ddcbf10201968.mount: Deactivated successfully. Dec 13 01:57:08.271938 systemd-networkd[1448]: cali95fc02eb710: Gained IPv6LL Dec 13 01:57:08.399490 systemd-networkd[1448]: cali8d56795af7f: Gained IPv6LL Dec 13 01:57:08.618379 containerd[1546]: time="2024-12-13T01:57:08.618251420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:57:08.619129 containerd[1546]: time="2024-12-13T01:57:08.619064936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 01:57:08.619812 containerd[1546]: time="2024-12-13T01:57:08.619610014Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:57:08.621599 containerd[1546]: time="2024-12-13T01:57:08.621543608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:57:08.622379 containerd[1546]: time="2024-12-13T01:57:08.622122467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 1.941637643s" Dec 13 01:57:08.622379 containerd[1546]: time="2024-12-13T01:57:08.622146783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:57:08.623256 containerd[1546]: time="2024-12-13T01:57:08.623081372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:57:08.624926 containerd[1546]: time="2024-12-13T01:57:08.624848026Z" level=info msg="CreateContainer within sandbox \"db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:57:08.659095 containerd[1546]: time="2024-12-13T01:57:08.659042280Z" level=info msg="CreateContainer within sandbox \"db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"03673f030055625705645d18e62be47f32f63aa2dfa54a21c5f826442292419e\"" Dec 13 01:57:08.659941 containerd[1546]: time="2024-12-13T01:57:08.659369312Z" level=info msg="StartContainer for \"03673f030055625705645d18e62be47f32f63aa2dfa54a21c5f826442292419e\"" Dec 13 01:57:08.677593 systemd[1]: run-containerd-runc-k8s.io-03673f030055625705645d18e62be47f32f63aa2dfa54a21c5f826442292419e-runc.qhXx7B.mount: Deactivated successfully. Dec 13 01:57:08.688623 systemd[1]: Started cri-containerd-03673f030055625705645d18e62be47f32f63aa2dfa54a21c5f826442292419e.scope - libcontainer container 03673f030055625705645d18e62be47f32f63aa2dfa54a21c5f826442292419e. Dec 13 01:57:08.733135 containerd[1546]: time="2024-12-13T01:57:08.732728282Z" level=info msg="StartContainer for \"03673f030055625705645d18e62be47f32f63aa2dfa54a21c5f826442292419e\" returns successfully" Dec 13 01:57:09.122457 containerd[1546]: time="2024-12-13T01:57:09.122225159Z" level=info msg="StopPodSandbox for \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\"" Dec 13 01:57:09.185529 containerd[1546]: 2024-12-13 01:57:09.151 [INFO][4500] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Dec 13 01:57:09.185529 containerd[1546]: 2024-12-13 01:57:09.151 [INFO][4500] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" iface="eth0" netns="/var/run/netns/cni-3ee6394a-5243-ac4a-d85e-777c676019fa" Dec 13 01:57:09.185529 containerd[1546]: 2024-12-13 01:57:09.152 [INFO][4500] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" iface="eth0" netns="/var/run/netns/cni-3ee6394a-5243-ac4a-d85e-777c676019fa" Dec 13 01:57:09.185529 containerd[1546]: 2024-12-13 01:57:09.152 [INFO][4500] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" iface="eth0" netns="/var/run/netns/cni-3ee6394a-5243-ac4a-d85e-777c676019fa" Dec 13 01:57:09.185529 containerd[1546]: 2024-12-13 01:57:09.152 [INFO][4500] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Dec 13 01:57:09.185529 containerd[1546]: 2024-12-13 01:57:09.152 [INFO][4500] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Dec 13 01:57:09.185529 containerd[1546]: 2024-12-13 01:57:09.167 [INFO][4507] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" HandleID="k8s-pod-network.936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Workload="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:09.185529 containerd[1546]: 2024-12-13 01:57:09.168 [INFO][4507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:09.185529 containerd[1546]: 2024-12-13 01:57:09.168 [INFO][4507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:09.185529 containerd[1546]: 2024-12-13 01:57:09.178 [WARNING][4507] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" HandleID="k8s-pod-network.936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Workload="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:09.185529 containerd[1546]: 2024-12-13 01:57:09.178 [INFO][4507] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" HandleID="k8s-pod-network.936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Workload="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:09.185529 containerd[1546]: 2024-12-13 01:57:09.181 [INFO][4507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:09.185529 containerd[1546]: 2024-12-13 01:57:09.182 [INFO][4500] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Dec 13 01:57:09.191486 containerd[1546]: time="2024-12-13T01:57:09.185639117Z" level=info msg="TearDown network for sandbox \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\" successfully" Dec 13 01:57:09.191486 containerd[1546]: time="2024-12-13T01:57:09.185657056Z" level=info msg="StopPodSandbox for \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\" returns successfully" Dec 13 01:57:09.191486 containerd[1546]: time="2024-12-13T01:57:09.186011575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xpjth,Uid:a6e8c90c-6e1c-4e5f-a197-06bf87bcca01,Namespace:kube-system,Attempt:1,}" Dec 13 01:57:09.187711 systemd[1]: run-netns-cni\x2d3ee6394a\x2d5243\x2dac4a\x2dd85e\x2d777c676019fa.mount: Deactivated successfully. Dec 13 01:57:09.282424 systemd-networkd[1448]: cali9e69d4c9afe: Link UP Dec 13 01:57:09.283039 systemd-networkd[1448]: cali9e69d4c9afe: Gained carrier Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.232 [INFO][4515] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0 coredns-7db6d8ff4d- kube-system a6e8c90c-6e1c-4e5f-a197-06bf87bcca01 770 0 2024-12-13 01:56:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-xpjth eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9e69d4c9afe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xpjth" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xpjth-" Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.232 [INFO][4515] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xpjth" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.259 [INFO][4525] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" HandleID="k8s-pod-network.c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" Workload="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.267 
[INFO][4525] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" HandleID="k8s-pod-network.c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" Workload="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002916e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-xpjth", "timestamp":"2024-12-13 01:57:09.259425779 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.267 [INFO][4525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.267 [INFO][4525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.267 [INFO][4525] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.268 [INFO][4525] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" host="localhost" Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.270 [INFO][4525] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.272 [INFO][4525] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.273 [INFO][4525] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.274 [INFO][4525] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.274 [INFO][4525] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" host="localhost" Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.275 [INFO][4525] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.277 [INFO][4525] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" host="localhost" Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.279 [INFO][4525] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" host="localhost" Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.279 [INFO][4525] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" host="localhost" Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.279 [INFO][4525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
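At this point the block 192.168.88.128/26 has handed out 192.168.88.131 and 192.168.88.132 back to back, which is expected: a /26 holds 64 addresses (192.168.88.128 through 192.168.88.191), so every pod on this node draws from the same block while the host's affinity to it holds. The arithmetic, checked in Go:

```go
// Verify the block size and membership of the addresses seen in the log.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, _ := net.ParseCIDR("192.168.88.128/26")
	ones, bits := block.Mask.Size()
	fmt.Printf("%s holds %d addresses\n", block, 1<<(bits-ones)) // 64

	for _, s := range []string{"192.168.88.131", "192.168.88.132", "192.168.88.192"} {
		fmt.Printf("%-16s in block: %v\n", s, block.Contains(net.ParseIP(s)))
	}
	// .131 and .132 are inside; .192 belongs to the next /26.
}
```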
Dec 13 01:57:09.293209 containerd[1546]: 2024-12-13 01:57:09.279 [INFO][4525] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" HandleID="k8s-pod-network.c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" Workload="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:09.296043 containerd[1546]: 2024-12-13 01:57:09.280 [INFO][4515] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xpjth" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a6e8c90c-6e1c-4e5f-a197-06bf87bcca01", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-xpjth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e69d4c9afe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:09.296043 containerd[1546]: 2024-12-13 01:57:09.280 [INFO][4515] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xpjth" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:09.296043 containerd[1546]: 2024-12-13 01:57:09.280 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e69d4c9afe ContainerID="c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xpjth" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:09.296043 containerd[1546]: 2024-12-13 01:57:09.283 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xpjth" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:09.296043 containerd[1546]: 2024-12-13 01:57:09.283 
[INFO][4515] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xpjth" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a6e8c90c-6e1c-4e5f-a197-06bf87bcca01", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea", Pod:"coredns-7db6d8ff4d-xpjth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e69d4c9afe", MAC:"6a:52:d6:76:0b:3e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:09.296043 containerd[1546]: 2024-12-13 01:57:09.288 [INFO][4515] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xpjth" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:09.295792 systemd-networkd[1448]: cali1692e8cd9b4: Gained IPv6LL Dec 13 01:57:09.310843 containerd[1546]: time="2024-12-13T01:57:09.310756935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:09.310843 containerd[1546]: time="2024-12-13T01:57:09.310829605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:09.310843 containerd[1546]: time="2024-12-13T01:57:09.310841246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:09.311160 containerd[1546]: time="2024-12-13T01:57:09.310978981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:09.323469 systemd[1]: Started cri-containerd-c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea.scope - libcontainer container c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea. Dec 13 01:57:09.331305 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:57:09.351028 containerd[1546]: time="2024-12-13T01:57:09.351000338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xpjth,Uid:a6e8c90c-6e1c-4e5f-a197-06bf87bcca01,Namespace:kube-system,Attempt:1,} returns sandbox id \"c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea\"" Dec 13 01:57:09.354049 containerd[1546]: time="2024-12-13T01:57:09.353521917Z" level=info msg="CreateContainer within sandbox \"c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:57:09.368773 containerd[1546]: time="2024-12-13T01:57:09.368744073Z" level=info msg="CreateContainer within sandbox \"c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"386a28a34fd50f044076abce64e04d67263d9f560068f316d6f03ca51b6a5a51\"" Dec 13 01:57:09.369199 containerd[1546]: time="2024-12-13T01:57:09.369172455Z" level=info msg="StartContainer for \"386a28a34fd50f044076abce64e04d67263d9f560068f316d6f03ca51b6a5a51\"" Dec 13 01:57:09.385479 systemd[1]: Started cri-containerd-386a28a34fd50f044076abce64e04d67263d9f560068f316d6f03ca51b6a5a51.scope - libcontainer container 386a28a34fd50f044076abce64e04d67263d9f560068f316d6f03ca51b6a5a51. Dec 13 01:57:09.417804 kubelet[2805]: I1213 01:57:09.417681 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54b9d6d844-bx8kz" podStartSLOduration=24.474838816 podStartE2EDuration="26.417670527s" podCreationTimestamp="2024-12-13 01:56:43 +0000 UTC" firstStartedPulling="2024-12-13 01:57:06.680198798 +0000 UTC m=+45.649521369" lastFinishedPulling="2024-12-13 01:57:08.6230305 +0000 UTC m=+47.592353080" observedRunningTime="2024-12-13 01:57:09.403675657 +0000 UTC m=+48.372998233" watchObservedRunningTime="2024-12-13 01:57:09.417670527 +0000 UTC m=+48.386993109" Dec 13 01:57:09.422744 containerd[1546]: time="2024-12-13T01:57:09.422725185Z" level=info msg="StartContainer for \"386a28a34fd50f044076abce64e04d67263d9f560068f316d6f03ca51b6a5a51\" returns successfully" Dec 13 01:57:10.011419 containerd[1546]: time="2024-12-13T01:57:10.010959031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:57:10.011803 containerd[1546]: time="2024-12-13T01:57:10.011635278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 01:57:10.011920 containerd[1546]: time="2024-12-13T01:57:10.011907580Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:57:10.013108 containerd[1546]: time="2024-12-13T01:57:10.013096127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:57:10.013683 containerd[1546]: 
time="2024-12-13T01:57:10.013662065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.390565415s" Dec 13 01:57:10.013683 containerd[1546]: time="2024-12-13T01:57:10.013681692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 01:57:10.014551 containerd[1546]: time="2024-12-13T01:57:10.014496059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:57:10.015856 containerd[1546]: time="2024-12-13T01:57:10.015344354Z" level=info msg="CreateContainer within sandbox \"94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:57:10.023571 containerd[1546]: time="2024-12-13T01:57:10.023547762Z" level=info msg="CreateContainer within sandbox \"94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f67fd4444e9d4a24c3354281c3452e7ab79fcceb436bd7d9d042e9d5d38223ea\"" Dec 13 01:57:10.024563 containerd[1546]: time="2024-12-13T01:57:10.023941241Z" level=info msg="StartContainer for \"f67fd4444e9d4a24c3354281c3452e7ab79fcceb436bd7d9d042e9d5d38223ea\"" Dec 13 01:57:10.026558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount629564371.mount: Deactivated successfully. Dec 13 01:57:10.063525 systemd[1]: Started cri-containerd-f67fd4444e9d4a24c3354281c3452e7ab79fcceb436bd7d9d042e9d5d38223ea.scope - libcontainer container f67fd4444e9d4a24c3354281c3452e7ab79fcceb436bd7d9d042e9d5d38223ea. Dec 13 01:57:10.084419 containerd[1546]: time="2024-12-13T01:57:10.084212992Z" level=info msg="StartContainer for \"f67fd4444e9d4a24c3354281c3452e7ab79fcceb436bd7d9d042e9d5d38223ea\" returns successfully" Dec 13 01:57:10.121571 containerd[1546]: time="2024-12-13T01:57:10.121532860Z" level=info msg="StopPodSandbox for \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\"" Dec 13 01:57:10.121921 containerd[1546]: time="2024-12-13T01:57:10.121541767Z" level=info msg="StopPodSandbox for \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\"" Dec 13 01:57:10.202243 containerd[1546]: 2024-12-13 01:57:10.169 [INFO][4693] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Dec 13 01:57:10.202243 containerd[1546]: 2024-12-13 01:57:10.170 [INFO][4693] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" iface="eth0" netns="/var/run/netns/cni-a8e17503-9e96-6f72-e2db-da39ecfad41f" Dec 13 01:57:10.202243 containerd[1546]: 2024-12-13 01:57:10.170 [INFO][4693] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" iface="eth0" netns="/var/run/netns/cni-a8e17503-9e96-6f72-e2db-da39ecfad41f" Dec 13 01:57:10.202243 containerd[1546]: 2024-12-13 01:57:10.170 [INFO][4693] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" iface="eth0" netns="/var/run/netns/cni-a8e17503-9e96-6f72-e2db-da39ecfad41f" Dec 13 01:57:10.202243 containerd[1546]: 2024-12-13 01:57:10.170 [INFO][4693] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Dec 13 01:57:10.202243 containerd[1546]: 2024-12-13 01:57:10.170 [INFO][4693] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Dec 13 01:57:10.202243 containerd[1546]: 2024-12-13 01:57:10.192 [INFO][4705] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" HandleID="k8s-pod-network.4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Workload="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:10.202243 containerd[1546]: 2024-12-13 01:57:10.192 [INFO][4705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:10.202243 containerd[1546]: 2024-12-13 01:57:10.192 [INFO][4705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:10.202243 containerd[1546]: 2024-12-13 01:57:10.196 [WARNING][4705] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" HandleID="k8s-pod-network.4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Workload="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:10.202243 containerd[1546]: 2024-12-13 01:57:10.196 [INFO][4705] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" HandleID="k8s-pod-network.4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Workload="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:10.202243 containerd[1546]: 2024-12-13 01:57:10.198 [INFO][4705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:10.202243 containerd[1546]: 2024-12-13 01:57:10.200 [INFO][4693] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Dec 13 01:57:10.203808 containerd[1546]: time="2024-12-13T01:57:10.202594487Z" level=info msg="TearDown network for sandbox \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\" successfully" Dec 13 01:57:10.203808 containerd[1546]: time="2024-12-13T01:57:10.202713498Z" level=info msg="StopPodSandbox for \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\" returns successfully" Dec 13 01:57:10.203808 containerd[1546]: time="2024-12-13T01:57:10.203441076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745cd949dc-994lc,Uid:2a968e95-a7f1-4b6c-9c03-61456caad8ed,Namespace:calico-system,Attempt:1,}" Dec 13 01:57:10.213726 containerd[1546]: 2024-12-13 01:57:10.180 [INFO][4694] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Dec 13 01:57:10.213726 containerd[1546]: 2024-12-13 01:57:10.180 [INFO][4694] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" iface="eth0" netns="/var/run/netns/cni-234d9079-5c1c-6047-c89c-ec86e4609782" Dec 13 01:57:10.213726 containerd[1546]: 2024-12-13 01:57:10.181 [INFO][4694] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" iface="eth0" netns="/var/run/netns/cni-234d9079-5c1c-6047-c89c-ec86e4609782" Dec 13 01:57:10.213726 containerd[1546]: 2024-12-13 01:57:10.181 [INFO][4694] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" iface="eth0" netns="/var/run/netns/cni-234d9079-5c1c-6047-c89c-ec86e4609782" Dec 13 01:57:10.213726 containerd[1546]: 2024-12-13 01:57:10.181 [INFO][4694] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Dec 13 01:57:10.213726 containerd[1546]: 2024-12-13 01:57:10.181 [INFO][4694] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Dec 13 01:57:10.213726 containerd[1546]: 2024-12-13 01:57:10.203 [INFO][4709] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" HandleID="k8s-pod-network.34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Workload="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:10.213726 containerd[1546]: 2024-12-13 01:57:10.203 [INFO][4709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:10.213726 containerd[1546]: 2024-12-13 01:57:10.203 [INFO][4709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:10.213726 containerd[1546]: 2024-12-13 01:57:10.208 [WARNING][4709] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" HandleID="k8s-pod-network.34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Workload="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:10.213726 containerd[1546]: 2024-12-13 01:57:10.209 [INFO][4709] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" HandleID="k8s-pod-network.34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Workload="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:10.213726 containerd[1546]: 2024-12-13 01:57:10.211 [INFO][4709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:10.213726 containerd[1546]: 2024-12-13 01:57:10.212 [INFO][4694] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Dec 13 01:57:10.214810 containerd[1546]: time="2024-12-13T01:57:10.213853012Z" level=info msg="TearDown network for sandbox \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\" successfully" Dec 13 01:57:10.214810 containerd[1546]: time="2024-12-13T01:57:10.213870703Z" level=info msg="StopPodSandbox for \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\" returns successfully" Dec 13 01:57:10.214810 containerd[1546]: time="2024-12-13T01:57:10.214537393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-httmm,Uid:5b46a7a9-9e2f-4592-85a2-7b01c18de070,Namespace:kube-system,Attempt:1,}" Dec 13 01:57:10.291787 systemd-networkd[1448]: cali666a56bb7e0: Link UP Dec 13 01:57:10.292611 systemd-networkd[1448]: cali666a56bb7e0: Gained carrier Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.240 [INFO][4718] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0 calico-kube-controllers-745cd949dc- calico-system 2a968e95-a7f1-4b6c-9c03-61456caad8ed 796 0 2024-12-13 01:56:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:745cd949dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-745cd949dc-994lc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali666a56bb7e0 [] []}} ContainerID="15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" Namespace="calico-system" Pod="calico-kube-controllers-745cd949dc-994lc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-" Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.240 [INFO][4718] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" Namespace="calico-system" Pod="calico-kube-controllers-745cd949dc-994lc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.263 [INFO][4742] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" HandleID="k8s-pod-network.15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" Workload="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.271 [INFO][4742] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" HandleID="k8s-pod-network.15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" Workload="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000518e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-745cd949dc-994lc", "timestamp":"2024-12-13 01:57:10.26353571 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 
01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.272 [INFO][4742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.272 [INFO][4742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.272 [INFO][4742] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.272 [INFO][4742] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" host="localhost" Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.276 [INFO][4742] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.278 [INFO][4742] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.279 [INFO][4742] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.280 [INFO][4742] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.280 [INFO][4742] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" host="localhost" Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.281 [INFO][4742] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888 Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.283 [INFO][4742] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" host="localhost" Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.287 [INFO][4742] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" host="localhost" Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.287 [INFO][4742] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" host="localhost" Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.287 [INFO][4742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
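Every assignment in these traces is bracketed by "Acquired host-wide IPAM lock." and "Released host-wide IPAM lock.", so concurrent CNI ADDs on the node serialize and no two pods can claim the same address. A toy Go illustration of that discipline — a mutex stands in for whatever node-wide locking mechanism the plugin actually uses:

```go
// Goroutines model concurrent CNI ADD invocations; the lock guarantees
// each one sees (and advances) a consistent view of the next free address.
package main

import (
	"fmt"
	"sync"
)

var (
	mu   sync.Mutex
	next = 131 // .131 was the next free ordinal earlier in the log
)

func assign() int {
	mu.Lock()         // "Acquired host-wide IPAM lock."
	defer mu.Unlock() // "Released host-wide IPAM lock."
	ip := next
	next++
	return ip
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Printf("assigned 192.168.88.%d\n", assign())
		}()
	}
	wg.Wait() // prints .131, .132, .133 in some order, each exactly once
}
```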
Dec 13 01:57:10.305431 containerd[1546]: 2024-12-13 01:57:10.287 [INFO][4742] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" HandleID="k8s-pod-network.15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" Workload="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:10.307442 containerd[1546]: 2024-12-13 01:57:10.288 [INFO][4718] cni-plugin/k8s.go 386: Populated endpoint ContainerID="15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" Namespace="calico-system" Pod="calico-kube-controllers-745cd949dc-994lc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0", GenerateName:"calico-kube-controllers-745cd949dc-", Namespace:"calico-system", SelfLink:"", UID:"2a968e95-a7f1-4b6c-9c03-61456caad8ed", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745cd949dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-745cd949dc-994lc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali666a56bb7e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:10.307442 containerd[1546]: 2024-12-13 01:57:10.288 [INFO][4718] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" Namespace="calico-system" Pod="calico-kube-controllers-745cd949dc-994lc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:10.307442 containerd[1546]: 2024-12-13 01:57:10.288 [INFO][4718] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali666a56bb7e0 ContainerID="15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" Namespace="calico-system" Pod="calico-kube-controllers-745cd949dc-994lc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:10.307442 containerd[1546]: 2024-12-13 01:57:10.293 [INFO][4718] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" Namespace="calico-system" Pod="calico-kube-controllers-745cd949dc-994lc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:10.307442 containerd[1546]: 2024-12-13 01:57:10.293 [INFO][4718] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" Namespace="calico-system" Pod="calico-kube-controllers-745cd949dc-994lc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0", GenerateName:"calico-kube-controllers-745cd949dc-", Namespace:"calico-system", SelfLink:"", UID:"2a968e95-a7f1-4b6c-9c03-61456caad8ed", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745cd949dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888", Pod:"calico-kube-controllers-745cd949dc-994lc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali666a56bb7e0", MAC:"e6:40:81:08:8d:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:10.307442 containerd[1546]: 2024-12-13 01:57:10.300 [INFO][4718] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888" Namespace="calico-system" Pod="calico-kube-controllers-745cd949dc-994lc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:10.326022 systemd-networkd[1448]: cali85fe66bcc4f: Link UP Dec 13 01:57:10.326123 systemd-networkd[1448]: cali85fe66bcc4f: Gained carrier Dec 13 01:57:10.337218 containerd[1546]: time="2024-12-13T01:57:10.337000494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:10.337218 containerd[1546]: time="2024-12-13T01:57:10.337032789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:10.337218 containerd[1546]: time="2024-12-13T01:57:10.337039634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:10.337218 containerd[1546]: time="2024-12-13T01:57:10.337096749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.247 [INFO][4727] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--httmm-eth0 coredns-7db6d8ff4d- kube-system 5b46a7a9-9e2f-4592-85a2-7b01c18de070 797 0 2024-12-13 01:56:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-httmm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali85fe66bcc4f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" Namespace="kube-system" Pod="coredns-7db6d8ff4d-httmm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--httmm-" Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.247 [INFO][4727] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" Namespace="kube-system" Pod="coredns-7db6d8ff4d-httmm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.268 [INFO][4746] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" HandleID="k8s-pod-network.4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" Workload="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.274 [INFO][4746] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" HandleID="k8s-pod-network.4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" Workload="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000384710), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-httmm", "timestamp":"2024-12-13 01:57:10.268671529 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.274 [INFO][4746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.287 [INFO][4746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.287 [INFO][4746] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.290 [INFO][4746] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" host="localhost" Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.295 [INFO][4746] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.302 [INFO][4746] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.305 [INFO][4746] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.306 [INFO][4746] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.306 [INFO][4746] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" host="localhost" Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.308 [INFO][4746] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677 Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.310 [INFO][4746] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" host="localhost" Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.316 [INFO][4746] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" host="localhost" Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.316 [INFO][4746] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" host="localhost" Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.316 [INFO][4746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
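The host-side interface names in these traces (cali1692e8cd9b4, cali9e69d4c9afe, cali666a56bb7e0, cali85fe66bcc4f) are all exactly 15 characters: "cali" plus 11 hex digits, fitting Linux's IFNAMSIZ limit. A sketch of how such a name can be derived deterministically from workload identity; the "cali" prefix and truncated-hash shape match the log, but the exact input string and hash function below are assumptions, not verified Calico internals:

```go
// Derive a stable, 15-character host-side veth name from a workload's
// identity. ASSUMPTION: the input string ("namespace.pod") and SHA-1 are
// illustrative; only the "cali" + 11-hex-char shape is taken from the log.
package main

import (
	"crypto/sha1"
	"fmt"
)

func vethName(namespace, pod string) string {
	sum := sha1.Sum([]byte(namespace + "." + pod))
	return fmt.Sprintf("cali%x", sum)[:15] // "cali" + 11 hex chars
}

func main() {
	fmt.Println(vethName("kube-system", "coredns-7db6d8ff4d-httmm"))
}
```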
Dec 13 01:57:10.338349 containerd[1546]: 2024-12-13 01:57:10.316 [INFO][4746] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" HandleID="k8s-pod-network.4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" Workload="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:10.338832 containerd[1546]: 2024-12-13 01:57:10.319 [INFO][4727] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" Namespace="kube-system" Pod="coredns-7db6d8ff4d-httmm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--httmm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5b46a7a9-9e2f-4592-85a2-7b01c18de070", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-httmm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85fe66bcc4f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:10.338832 containerd[1546]: 2024-12-13 01:57:10.319 [INFO][4727] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" Namespace="kube-system" Pod="coredns-7db6d8ff4d-httmm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:10.338832 containerd[1546]: 2024-12-13 01:57:10.319 [INFO][4727] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85fe66bcc4f ContainerID="4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" Namespace="kube-system" Pod="coredns-7db6d8ff4d-httmm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:10.338832 containerd[1546]: 2024-12-13 01:57:10.323 [INFO][4727] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" Namespace="kube-system" Pod="coredns-7db6d8ff4d-httmm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:10.338832 containerd[1546]: 2024-12-13 01:57:10.323 
[INFO][4727] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" Namespace="kube-system" Pod="coredns-7db6d8ff4d-httmm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--httmm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5b46a7a9-9e2f-4592-85a2-7b01c18de070", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677", Pod:"coredns-7db6d8ff4d-httmm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85fe66bcc4f", MAC:"1a:9c:0e:41:04:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:10.338832 containerd[1546]: 2024-12-13 01:57:10.336 [INFO][4727] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677" Namespace="kube-system" Pod="coredns-7db6d8ff4d-httmm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:10.356480 systemd[1]: Started cri-containerd-15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888.scope - libcontainer container 15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888. Dec 13 01:57:10.365014 containerd[1546]: time="2024-12-13T01:57:10.364198443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:10.365014 containerd[1546]: time="2024-12-13T01:57:10.364368339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:10.365014 containerd[1546]: time="2024-12-13T01:57:10.364420782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:10.365714 containerd[1546]: time="2024-12-13T01:57:10.365481561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:10.368003 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:57:10.379471 systemd[1]: Started cri-containerd-4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677.scope - libcontainer container 4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677. Dec 13 01:57:10.390728 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:57:10.401047 containerd[1546]: time="2024-12-13T01:57:10.400793659Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:57:10.403411 containerd[1546]: time="2024-12-13T01:57:10.401632831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:57:10.404531 containerd[1546]: time="2024-12-13T01:57:10.404512344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 389.999497ms" Dec 13 01:57:10.404572 containerd[1546]: time="2024-12-13T01:57:10.404537596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:57:10.405090 containerd[1546]: time="2024-12-13T01:57:10.405023347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745cd949dc-994lc,Uid:2a968e95-a7f1-4b6c-9c03-61456caad8ed,Namespace:calico-system,Attempt:1,} returns sandbox id \"15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888\"" Dec 13 01:57:10.408269 containerd[1546]: time="2024-12-13T01:57:10.406384212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:57:10.413354 kubelet[2805]: I1213 01:57:10.413112 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xpjth" podStartSLOduration=33.413101612 podStartE2EDuration="33.413101612s" podCreationTimestamp="2024-12-13 01:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:10.413068313 +0000 UTC m=+49.382390894" watchObservedRunningTime="2024-12-13 01:57:10.413101612 +0000 UTC m=+49.382424188" Dec 13 01:57:10.414535 containerd[1546]: time="2024-12-13T01:57:10.414515098Z" level=info msg="CreateContainer within sandbox \"796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:57:10.426719 containerd[1546]: time="2024-12-13T01:57:10.426693145Z" level=info msg="CreateContainer within sandbox \"796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"aa624287280e9a2b50ba14b922b705ba83cb70ce0bdb504e005815fa13839a77\"" Dec 13 01:57:10.427682 containerd[1546]: time="2024-12-13T01:57:10.427042747Z" level=info msg="StartContainer for \"aa624287280e9a2b50ba14b922b705ba83cb70ce0bdb504e005815fa13839a77\"" Dec 13 
01:57:10.444093 containerd[1546]: time="2024-12-13T01:57:10.444073111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-httmm,Uid:5b46a7a9-9e2f-4592-85a2-7b01c18de070,Namespace:kube-system,Attempt:1,} returns sandbox id \"4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677\"" Dec 13 01:57:10.446277 containerd[1546]: time="2024-12-13T01:57:10.446259145Z" level=info msg="CreateContainer within sandbox \"4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:57:10.459428 containerd[1546]: time="2024-12-13T01:57:10.459383858Z" level=info msg="CreateContainer within sandbox \"4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19c1d5cced0fff8a467acfa437d67ee18e2c8f09003e5276affeb25751ed71b4\"" Dec 13 01:57:10.460227 containerd[1546]: time="2024-12-13T01:57:10.459726892Z" level=info msg="StartContainer for \"19c1d5cced0fff8a467acfa437d67ee18e2c8f09003e5276affeb25751ed71b4\"" Dec 13 01:57:10.460486 systemd[1]: Started cri-containerd-aa624287280e9a2b50ba14b922b705ba83cb70ce0bdb504e005815fa13839a77.scope - libcontainer container aa624287280e9a2b50ba14b922b705ba83cb70ce0bdb504e005815fa13839a77. Dec 13 01:57:10.512487 systemd[1]: Started cri-containerd-19c1d5cced0fff8a467acfa437d67ee18e2c8f09003e5276affeb25751ed71b4.scope - libcontainer container 19c1d5cced0fff8a467acfa437d67ee18e2c8f09003e5276affeb25751ed71b4. Dec 13 01:57:10.539976 containerd[1546]: time="2024-12-13T01:57:10.539945684Z" level=info msg="StartContainer for \"19c1d5cced0fff8a467acfa437d67ee18e2c8f09003e5276affeb25751ed71b4\" returns successfully" Dec 13 01:57:10.540653 containerd[1546]: time="2024-12-13T01:57:10.539961060Z" level=info msg="StartContainer for \"aa624287280e9a2b50ba14b922b705ba83cb70ce0bdb504e005815fa13839a77\" returns successfully" Dec 13 01:57:10.634344 systemd[1]: run-netns-cni\x2d234d9079\x2d5c1c\x2d6047\x2dc89c\x2dec86e4609782.mount: Deactivated successfully. Dec 13 01:57:10.634406 systemd[1]: run-netns-cni\x2da8e17503\x2d9e96\x2d6f72\x2de2db\x2dda39ecfad41f.mount: Deactivated successfully. 
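[editor's note] The containerd entries above trace one complete Calico CNI ADD for coredns-7db6d8ff4d-httmm: IPAM assigns 192.168.88.134/26, the WorkloadEndpoint is populated, the host-side veth is named cali85fe66bcc4f, the MAC and active container ID are written back to the datastore, and then the sandbox and the coredns container start. For mining entries like these, here is a minimal Go sketch; it only assumes the key="value" pairs visible in the log text above (ContainerID, Pod, Workload, HandleID), not any Calico API, and reads one journal line at a time from stdin.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches the key="value" pairs that appear in the Calico cni-plugin
    // journal entries above, e.g. ContainerID="4d0d..." Pod="coredns-...".
    var kvRe = regexp.MustCompile(`(ContainerID|Pod|Workload|HandleID)="([^"]*)"`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        // Raise the token limit in case merged entries exceed
        // bufio.Scanner's 64 KB default.
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
        for sc.Scan() {
            fields := map[string]string{}
            for _, m := range kvRe.FindAllStringSubmatch(sc.Text(), -1) {
                fields[m[1]] = m[2]
            }
            if id := fields["ContainerID"]; id != "" {
                fmt.Printf("container=%s pod=%s workload=%s\n",
                    id, fields["Pod"], fields["Workload"])
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "scan:", err)
        }
    }

Fed something like "journalctl -u containerd --no-pager", this would print one summary line per CNI event that names a ContainerID. [end editor's note]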
Dec 13 01:57:11.279564 systemd-networkd[1448]: cali9e69d4c9afe: Gained IPv6LL Dec 13 01:57:11.435222 kubelet[2805]: I1213 01:57:11.435185 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54b9d6d844-snjl9" podStartSLOduration=25.44337755 podStartE2EDuration="28.435171833s" podCreationTimestamp="2024-12-13 01:56:43 +0000 UTC" firstStartedPulling="2024-12-13 01:57:07.413804364 +0000 UTC m=+46.383126936" lastFinishedPulling="2024-12-13 01:57:10.405598647 +0000 UTC m=+49.374921219" observedRunningTime="2024-12-13 01:57:11.413468108 +0000 UTC m=+50.382790698" watchObservedRunningTime="2024-12-13 01:57:11.435171833 +0000 UTC m=+50.404494410" Dec 13 01:57:11.518037 kubelet[2805]: I1213 01:57:11.518002 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-httmm" podStartSLOduration=34.517989661 podStartE2EDuration="34.517989661s" podCreationTimestamp="2024-12-13 01:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:11.439464706 +0000 UTC m=+50.408787286" watchObservedRunningTime="2024-12-13 01:57:11.517989661 +0000 UTC m=+50.487312237" Dec 13 01:57:11.896119 containerd[1546]: time="2024-12-13T01:57:11.896091722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:57:11.897006 containerd[1546]: time="2024-12-13T01:57:11.896983777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 01:57:11.897659 containerd[1546]: time="2024-12-13T01:57:11.897331466Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:57:11.900365 containerd[1546]: time="2024-12-13T01:57:11.900326068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:57:11.901016 containerd[1546]: time="2024-12-13T01:57:11.900730618Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.494277122s" Dec 13 01:57:11.901016 containerd[1546]: time="2024-12-13T01:57:11.900749045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 01:57:11.901485 containerd[1546]: time="2024-12-13T01:57:11.901413402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:57:11.902419 containerd[1546]: time="2024-12-13T01:57:11.902312002Z" level=info msg="CreateContainer within sandbox \"94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:57:11.917152 containerd[1546]: time="2024-12-13T01:57:11.917124513Z" level=info 
msg="CreateContainer within sandbox \"94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"624bbebf98bd46d73ce32e5051b9cb5e545833987d9c0512d3f8c332770c3a8d\"" Dec 13 01:57:11.917690 containerd[1546]: time="2024-12-13T01:57:11.917574540Z" level=info msg="StartContainer for \"624bbebf98bd46d73ce32e5051b9cb5e545833987d9c0512d3f8c332770c3a8d\"" Dec 13 01:57:11.919580 systemd-networkd[1448]: cali85fe66bcc4f: Gained IPv6LL Dec 13 01:57:11.947493 systemd[1]: Started cri-containerd-624bbebf98bd46d73ce32e5051b9cb5e545833987d9c0512d3f8c332770c3a8d.scope - libcontainer container 624bbebf98bd46d73ce32e5051b9cb5e545833987d9c0512d3f8c332770c3a8d. Dec 13 01:57:11.964623 containerd[1546]: time="2024-12-13T01:57:11.964522230Z" level=info msg="StartContainer for \"624bbebf98bd46d73ce32e5051b9cb5e545833987d9c0512d3f8c332770c3a8d\" returns successfully" Dec 13 01:57:12.111491 systemd-networkd[1448]: cali666a56bb7e0: Gained IPv6LL Dec 13 01:57:12.314481 kubelet[2805]: I1213 01:57:12.314405 2805 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:57:12.316755 kubelet[2805]: I1213 01:57:12.316740 2805 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:57:12.408747 kubelet[2805]: I1213 01:57:12.408696 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pdngg" podStartSLOduration=24.890457178 podStartE2EDuration="29.408685686s" podCreationTimestamp="2024-12-13 01:56:43 +0000 UTC" firstStartedPulling="2024-12-13 01:57:07.382924288 +0000 UTC m=+46.352246860" lastFinishedPulling="2024-12-13 01:57:11.901152791 +0000 UTC m=+50.870475368" observedRunningTime="2024-12-13 01:57:12.408483347 +0000 UTC m=+51.377805927" watchObservedRunningTime="2024-12-13 01:57:12.408685686 +0000 UTC m=+51.378008261" Dec 13 01:57:15.848852 containerd[1546]: time="2024-12-13T01:57:15.848817024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:57:15.857681 containerd[1546]: time="2024-12-13T01:57:15.857614864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 01:57:15.866439 containerd[1546]: time="2024-12-13T01:57:15.866328272Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:57:15.881340 containerd[1546]: time="2024-12-13T01:57:15.881299959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:57:15.882010 containerd[1546]: time="2024-12-13T01:57:15.881664595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.980056021s" Dec 13 
01:57:15.882010 containerd[1546]: time="2024-12-13T01:57:15.881683998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:57:15.982065 containerd[1546]: time="2024-12-13T01:57:15.981691437Z" level=info msg="CreateContainer within sandbox \"15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:57:15.991472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1885920548.mount: Deactivated successfully. Dec 13 01:57:15.993240 containerd[1546]: time="2024-12-13T01:57:15.993218080Z" level=info msg="CreateContainer within sandbox \"15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e32f09fee0b125b6665d2060c96d37d5f5e516222d18d084a6d43180e31bb254\"" Dec 13 01:57:15.994373 containerd[1546]: time="2024-12-13T01:57:15.994297956Z" level=info msg="StartContainer for \"e32f09fee0b125b6665d2060c96d37d5f5e516222d18d084a6d43180e31bb254\"" Dec 13 01:57:16.013496 systemd[1]: Started cri-containerd-e32f09fee0b125b6665d2060c96d37d5f5e516222d18d084a6d43180e31bb254.scope - libcontainer container e32f09fee0b125b6665d2060c96d37d5f5e516222d18d084a6d43180e31bb254. Dec 13 01:57:16.049145 containerd[1546]: time="2024-12-13T01:57:16.049041508Z" level=info msg="StartContainer for \"e32f09fee0b125b6665d2060c96d37d5f5e516222d18d084a6d43180e31bb254\" returns successfully" Dec 13 01:57:16.420084 kubelet[2805]: I1213 01:57:16.419218 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-745cd949dc-994lc" podStartSLOduration=27.943387715 podStartE2EDuration="33.419205181s" podCreationTimestamp="2024-12-13 01:56:43 +0000 UTC" firstStartedPulling="2024-12-13 01:57:10.406307549 +0000 UTC m=+49.375630120" lastFinishedPulling="2024-12-13 01:57:15.882125014 +0000 UTC m=+54.851447586" observedRunningTime="2024-12-13 01:57:16.418836521 +0000 UTC m=+55.388159101" watchObservedRunningTime="2024-12-13 01:57:16.419205181 +0000 UTC m=+55.388527756" Dec 13 01:57:21.193467 containerd[1546]: time="2024-12-13T01:57:21.193329214Z" level=info msg="StopPodSandbox for \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\"" Dec 13 01:57:21.358284 containerd[1546]: 2024-12-13 01:57:21.322 [WARNING][5084] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0", GenerateName:"calico-kube-controllers-745cd949dc-", Namespace:"calico-system", SelfLink:"", UID:"2a968e95-a7f1-4b6c-9c03-61456caad8ed", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745cd949dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888", Pod:"calico-kube-controllers-745cd949dc-994lc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali666a56bb7e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:21.358284 containerd[1546]: 2024-12-13 01:57:21.324 [INFO][5084] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Dec 13 01:57:21.358284 containerd[1546]: 2024-12-13 01:57:21.324 [INFO][5084] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" iface="eth0" netns="" Dec 13 01:57:21.358284 containerd[1546]: 2024-12-13 01:57:21.324 [INFO][5084] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Dec 13 01:57:21.358284 containerd[1546]: 2024-12-13 01:57:21.324 [INFO][5084] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Dec 13 01:57:21.358284 containerd[1546]: 2024-12-13 01:57:21.345 [INFO][5090] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" HandleID="k8s-pod-network.4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Workload="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:21.358284 containerd[1546]: 2024-12-13 01:57:21.346 [INFO][5090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:21.358284 containerd[1546]: 2024-12-13 01:57:21.346 [INFO][5090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:21.358284 containerd[1546]: 2024-12-13 01:57:21.352 [WARNING][5090] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" HandleID="k8s-pod-network.4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Workload="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:21.358284 containerd[1546]: 2024-12-13 01:57:21.352 [INFO][5090] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" HandleID="k8s-pod-network.4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Workload="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:21.358284 containerd[1546]: 2024-12-13 01:57:21.353 [INFO][5090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:21.358284 containerd[1546]: 2024-12-13 01:57:21.356 [INFO][5084] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Dec 13 01:57:21.358284 containerd[1546]: time="2024-12-13T01:57:21.358143411Z" level=info msg="TearDown network for sandbox \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\" successfully" Dec 13 01:57:21.358284 containerd[1546]: time="2024-12-13T01:57:21.358177296Z" level=info msg="StopPodSandbox for \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\" returns successfully" Dec 13 01:57:21.390449 containerd[1546]: time="2024-12-13T01:57:21.390427568Z" level=info msg="RemovePodSandbox for \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\"" Dec 13 01:57:21.390500 containerd[1546]: time="2024-12-13T01:57:21.390457322Z" level=info msg="Forcibly stopping sandbox \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\"" Dec 13 01:57:21.441143 containerd[1546]: 2024-12-13 01:57:21.419 [WARNING][5108] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0", GenerateName:"calico-kube-controllers-745cd949dc-", Namespace:"calico-system", SelfLink:"", UID:"2a968e95-a7f1-4b6c-9c03-61456caad8ed", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745cd949dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15c3fd2fa5c4b770b103a7483d167d4aefe1b4918fba7caa9e524c5094677888", Pod:"calico-kube-controllers-745cd949dc-994lc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali666a56bb7e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:21.441143 containerd[1546]: 2024-12-13 01:57:21.419 [INFO][5108] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Dec 13 01:57:21.441143 containerd[1546]: 2024-12-13 01:57:21.419 [INFO][5108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" iface="eth0" netns="" Dec 13 01:57:21.441143 containerd[1546]: 2024-12-13 01:57:21.419 [INFO][5108] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Dec 13 01:57:21.441143 containerd[1546]: 2024-12-13 01:57:21.419 [INFO][5108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Dec 13 01:57:21.441143 containerd[1546]: 2024-12-13 01:57:21.431 [INFO][5114] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" HandleID="k8s-pod-network.4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Workload="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:21.441143 containerd[1546]: 2024-12-13 01:57:21.431 [INFO][5114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:21.441143 containerd[1546]: 2024-12-13 01:57:21.431 [INFO][5114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:21.441143 containerd[1546]: 2024-12-13 01:57:21.437 [WARNING][5114] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" HandleID="k8s-pod-network.4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Workload="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:21.441143 containerd[1546]: 2024-12-13 01:57:21.437 [INFO][5114] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" HandleID="k8s-pod-network.4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Workload="localhost-k8s-calico--kube--controllers--745cd949dc--994lc-eth0" Dec 13 01:57:21.441143 containerd[1546]: 2024-12-13 01:57:21.438 [INFO][5114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:21.441143 containerd[1546]: 2024-12-13 01:57:21.439 [INFO][5108] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9" Dec 13 01:57:21.442660 containerd[1546]: time="2024-12-13T01:57:21.441157291Z" level=info msg="TearDown network for sandbox \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\" successfully" Dec 13 01:57:21.453158 systemd[1]: run-containerd-runc-k8s.io-9a74cdda4831310496da2884b1555e4d835686129b7fdfee31c8f108e55c5acd-runc.rAbR94.mount: Deactivated successfully. Dec 13 01:57:21.458712 containerd[1546]: time="2024-12-13T01:57:21.458666860Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:57:21.464616 containerd[1546]: time="2024-12-13T01:57:21.464590954Z" level=info msg="RemovePodSandbox \"4ad2a144f7620d8413a8e1b402b5cc774756c408886def2163bd1850aeaaa4f9\" returns successfully" Dec 13 01:57:21.465311 containerd[1546]: time="2024-12-13T01:57:21.465296223Z" level=info msg="StopPodSandbox for \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\"" Dec 13 01:57:21.523384 containerd[1546]: 2024-12-13 01:57:21.493 [WARNING][5152] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0", GenerateName:"calico-apiserver-54b9d6d844-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3155228-6a82-4b05-aa2f-1efe3f581565", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54b9d6d844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b", Pod:"calico-apiserver-54b9d6d844-snjl9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1692e8cd9b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:21.523384 containerd[1546]: 2024-12-13 01:57:21.493 [INFO][5152] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Dec 13 01:57:21.523384 containerd[1546]: 2024-12-13 01:57:21.493 [INFO][5152] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" iface="eth0" netns="" Dec 13 01:57:21.523384 containerd[1546]: 2024-12-13 01:57:21.493 [INFO][5152] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Dec 13 01:57:21.523384 containerd[1546]: 2024-12-13 01:57:21.493 [INFO][5152] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Dec 13 01:57:21.523384 containerd[1546]: 2024-12-13 01:57:21.515 [INFO][5158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" HandleID="k8s-pod-network.55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Workload="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:21.523384 containerd[1546]: 2024-12-13 01:57:21.515 [INFO][5158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:21.523384 containerd[1546]: 2024-12-13 01:57:21.515 [INFO][5158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:21.523384 containerd[1546]: 2024-12-13 01:57:21.518 [WARNING][5158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" HandleID="k8s-pod-network.55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Workload="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:21.523384 containerd[1546]: 2024-12-13 01:57:21.519 [INFO][5158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" HandleID="k8s-pod-network.55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Workload="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:21.523384 containerd[1546]: 2024-12-13 01:57:21.520 [INFO][5158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:21.523384 containerd[1546]: 2024-12-13 01:57:21.522 [INFO][5152] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Dec 13 01:57:21.523384 containerd[1546]: time="2024-12-13T01:57:21.523278410Z" level=info msg="TearDown network for sandbox \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\" successfully" Dec 13 01:57:21.523384 containerd[1546]: time="2024-12-13T01:57:21.523308205Z" level=info msg="StopPodSandbox for \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\" returns successfully" Dec 13 01:57:21.524954 containerd[1546]: time="2024-12-13T01:57:21.524064368Z" level=info msg="RemovePodSandbox for \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\"" Dec 13 01:57:21.524954 containerd[1546]: time="2024-12-13T01:57:21.524085705Z" level=info msg="Forcibly stopping sandbox \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\"" Dec 13 01:57:21.577200 containerd[1546]: 2024-12-13 01:57:21.555 [WARNING][5178] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0", GenerateName:"calico-apiserver-54b9d6d844-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3155228-6a82-4b05-aa2f-1efe3f581565", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54b9d6d844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"796b5dd2a211ce1d3eb0f8a5837d5d73a38a9b847196bef6d3692bc56efd786b", Pod:"calico-apiserver-54b9d6d844-snjl9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1692e8cd9b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:21.577200 containerd[1546]: 2024-12-13 01:57:21.555 [INFO][5178] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Dec 13 01:57:21.577200 containerd[1546]: 2024-12-13 01:57:21.555 [INFO][5178] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" iface="eth0" netns="" Dec 13 01:57:21.577200 containerd[1546]: 2024-12-13 01:57:21.555 [INFO][5178] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Dec 13 01:57:21.577200 containerd[1546]: 2024-12-13 01:57:21.555 [INFO][5178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Dec 13 01:57:21.577200 containerd[1546]: 2024-12-13 01:57:21.571 [INFO][5184] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" HandleID="k8s-pod-network.55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Workload="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:21.577200 containerd[1546]: 2024-12-13 01:57:21.571 [INFO][5184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:21.577200 containerd[1546]: 2024-12-13 01:57:21.571 [INFO][5184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:21.577200 containerd[1546]: 2024-12-13 01:57:21.574 [WARNING][5184] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" HandleID="k8s-pod-network.55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Workload="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:21.577200 containerd[1546]: 2024-12-13 01:57:21.574 [INFO][5184] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" HandleID="k8s-pod-network.55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Workload="localhost-k8s-calico--apiserver--54b9d6d844--snjl9-eth0" Dec 13 01:57:21.577200 containerd[1546]: 2024-12-13 01:57:21.575 [INFO][5184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:21.577200 containerd[1546]: 2024-12-13 01:57:21.576 [INFO][5178] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf" Dec 13 01:57:21.578014 containerd[1546]: time="2024-12-13T01:57:21.577220696Z" level=info msg="TearDown network for sandbox \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\" successfully" Dec 13 01:57:21.579887 containerd[1546]: time="2024-12-13T01:57:21.579869682Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:57:21.579936 containerd[1546]: time="2024-12-13T01:57:21.579921304Z" level=info msg="RemovePodSandbox \"55a0234de0fdb080570399982d71f3c488a55d245bb5aa7d3b2f4c55846fe2cf\" returns successfully" Dec 13 01:57:21.580357 containerd[1546]: time="2024-12-13T01:57:21.580342157Z" level=info msg="StopPodSandbox for \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\"" Dec 13 01:57:21.622028 containerd[1546]: 2024-12-13 01:57:21.601 [WARNING][5202] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pdngg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45fb1911-1ccb-4174-8fae-ff2967d97276", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e", Pod:"csi-node-driver-pdngg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d56795af7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:21.622028 containerd[1546]: 2024-12-13 01:57:21.602 [INFO][5202] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Dec 13 01:57:21.622028 containerd[1546]: 2024-12-13 01:57:21.602 [INFO][5202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" iface="eth0" netns="" Dec 13 01:57:21.622028 containerd[1546]: 2024-12-13 01:57:21.602 [INFO][5202] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Dec 13 01:57:21.622028 containerd[1546]: 2024-12-13 01:57:21.602 [INFO][5202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Dec 13 01:57:21.622028 containerd[1546]: 2024-12-13 01:57:21.615 [INFO][5208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" HandleID="k8s-pod-network.95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Workload="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:21.622028 containerd[1546]: 2024-12-13 01:57:21.615 [INFO][5208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:21.622028 containerd[1546]: 2024-12-13 01:57:21.615 [INFO][5208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:21.622028 containerd[1546]: 2024-12-13 01:57:21.619 [WARNING][5208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" HandleID="k8s-pod-network.95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Workload="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:21.622028 containerd[1546]: 2024-12-13 01:57:21.619 [INFO][5208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" HandleID="k8s-pod-network.95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Workload="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:21.622028 containerd[1546]: 2024-12-13 01:57:21.620 [INFO][5208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:21.622028 containerd[1546]: 2024-12-13 01:57:21.621 [INFO][5202] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Dec 13 01:57:21.622827 containerd[1546]: time="2024-12-13T01:57:21.622047606Z" level=info msg="TearDown network for sandbox \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\" successfully" Dec 13 01:57:21.622827 containerd[1546]: time="2024-12-13T01:57:21.622079060Z" level=info msg="StopPodSandbox for \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\" returns successfully" Dec 13 01:57:21.622827 containerd[1546]: time="2024-12-13T01:57:21.622346251Z" level=info msg="RemovePodSandbox for \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\"" Dec 13 01:57:21.622827 containerd[1546]: time="2024-12-13T01:57:21.622361387Z" level=info msg="Forcibly stopping sandbox \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\"" Dec 13 01:57:21.669428 containerd[1546]: 2024-12-13 01:57:21.645 [WARNING][5226] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pdngg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45fb1911-1ccb-4174-8fae-ff2967d97276", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94e3269db44fa236c54e42d849470fea3698e25dd75614aa93b1a7fd07331c2e", Pod:"csi-node-driver-pdngg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d56795af7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:21.669428 containerd[1546]: 2024-12-13 01:57:21.645 [INFO][5226] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Dec 13 01:57:21.669428 containerd[1546]: 2024-12-13 01:57:21.645 [INFO][5226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" iface="eth0" netns="" Dec 13 01:57:21.669428 containerd[1546]: 2024-12-13 01:57:21.645 [INFO][5226] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Dec 13 01:57:21.669428 containerd[1546]: 2024-12-13 01:57:21.645 [INFO][5226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Dec 13 01:57:21.669428 containerd[1546]: 2024-12-13 01:57:21.663 [INFO][5233] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" HandleID="k8s-pod-network.95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Workload="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:21.669428 containerd[1546]: 2024-12-13 01:57:21.663 [INFO][5233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:21.669428 containerd[1546]: 2024-12-13 01:57:21.663 [INFO][5233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:21.669428 containerd[1546]: 2024-12-13 01:57:21.666 [WARNING][5233] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" HandleID="k8s-pod-network.95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Workload="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:21.669428 containerd[1546]: 2024-12-13 01:57:21.666 [INFO][5233] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" HandleID="k8s-pod-network.95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Workload="localhost-k8s-csi--node--driver--pdngg-eth0" Dec 13 01:57:21.669428 containerd[1546]: 2024-12-13 01:57:21.667 [INFO][5233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:21.669428 containerd[1546]: 2024-12-13 01:57:21.668 [INFO][5226] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58" Dec 13 01:57:21.669827 containerd[1546]: time="2024-12-13T01:57:21.669449157Z" level=info msg="TearDown network for sandbox \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\" successfully" Dec 13 01:57:21.671459 containerd[1546]: time="2024-12-13T01:57:21.671442641Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:57:21.671506 containerd[1546]: time="2024-12-13T01:57:21.671473078Z" level=info msg="RemovePodSandbox \"95bde1d7ff08ea4a6d01e5c0a61c1297cc39eeb9cd43a0387c4c3456c1c05d58\" returns successfully" Dec 13 01:57:21.671916 containerd[1546]: time="2024-12-13T01:57:21.671797143Z" level=info msg="StopPodSandbox for \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\"" Dec 13 01:57:21.716469 containerd[1546]: 2024-12-13 01:57:21.692 [WARNING][5251] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0", GenerateName:"calico-apiserver-54b9d6d844-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad033622-c6e5-4ed2-aa0b-b2adc8bb3378", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54b9d6d844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb", Pod:"calico-apiserver-54b9d6d844-bx8kz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95fc02eb710", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:21.716469 containerd[1546]: 2024-12-13 01:57:21.693 [INFO][5251] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Dec 13 01:57:21.716469 containerd[1546]: 2024-12-13 01:57:21.693 [INFO][5251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" iface="eth0" netns="" Dec 13 01:57:21.716469 containerd[1546]: 2024-12-13 01:57:21.693 [INFO][5251] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Dec 13 01:57:21.716469 containerd[1546]: 2024-12-13 01:57:21.693 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Dec 13 01:57:21.716469 containerd[1546]: 2024-12-13 01:57:21.709 [INFO][5257] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" HandleID="k8s-pod-network.52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Workload="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:21.716469 containerd[1546]: 2024-12-13 01:57:21.709 [INFO][5257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:21.716469 containerd[1546]: 2024-12-13 01:57:21.709 [INFO][5257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:21.716469 containerd[1546]: 2024-12-13 01:57:21.713 [WARNING][5257] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" HandleID="k8s-pod-network.52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Workload="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:21.716469 containerd[1546]: 2024-12-13 01:57:21.713 [INFO][5257] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" HandleID="k8s-pod-network.52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Workload="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:21.716469 containerd[1546]: 2024-12-13 01:57:21.714 [INFO][5257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:21.716469 containerd[1546]: 2024-12-13 01:57:21.715 [INFO][5251] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Dec 13 01:57:21.716469 containerd[1546]: time="2024-12-13T01:57:21.716203100Z" level=info msg="TearDown network for sandbox \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\" successfully" Dec 13 01:57:21.716469 containerd[1546]: time="2024-12-13T01:57:21.716218417Z" level=info msg="StopPodSandbox for \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\" returns successfully" Dec 13 01:57:21.717045 containerd[1546]: time="2024-12-13T01:57:21.716536141Z" level=info msg="RemovePodSandbox for \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\"" Dec 13 01:57:21.717045 containerd[1546]: time="2024-12-13T01:57:21.716551340Z" level=info msg="Forcibly stopping sandbox \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\"" Dec 13 01:57:21.756660 containerd[1546]: 2024-12-13 01:57:21.738 [WARNING][5275] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0", GenerateName:"calico-apiserver-54b9d6d844-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad033622-c6e5-4ed2-aa0b-b2adc8bb3378", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54b9d6d844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db86860c82393ebd192d318cc415813a465cc0f3c4d49221cb35563dc33859fb", Pod:"calico-apiserver-54b9d6d844-bx8kz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95fc02eb710", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:21.756660 containerd[1546]: 2024-12-13 01:57:21.738 [INFO][5275] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Dec 13 01:57:21.756660 containerd[1546]: 2024-12-13 01:57:21.738 [INFO][5275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" iface="eth0" netns="" Dec 13 01:57:21.756660 containerd[1546]: 2024-12-13 01:57:21.738 [INFO][5275] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Dec 13 01:57:21.756660 containerd[1546]: 2024-12-13 01:57:21.738 [INFO][5275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Dec 13 01:57:21.756660 containerd[1546]: 2024-12-13 01:57:21.750 [INFO][5281] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" HandleID="k8s-pod-network.52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Workload="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:21.756660 containerd[1546]: 2024-12-13 01:57:21.750 [INFO][5281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:21.756660 containerd[1546]: 2024-12-13 01:57:21.750 [INFO][5281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:21.756660 containerd[1546]: 2024-12-13 01:57:21.753 [WARNING][5281] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" HandleID="k8s-pod-network.52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Workload="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:21.756660 containerd[1546]: 2024-12-13 01:57:21.753 [INFO][5281] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" HandleID="k8s-pod-network.52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Workload="localhost-k8s-calico--apiserver--54b9d6d844--bx8kz-eth0" Dec 13 01:57:21.756660 containerd[1546]: 2024-12-13 01:57:21.754 [INFO][5281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:21.756660 containerd[1546]: 2024-12-13 01:57:21.755 [INFO][5275] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17" Dec 13 01:57:21.756965 containerd[1546]: time="2024-12-13T01:57:21.756682853Z" level=info msg="TearDown network for sandbox \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\" successfully" Dec 13 01:57:21.758649 containerd[1546]: time="2024-12-13T01:57:21.758629106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:57:21.758707 containerd[1546]: time="2024-12-13T01:57:21.758663417Z" level=info msg="RemovePodSandbox \"52bc80f81924625eaba85858b0580509446802d0c21353381fabea800008ef17\" returns successfully" Dec 13 01:57:21.759104 containerd[1546]: time="2024-12-13T01:57:21.758963731Z" level=info msg="StopPodSandbox for \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\"" Dec 13 01:57:21.799884 containerd[1546]: 2024-12-13 01:57:21.780 [WARNING][5299] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a6e8c90c-6e1c-4e5f-a197-06bf87bcca01", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea", Pod:"coredns-7db6d8ff4d-xpjth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e69d4c9afe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:21.799884 containerd[1546]: 2024-12-13 01:57:21.780 [INFO][5299] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Dec 13 01:57:21.799884 containerd[1546]: 2024-12-13 01:57:21.780 [INFO][5299] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" iface="eth0" netns="" Dec 13 01:57:21.799884 containerd[1546]: 2024-12-13 01:57:21.780 [INFO][5299] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Dec 13 01:57:21.799884 containerd[1546]: 2024-12-13 01:57:21.780 [INFO][5299] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Dec 13 01:57:21.799884 containerd[1546]: 2024-12-13 01:57:21.793 [INFO][5305] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" HandleID="k8s-pod-network.936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Workload="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:21.799884 containerd[1546]: 2024-12-13 01:57:21.793 [INFO][5305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:21.799884 containerd[1546]: 2024-12-13 01:57:21.793 [INFO][5305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:57:21.799884 containerd[1546]: 2024-12-13 01:57:21.797 [WARNING][5305] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" HandleID="k8s-pod-network.936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Workload="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:21.799884 containerd[1546]: 2024-12-13 01:57:21.797 [INFO][5305] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" HandleID="k8s-pod-network.936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Workload="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:21.799884 containerd[1546]: 2024-12-13 01:57:21.797 [INFO][5305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:21.799884 containerd[1546]: 2024-12-13 01:57:21.798 [INFO][5299] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Dec 13 01:57:21.799884 containerd[1546]: time="2024-12-13T01:57:21.799863047Z" level=info msg="TearDown network for sandbox \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\" successfully" Dec 13 01:57:21.800695 containerd[1546]: time="2024-12-13T01:57:21.799893213Z" level=info msg="StopPodSandbox for \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\" returns successfully" Dec 13 01:57:21.800695 containerd[1546]: time="2024-12-13T01:57:21.800308207Z" level=info msg="RemovePodSandbox for \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\"" Dec 13 01:57:21.800695 containerd[1546]: time="2024-12-13T01:57:21.800324096Z" level=info msg="Forcibly stopping sandbox \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\"" Dec 13 01:57:21.848129 containerd[1546]: 2024-12-13 01:57:21.827 [WARNING][5323] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a6e8c90c-6e1c-4e5f-a197-06bf87bcca01", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c55869569dd193ba41bfc2b5f7373c1c0e5413a4c8f0f55b1fd631933ab24bea", Pod:"coredns-7db6d8ff4d-xpjth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e69d4c9afe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:21.848129 containerd[1546]: 2024-12-13 01:57:21.827 [INFO][5323] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Dec 13 01:57:21.848129 containerd[1546]: 2024-12-13 01:57:21.827 [INFO][5323] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" iface="eth0" netns="" Dec 13 01:57:21.848129 containerd[1546]: 2024-12-13 01:57:21.827 [INFO][5323] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Dec 13 01:57:21.848129 containerd[1546]: 2024-12-13 01:57:21.827 [INFO][5323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Dec 13 01:57:21.848129 containerd[1546]: 2024-12-13 01:57:21.842 [INFO][5329] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" HandleID="k8s-pod-network.936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Workload="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:21.848129 containerd[1546]: 2024-12-13 01:57:21.842 [INFO][5329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:21.848129 containerd[1546]: 2024-12-13 01:57:21.842 [INFO][5329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:57:21.848129 containerd[1546]: 2024-12-13 01:57:21.845 [WARNING][5329] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" HandleID="k8s-pod-network.936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Workload="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:21.848129 containerd[1546]: 2024-12-13 01:57:21.845 [INFO][5329] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" HandleID="k8s-pod-network.936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Workload="localhost-k8s-coredns--7db6d8ff4d--xpjth-eth0" Dec 13 01:57:21.848129 containerd[1546]: 2024-12-13 01:57:21.846 [INFO][5329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:21.848129 containerd[1546]: 2024-12-13 01:57:21.847 [INFO][5323] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8" Dec 13 01:57:21.848649 containerd[1546]: time="2024-12-13T01:57:21.848166862Z" level=info msg="TearDown network for sandbox \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\" successfully" Dec 13 01:57:21.850004 containerd[1546]: time="2024-12-13T01:57:21.849987472Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:57:21.850049 containerd[1546]: time="2024-12-13T01:57:21.850031658Z" level=info msg="RemovePodSandbox \"936caadec58f42dae0fde99157d5f82f5c1895abf84daeb06581db567394f9c8\" returns successfully" Dec 13 01:57:21.850428 containerd[1546]: time="2024-12-13T01:57:21.850357457Z" level=info msg="StopPodSandbox for \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\"" Dec 13 01:57:21.890003 containerd[1546]: 2024-12-13 01:57:21.870 [WARNING][5347] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--httmm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5b46a7a9-9e2f-4592-85a2-7b01c18de070", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677", Pod:"coredns-7db6d8ff4d-httmm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85fe66bcc4f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:21.890003 containerd[1546]: 2024-12-13 01:57:21.870 [INFO][5347] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Dec 13 01:57:21.890003 containerd[1546]: 2024-12-13 01:57:21.870 [INFO][5347] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" iface="eth0" netns="" Dec 13 01:57:21.890003 containerd[1546]: 2024-12-13 01:57:21.870 [INFO][5347] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Dec 13 01:57:21.890003 containerd[1546]: 2024-12-13 01:57:21.871 [INFO][5347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Dec 13 01:57:21.890003 containerd[1546]: 2024-12-13 01:57:21.884 [INFO][5353] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" HandleID="k8s-pod-network.34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Workload="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:21.890003 containerd[1546]: 2024-12-13 01:57:21.884 [INFO][5353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:21.890003 containerd[1546]: 2024-12-13 01:57:21.884 [INFO][5353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:57:21.890003 containerd[1546]: 2024-12-13 01:57:21.887 [WARNING][5353] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" HandleID="k8s-pod-network.34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Workload="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:21.890003 containerd[1546]: 2024-12-13 01:57:21.887 [INFO][5353] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" HandleID="k8s-pod-network.34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Workload="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:21.890003 containerd[1546]: 2024-12-13 01:57:21.888 [INFO][5353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:21.890003 containerd[1546]: 2024-12-13 01:57:21.888 [INFO][5347] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Dec 13 01:57:21.890424 containerd[1546]: time="2024-12-13T01:57:21.890339864Z" level=info msg="TearDown network for sandbox \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\" successfully" Dec 13 01:57:21.890424 containerd[1546]: time="2024-12-13T01:57:21.890355974Z" level=info msg="StopPodSandbox for \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\" returns successfully" Dec 13 01:57:21.890858 containerd[1546]: time="2024-12-13T01:57:21.890693247Z" level=info msg="RemovePodSandbox for \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\"" Dec 13 01:57:21.890858 containerd[1546]: time="2024-12-13T01:57:21.890709090Z" level=info msg="Forcibly stopping sandbox \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\"" Dec 13 01:57:21.931527 containerd[1546]: 2024-12-13 01:57:21.911 [WARNING][5371] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--httmm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5b46a7a9-9e2f-4592-85a2-7b01c18de070", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d0ddefd2f157521f6d93402683f6c86f3d7e0018fd87e4e8a60d34414a0c677", Pod:"coredns-7db6d8ff4d-httmm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85fe66bcc4f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:21.931527 containerd[1546]: 2024-12-13 01:57:21.911 [INFO][5371] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Dec 13 01:57:21.931527 containerd[1546]: 2024-12-13 01:57:21.911 [INFO][5371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" iface="eth0" netns="" Dec 13 01:57:21.931527 containerd[1546]: 2024-12-13 01:57:21.911 [INFO][5371] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Dec 13 01:57:21.931527 containerd[1546]: 2024-12-13 01:57:21.911 [INFO][5371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Dec 13 01:57:21.931527 containerd[1546]: 2024-12-13 01:57:21.925 [INFO][5378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" HandleID="k8s-pod-network.34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Workload="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:21.931527 containerd[1546]: 2024-12-13 01:57:21.925 [INFO][5378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:21.931527 containerd[1546]: 2024-12-13 01:57:21.925 [INFO][5378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:57:21.931527 containerd[1546]: 2024-12-13 01:57:21.929 [WARNING][5378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" HandleID="k8s-pod-network.34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Workload="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:21.931527 containerd[1546]: 2024-12-13 01:57:21.929 [INFO][5378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" HandleID="k8s-pod-network.34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Workload="localhost-k8s-coredns--7db6d8ff4d--httmm-eth0" Dec 13 01:57:21.931527 containerd[1546]: 2024-12-13 01:57:21.929 [INFO][5378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:21.931527 containerd[1546]: 2024-12-13 01:57:21.930 [INFO][5371] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823" Dec 13 01:57:21.931527 containerd[1546]: time="2024-12-13T01:57:21.931521498Z" level=info msg="TearDown network for sandbox \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\" successfully" Dec 13 01:57:21.933534 containerd[1546]: time="2024-12-13T01:57:21.933517911Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:57:21.933795 containerd[1546]: time="2024-12-13T01:57:21.933549147Z" level=info msg="RemovePodSandbox \"34ff5161768343f96599d56cb3e5418faa4f7383a2ff34dec14bed2b8b529823\" returns successfully" Dec 13 01:57:23.909352 systemd[1]: run-containerd-runc-k8s.io-e32f09fee0b125b6665d2060c96d37d5f5e516222d18d084a6d43180e31bb254-runc.VNrbhA.mount: Deactivated successfully. Dec 13 01:57:28.093093 systemd[1]: Started sshd@7-139.178.70.106:22-139.178.89.65:33460.service - OpenSSH per-connection server daemon (139.178.89.65:33460). Dec 13 01:57:28.162192 sshd[5438]: Accepted publickey for core from 139.178.89.65 port 33460 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:57:28.164099 sshd[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:28.167880 systemd-logind[1521]: New session 10 of user core. Dec 13 01:57:28.173483 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:57:28.618890 sshd[5438]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:28.621827 systemd[1]: sshd@7-139.178.70.106:22-139.178.89.65:33460.service: Deactivated successfully. Dec 13 01:57:28.622957 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:57:28.623520 systemd-logind[1521]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:57:28.624062 systemd-logind[1521]: Removed session 10. Dec 13 01:57:33.628051 systemd[1]: Started sshd@8-139.178.70.106:22-139.178.89.65:33466.service - OpenSSH per-connection server daemon (139.178.89.65:33466). Dec 13 01:57:33.720821 sshd[5456]: Accepted publickey for core from 139.178.89.65 port 33466 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:57:33.721822 sshd[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:33.725020 systemd-logind[1521]: New session 11 of user core. 
Dec 13 01:57:33.737501 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:57:33.901768 sshd[5456]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:33.904365 systemd[1]: sshd@8-139.178.70.106:22-139.178.89.65:33466.service: Deactivated successfully. Dec 13 01:57:33.907002 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:57:33.907849 systemd-logind[1521]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:57:33.908713 systemd-logind[1521]: Removed session 11. Dec 13 01:57:38.910403 systemd[1]: Started sshd@9-139.178.70.106:22-139.178.89.65:36764.service - OpenSSH per-connection server daemon (139.178.89.65:36764). Dec 13 01:57:38.964393 sshd[5473]: Accepted publickey for core from 139.178.89.65 port 36764 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:57:38.965312 sshd[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:38.967649 systemd-logind[1521]: New session 12 of user core. Dec 13 01:57:38.982506 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:57:39.083119 sshd[5473]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:39.087857 systemd[1]: sshd@9-139.178.70.106:22-139.178.89.65:36764.service: Deactivated successfully. Dec 13 01:57:39.089229 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:57:39.090587 systemd-logind[1521]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:57:39.092125 systemd[1]: Started sshd@10-139.178.70.106:22-139.178.89.65:36768.service - OpenSSH per-connection server daemon (139.178.89.65:36768). Dec 13 01:57:39.092811 systemd-logind[1521]: Removed session 12. Dec 13 01:57:39.131734 sshd[5486]: Accepted publickey for core from 139.178.89.65 port 36768 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:57:39.132527 sshd[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:39.134917 systemd-logind[1521]: New session 13 of user core. Dec 13 01:57:39.146567 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:57:39.286101 sshd[5486]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:39.292373 systemd[1]: sshd@10-139.178.70.106:22-139.178.89.65:36768.service: Deactivated successfully. Dec 13 01:57:39.293277 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:57:39.294108 systemd-logind[1521]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:57:39.300239 systemd[1]: Started sshd@11-139.178.70.106:22-139.178.89.65:36782.service - OpenSSH per-connection server daemon (139.178.89.65:36782). Dec 13 01:57:39.302457 systemd-logind[1521]: Removed session 13. Dec 13 01:57:39.341710 sshd[5496]: Accepted publickey for core from 139.178.89.65 port 36782 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:57:39.342549 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:39.345423 systemd-logind[1521]: New session 14 of user core. Dec 13 01:57:39.349481 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:57:39.452068 sshd[5496]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:39.453798 systemd-logind[1521]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:57:39.453983 systemd[1]: sshd@11-139.178.70.106:22-139.178.89.65:36782.service: Deactivated successfully. 
Dec 13 01:57:39.455002 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:57:39.456098 systemd-logind[1521]: Removed session 14. Dec 13 01:57:44.460625 systemd[1]: Started sshd@12-139.178.70.106:22-139.178.89.65:36796.service - OpenSSH per-connection server daemon (139.178.89.65:36796). Dec 13 01:57:44.491177 sshd[5520]: Accepted publickey for core from 139.178.89.65 port 36796 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:57:44.491912 sshd[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:44.494206 systemd-logind[1521]: New session 15 of user core. Dec 13 01:57:44.500493 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:57:44.613277 sshd[5520]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:44.615321 systemd[1]: sshd@12-139.178.70.106:22-139.178.89.65:36796.service: Deactivated successfully. Dec 13 01:57:44.616363 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:57:44.616791 systemd-logind[1521]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:57:44.617498 systemd-logind[1521]: Removed session 15. Dec 13 01:57:49.622356 systemd[1]: Started sshd@13-139.178.70.106:22-139.178.89.65:41932.service - OpenSSH per-connection server daemon (139.178.89.65:41932). Dec 13 01:57:49.674148 sshd[5535]: Accepted publickey for core from 139.178.89.65 port 41932 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:57:49.675179 sshd[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:49.678750 systemd-logind[1521]: New session 16 of user core. Dec 13 01:57:49.683499 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:57:49.798978 sshd[5535]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:49.801345 systemd[1]: sshd@13-139.178.70.106:22-139.178.89.65:41932.service: Deactivated successfully. Dec 13 01:57:49.801563 systemd-logind[1521]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:57:49.802745 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:57:49.803851 systemd-logind[1521]: Removed session 16. Dec 13 01:57:54.813282 systemd[1]: Started sshd@14-139.178.70.106:22-139.178.89.65:41942.service - OpenSSH per-connection server daemon (139.178.89.65:41942). Dec 13 01:57:54.889470 sshd[5589]: Accepted publickey for core from 139.178.89.65 port 41942 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:57:54.890033 sshd[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:54.893233 systemd-logind[1521]: New session 17 of user core. Dec 13 01:57:54.904520 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:57:55.043201 sshd[5589]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:55.046092 systemd[1]: sshd@14-139.178.70.106:22-139.178.89.65:41942.service: Deactivated successfully. Dec 13 01:57:55.048065 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:57:55.048692 systemd-logind[1521]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:57:55.049931 systemd-logind[1521]: Removed session 17. Dec 13 01:58:00.050770 systemd[1]: Started sshd@15-139.178.70.106:22-139.178.89.65:41042.service - OpenSSH per-connection server daemon (139.178.89.65:41042). 
Dec 13 01:58:00.125138 sshd[5602]: Accepted publickey for core from 139.178.89.65 port 41042 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:58:00.126352 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:58:00.129499 systemd-logind[1521]: New session 18 of user core. Dec 13 01:58:00.133486 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:58:00.252338 sshd[5602]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:00.257610 systemd[1]: sshd@15-139.178.70.106:22-139.178.89.65:41042.service: Deactivated successfully. Dec 13 01:58:00.259154 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:58:00.260070 systemd-logind[1521]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:58:00.264552 systemd[1]: Started sshd@16-139.178.70.106:22-139.178.89.65:41046.service - OpenSSH per-connection server daemon (139.178.89.65:41046). Dec 13 01:58:00.266996 systemd-logind[1521]: Removed session 18. Dec 13 01:58:00.308306 sshd[5615]: Accepted publickey for core from 139.178.89.65 port 41046 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:58:00.309202 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:58:00.312719 systemd-logind[1521]: New session 19 of user core. Dec 13 01:58:00.318500 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:58:00.680095 sshd[5615]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:00.685994 systemd[1]: sshd@16-139.178.70.106:22-139.178.89.65:41046.service: Deactivated successfully. Dec 13 01:58:00.687103 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:58:00.687946 systemd-logind[1521]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:58:00.692786 systemd[1]: Started sshd@17-139.178.70.106:22-139.178.89.65:41050.service - OpenSSH per-connection server daemon (139.178.89.65:41050). Dec 13 01:58:00.693292 systemd-logind[1521]: Removed session 19. Dec 13 01:58:00.808928 sshd[5626]: Accepted publickey for core from 139.178.89.65 port 41050 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:58:00.810032 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:58:00.814047 systemd-logind[1521]: New session 20 of user core. Dec 13 01:58:00.818499 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:58:02.603343 systemd[1]: Started sshd@18-139.178.70.106:22-139.178.89.65:41052.service - OpenSSH per-connection server daemon (139.178.89.65:41052). Dec 13 01:58:02.608026 sshd[5626]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:02.629370 systemd[1]: sshd@17-139.178.70.106:22-139.178.89.65:41050.service: Deactivated successfully. Dec 13 01:58:02.630483 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:58:02.631434 systemd-logind[1521]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:58:02.632667 systemd-logind[1521]: Removed session 20. Dec 13 01:58:02.864198 sshd[5660]: Accepted publickey for core from 139.178.89.65 port 41052 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:58:02.863993 sshd[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:58:02.868423 systemd-logind[1521]: New session 21 of user core. Dec 13 01:58:02.874015 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 13 01:58:03.293533 sshd[5660]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:03.299192 systemd[1]: sshd@18-139.178.70.106:22-139.178.89.65:41052.service: Deactivated successfully. Dec 13 01:58:03.301278 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:58:03.302711 systemd-logind[1521]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:58:03.310505 systemd[1]: Started sshd@19-139.178.70.106:22-139.178.89.65:41064.service - OpenSSH per-connection server daemon (139.178.89.65:41064). Dec 13 01:58:03.311902 systemd-logind[1521]: Removed session 21. Dec 13 01:58:03.353833 sshd[5677]: Accepted publickey for core from 139.178.89.65 port 41064 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:58:03.354763 sshd[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:58:03.357200 systemd-logind[1521]: New session 22 of user core. Dec 13 01:58:03.362467 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:58:03.478197 sshd[5677]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:03.480997 systemd[1]: sshd@19-139.178.70.106:22-139.178.89.65:41064.service: Deactivated successfully. Dec 13 01:58:03.482133 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:58:03.482605 systemd-logind[1521]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:58:03.483144 systemd-logind[1521]: Removed session 22. Dec 13 01:58:08.486546 systemd[1]: Started sshd@20-139.178.70.106:22-139.178.89.65:41010.service - OpenSSH per-connection server daemon (139.178.89.65:41010). Dec 13 01:58:08.515441 sshd[5696]: Accepted publickey for core from 139.178.89.65 port 41010 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:58:08.516146 sshd[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:58:08.518589 systemd-logind[1521]: New session 23 of user core. Dec 13 01:58:08.524484 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:58:08.628545 sshd[5696]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:08.630476 systemd-logind[1521]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:58:08.630573 systemd[1]: sshd@20-139.178.70.106:22-139.178.89.65:41010.service: Deactivated successfully. Dec 13 01:58:08.631623 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:58:08.632622 systemd-logind[1521]: Removed session 23. Dec 13 01:58:13.638842 systemd[1]: Started sshd@21-139.178.70.106:22-139.178.89.65:41026.service - OpenSSH per-connection server daemon (139.178.89.65:41026). Dec 13 01:58:13.747141 sshd[5709]: Accepted publickey for core from 139.178.89.65 port 41026 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:58:13.747986 sshd[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:58:13.750851 systemd-logind[1521]: New session 24 of user core. Dec 13 01:58:13.757480 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:58:13.944800 sshd[5709]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:13.947168 systemd[1]: sshd@21-139.178.70.106:22-139.178.89.65:41026.service: Deactivated successfully. Dec 13 01:58:13.948772 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:58:13.949270 systemd-logind[1521]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:58:13.949859 systemd-logind[1521]: Removed session 24. 
Dec 13 01:58:18.954626 systemd[1]: Started sshd@22-139.178.70.106:22-139.178.89.65:51566.service - OpenSSH per-connection server daemon (139.178.89.65:51566). Dec 13 01:58:18.993318 sshd[5743]: Accepted publickey for core from 139.178.89.65 port 51566 ssh2: RSA SHA256:aIxsfnAZV9el3tBC4kYppWPzJqH3H1LgymJV7CJJaCY Dec 13 01:58:18.994252 sshd[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:58:18.996975 systemd-logind[1521]: New session 25 of user core. Dec 13 01:58:19.000487 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:58:19.171819 sshd[5743]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:19.174536 systemd[1]: sshd@22-139.178.70.106:22-139.178.89.65:51566.service: Deactivated successfully. Dec 13 01:58:19.176026 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:58:19.176610 systemd-logind[1521]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:58:19.177220 systemd-logind[1521]: Removed session 25.