Feb 13 20:22:52.742160 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025 Feb 13 20:22:52.742178 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:22:52.742184 kernel: Disabled fast string operations Feb 13 20:22:52.742188 kernel: BIOS-provided physical RAM map: Feb 13 20:22:52.742192 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Feb 13 20:22:52.742206 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Feb 13 20:22:52.742213 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Feb 13 20:22:52.742218 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Feb 13 20:22:52.742222 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Feb 13 20:22:52.742226 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Feb 13 20:22:52.742230 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Feb 13 20:22:52.742234 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Feb 13 20:22:52.742238 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Feb 13 20:22:52.742243 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Feb 13 20:22:52.742249 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Feb 13 20:22:52.742254 kernel: NX (Execute Disable) protection: active Feb 13 20:22:52.742259 kernel: APIC: Static calls initialized Feb 13 20:22:52.742264 kernel: 
SMBIOS 2.7 present. Feb 13 20:22:52.742269 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Feb 13 20:22:52.742273 kernel: vmware: hypercall mode: 0x00 Feb 13 20:22:52.742278 kernel: Hypervisor detected: VMware Feb 13 20:22:52.742283 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Feb 13 20:22:52.742288 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Feb 13 20:22:52.742293 kernel: vmware: using clock offset of 2633914133 ns Feb 13 20:22:52.742298 kernel: tsc: Detected 3408.000 MHz processor Feb 13 20:22:52.742303 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 20:22:52.742308 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 20:22:52.742313 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Feb 13 20:22:52.742318 kernel: total RAM covered: 3072M Feb 13 20:22:52.742322 kernel: Found optimal setting for mtrr clean up Feb 13 20:22:52.742328 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Feb 13 20:22:52.742334 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Feb 13 20:22:52.742339 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 20:22:52.742344 kernel: Using GB pages for direct mapping Feb 13 20:22:52.742349 kernel: ACPI: Early table checksum verification disabled Feb 13 20:22:52.742353 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Feb 13 20:22:52.742358 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Feb 13 20:22:52.742363 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Feb 13 20:22:52.742368 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Feb 13 20:22:52.742373 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Feb 13 20:22:52.742381 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Feb 13 20:22:52.742386 kernel: ACPI: BOOT 
0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Feb 13 20:22:52.742391 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Feb 13 20:22:52.742396 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Feb 13 20:22:52.742402 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Feb 13 20:22:52.742408 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Feb 13 20:22:52.742413 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Feb 13 20:22:52.742418 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Feb 13 20:22:52.742423 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Feb 13 20:22:52.742428 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Feb 13 20:22:52.742433 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Feb 13 20:22:52.742438 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Feb 13 20:22:52.742444 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Feb 13 20:22:52.742449 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Feb 13 20:22:52.742454 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Feb 13 20:22:52.742460 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Feb 13 20:22:52.742465 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Feb 13 20:22:52.742470 kernel: system APIC only can use physical flat Feb 13 20:22:52.742475 kernel: APIC: Switched APIC routing to: physical flat Feb 13 20:22:52.742480 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 20:22:52.742485 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Feb 13 20:22:52.742490 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Feb 13 20:22:52.742495 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Feb 
13 20:22:52.742500 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Feb 13 20:22:52.742507 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Feb 13 20:22:52.742512 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Feb 13 20:22:52.742517 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Feb 13 20:22:52.742522 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Feb 13 20:22:52.742527 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Feb 13 20:22:52.742532 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Feb 13 20:22:52.742537 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Feb 13 20:22:52.742542 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Feb 13 20:22:52.742547 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Feb 13 20:22:52.742552 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Feb 13 20:22:52.742557 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Feb 13 20:22:52.742563 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Feb 13 20:22:52.742568 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Feb 13 20:22:52.742573 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Feb 13 20:22:52.742579 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Feb 13 20:22:52.742584 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Feb 13 20:22:52.742588 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Feb 13 20:22:52.742594 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Feb 13 20:22:52.742599 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Feb 13 20:22:52.742604 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Feb 13 20:22:52.742609 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Feb 13 20:22:52.742615 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Feb 13 20:22:52.742620 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Feb 13 20:22:52.742625 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Feb 13 20:22:52.742631 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Feb 13 20:22:52.742636 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Feb 13 20:22:52.742641 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Feb 13 20:22:52.742646 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Feb 13 20:22:52.742651 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Feb 13 20:22:52.742656 
kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Feb 13 20:22:52.742661 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Feb 13 20:22:52.742667 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Feb 13 20:22:52.742672 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Feb 13 20:22:52.742677 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Feb 13 20:22:52.742682 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Feb 13 20:22:52.742687 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Feb 13 20:22:52.742692 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Feb 13 20:22:52.742697 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Feb 13 20:22:52.742702 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Feb 13 20:22:52.742707 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Feb 13 20:22:52.742712 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Feb 13 20:22:52.742719 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Feb 13 20:22:52.742723 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Feb 13 20:22:52.742728 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Feb 13 20:22:52.742733 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Feb 13 20:22:52.742738 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Feb 13 20:22:52.742743 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Feb 13 20:22:52.742748 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Feb 13 20:22:52.742754 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Feb 13 20:22:52.742758 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Feb 13 20:22:52.742764 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Feb 13 20:22:52.742770 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Feb 13 20:22:52.742775 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Feb 13 20:22:52.742780 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Feb 13 20:22:52.742790 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Feb 13 20:22:52.742796 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Feb 13 20:22:52.742802 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Feb 13 20:22:52.742807 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Feb 13 20:22:52.742812 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Feb 13 20:22:52.742818 kernel: SRAT: PXM 0 
-> APIC 0x80 -> Node 0 Feb 13 20:22:52.742824 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Feb 13 20:22:52.742829 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Feb 13 20:22:52.742845 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Feb 13 20:22:52.742852 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Feb 13 20:22:52.742860 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Feb 13 20:22:52.742869 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Feb 13 20:22:52.742874 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Feb 13 20:22:52.742880 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Feb 13 20:22:52.742885 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Feb 13 20:22:52.742891 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Feb 13 20:22:52.742898 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Feb 13 20:22:52.742907 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Feb 13 20:22:52.742913 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Feb 13 20:22:52.742919 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Feb 13 20:22:52.742924 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Feb 13 20:22:52.742930 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Feb 13 20:22:52.742935 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Feb 13 20:22:52.742940 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Feb 13 20:22:52.742946 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Feb 13 20:22:52.742951 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Feb 13 20:22:52.742958 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Feb 13 20:22:52.742963 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Feb 13 20:22:52.742969 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Feb 13 20:22:52.742974 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Feb 13 20:22:52.742979 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Feb 13 20:22:52.742985 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Feb 13 20:22:52.742990 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Feb 13 20:22:52.742995 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Feb 13 20:22:52.743001 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Feb 13 20:22:52.743006 kernel: SRAT: PXM 0 -> APIC 0xbc -> 
Node 0 Feb 13 20:22:52.743012 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Feb 13 20:22:52.743018 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Feb 13 20:22:52.743023 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Feb 13 20:22:52.743029 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Feb 13 20:22:52.743034 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Feb 13 20:22:52.743039 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Feb 13 20:22:52.743044 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Feb 13 20:22:52.743049 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Feb 13 20:22:52.743055 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Feb 13 20:22:52.743060 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Feb 13 20:22:52.743067 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Feb 13 20:22:52.743072 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Feb 13 20:22:52.743077 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Feb 13 20:22:52.743083 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Feb 13 20:22:52.743088 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Feb 13 20:22:52.743093 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Feb 13 20:22:52.743099 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Feb 13 20:22:52.743104 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Feb 13 20:22:52.743109 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Feb 13 20:22:52.743114 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Feb 13 20:22:52.743120 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Feb 13 20:22:52.743126 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Feb 13 20:22:52.743132 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Feb 13 20:22:52.743137 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Feb 13 20:22:52.743142 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Feb 13 20:22:52.743147 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Feb 13 20:22:52.743153 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Feb 13 20:22:52.743159 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Feb 13 20:22:52.743164 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Feb 13 20:22:52.743169 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Feb 13 
20:22:52.743175 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Feb 13 20:22:52.743181 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Feb 13 20:22:52.743187 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Feb 13 20:22:52.743192 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 13 20:22:52.745311 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Feb 13 20:22:52.745323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Feb 13 20:22:52.745329 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Feb 13 20:22:52.745335 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Feb 13 20:22:52.745340 kernel: Zone ranges: Feb 13 20:22:52.745346 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 20:22:52.745355 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Feb 13 20:22:52.745360 kernel: Normal empty Feb 13 20:22:52.745366 kernel: Movable zone start for each node Feb 13 20:22:52.745371 kernel: Early memory node ranges Feb 13 20:22:52.745377 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Feb 13 20:22:52.745382 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Feb 13 20:22:52.745388 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Feb 13 20:22:52.745394 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Feb 13 20:22:52.745399 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 20:22:52.745405 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Feb 13 20:22:52.745411 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Feb 13 20:22:52.745417 kernel: ACPI: PM-Timer IO Port: 0x1008 Feb 13 20:22:52.745422 kernel: system APIC only can use physical flat Feb 13 20:22:52.745428 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Feb 13 20:22:52.745434 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 13 20:22:52.745439 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge 
lint[0x1]) Feb 13 20:22:52.745444 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 13 20:22:52.745450 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 13 20:22:52.745455 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Feb 13 20:22:52.745462 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 13 20:22:52.745467 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Feb 13 20:22:52.745473 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 13 20:22:52.745478 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Feb 13 20:22:52.745483 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Feb 13 20:22:52.745489 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 13 20:22:52.745494 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 13 20:22:52.745499 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 13 20:22:52.745505 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 13 20:22:52.745510 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 13 20:22:52.745517 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Feb 13 20:22:52.745522 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Feb 13 20:22:52.745527 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Feb 13 20:22:52.745533 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Feb 13 20:22:52.745538 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Feb 13 20:22:52.745543 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Feb 13 20:22:52.745549 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Feb 13 20:22:52.745554 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Feb 13 20:22:52.745560 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Feb 13 20:22:52.745565 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Feb 13 20:22:52.745572 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge 
lint[0x1]) Feb 13 20:22:52.745577 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Feb 13 20:22:52.745582 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Feb 13 20:22:52.745588 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Feb 13 20:22:52.745594 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Feb 13 20:22:52.745599 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Feb 13 20:22:52.745604 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Feb 13 20:22:52.745610 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Feb 13 20:22:52.745615 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Feb 13 20:22:52.745621 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Feb 13 20:22:52.745627 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Feb 13 20:22:52.745633 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Feb 13 20:22:52.745638 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Feb 13 20:22:52.745644 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Feb 13 20:22:52.745649 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Feb 13 20:22:52.745654 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Feb 13 20:22:52.745660 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Feb 13 20:22:52.745665 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Feb 13 20:22:52.745670 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Feb 13 20:22:52.745677 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Feb 13 20:22:52.745682 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Feb 13 20:22:52.745688 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Feb 13 20:22:52.745693 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Feb 13 20:22:52.745698 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Feb 13 20:22:52.745704 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge 
lint[0x1]) Feb 13 20:22:52.745709 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Feb 13 20:22:52.745714 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Feb 13 20:22:52.745720 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Feb 13 20:22:52.745727 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Feb 13 20:22:52.745732 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Feb 13 20:22:52.745738 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Feb 13 20:22:52.745743 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Feb 13 20:22:52.745749 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Feb 13 20:22:52.745754 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Feb 13 20:22:52.745760 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Feb 13 20:22:52.745765 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Feb 13 20:22:52.745770 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Feb 13 20:22:52.745776 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Feb 13 20:22:52.745782 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Feb 13 20:22:52.745787 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Feb 13 20:22:52.745793 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Feb 13 20:22:52.745798 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Feb 13 20:22:52.745803 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Feb 13 20:22:52.745809 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Feb 13 20:22:52.745814 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Feb 13 20:22:52.745820 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Feb 13 20:22:52.745825 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Feb 13 20:22:52.745830 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Feb 13 20:22:52.745837 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge 
lint[0x1]) Feb 13 20:22:52.745842 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Feb 13 20:22:52.745847 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Feb 13 20:22:52.745853 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Feb 13 20:22:52.745858 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Feb 13 20:22:52.745864 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Feb 13 20:22:52.745869 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Feb 13 20:22:52.745874 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Feb 13 20:22:52.745880 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Feb 13 20:22:52.745886 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Feb 13 20:22:52.745891 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Feb 13 20:22:52.745897 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Feb 13 20:22:52.745902 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Feb 13 20:22:52.745908 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Feb 13 20:22:52.745913 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Feb 13 20:22:52.745918 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Feb 13 20:22:52.745924 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Feb 13 20:22:52.745929 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Feb 13 20:22:52.745934 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Feb 13 20:22:52.745941 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Feb 13 20:22:52.745946 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Feb 13 20:22:52.745951 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Feb 13 20:22:52.745957 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Feb 13 20:22:52.745962 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Feb 13 20:22:52.745968 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge 
lint[0x1]) Feb 13 20:22:52.745973 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Feb 13 20:22:52.745978 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Feb 13 20:22:52.745984 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Feb 13 20:22:52.745989 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Feb 13 20:22:52.745995 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Feb 13 20:22:52.746000 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Feb 13 20:22:52.746006 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Feb 13 20:22:52.746011 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Feb 13 20:22:52.746016 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Feb 13 20:22:52.746022 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Feb 13 20:22:52.746027 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Feb 13 20:22:52.746032 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Feb 13 20:22:52.746038 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Feb 13 20:22:52.746043 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Feb 13 20:22:52.746050 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Feb 13 20:22:52.746055 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Feb 13 20:22:52.746060 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Feb 13 20:22:52.746065 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Feb 13 20:22:52.746071 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Feb 13 20:22:52.746076 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Feb 13 20:22:52.746082 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Feb 13 20:22:52.746087 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Feb 13 20:22:52.746093 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Feb 13 20:22:52.746099 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge 
lint[0x1]) Feb 13 20:22:52.746104 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Feb 13 20:22:52.746110 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Feb 13 20:22:52.746115 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Feb 13 20:22:52.746121 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Feb 13 20:22:52.746126 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Feb 13 20:22:52.746131 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Feb 13 20:22:52.746137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Feb 13 20:22:52.746142 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 20:22:52.746148 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Feb 13 20:22:52.746154 kernel: TSC deadline timer available Feb 13 20:22:52.746160 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Feb 13 20:22:52.746165 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Feb 13 20:22:52.746171 kernel: Booting paravirtualized kernel on VMware hypervisor Feb 13 20:22:52.746176 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 20:22:52.746182 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Feb 13 20:22:52.746187 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Feb 13 20:22:52.746193 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Feb 13 20:22:52.748220 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Feb 13 20:22:52.748230 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Feb 13 20:22:52.748236 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Feb 13 20:22:52.748241 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Feb 13 20:22:52.748247 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Feb 13 20:22:52.748261 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Feb 13 20:22:52.748268 
kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Feb 13 20:22:52.748275 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Feb 13 20:22:52.748280 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Feb 13 20:22:52.748287 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Feb 13 20:22:52.748293 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Feb 13 20:22:52.748298 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Feb 13 20:22:52.748304 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Feb 13 20:22:52.748310 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Feb 13 20:22:52.748316 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Feb 13 20:22:52.748322 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Feb 13 20:22:52.748328 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:22:52.748336 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 13 20:22:52.748342 kernel: random: crng init done Feb 13 20:22:52.748348 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Feb 13 20:22:52.748354 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Feb 13 20:22:52.748360 kernel: printk: log_buf_len min size: 262144 bytes Feb 13 20:22:52.748365 kernel: printk: log_buf_len: 1048576 bytes Feb 13 20:22:52.748372 kernel: printk: early log buf free: 239648(91%) Feb 13 20:22:52.748378 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 20:22:52.748383 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 20:22:52.748390 kernel: Fallback order for Node 0: 0 Feb 13 20:22:52.748396 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Feb 13 20:22:52.748402 kernel: Policy zone: DMA32 Feb 13 20:22:52.748408 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 20:22:52.748415 kernel: Memory: 1936376K/2096628K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 159992K reserved, 0K cma-reserved) Feb 13 20:22:52.748422 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Feb 13 20:22:52.748429 kernel: ftrace: allocating 37921 entries in 149 pages Feb 13 20:22:52.748435 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 20:22:52.748442 kernel: Dynamic Preempt: voluntary Feb 13 20:22:52.748448 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 20:22:52.748455 kernel: rcu: RCU event tracing is enabled. Feb 13 20:22:52.748461 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Feb 13 20:22:52.748467 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 20:22:52.748472 kernel: Rude variant of Tasks RCU enabled. Feb 13 20:22:52.748478 kernel: Tracing variant of Tasks RCU enabled. Feb 13 20:22:52.748485 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 20:22:52.748491 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Feb 13 20:22:52.748497 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Feb 13 20:22:52.748503 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Feb 13 20:22:52.748509 kernel: Console: colour VGA+ 80x25 Feb 13 20:22:52.748514 kernel: printk: console [tty0] enabled Feb 13 20:22:52.748520 kernel: printk: console [ttyS0] enabled Feb 13 20:22:52.748526 kernel: ACPI: Core revision 20230628 Feb 13 20:22:52.748532 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Feb 13 20:22:52.748540 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 20:22:52.748546 kernel: x2apic enabled Feb 13 20:22:52.748552 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 20:22:52.748558 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 20:22:52.748564 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Feb 13 20:22:52.748570 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Feb 13 20:22:52.748576 kernel: Disabled fast string operations Feb 13 20:22:52.748582 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 20:22:52.748588 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 20:22:52.748595 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 20:22:52.748601 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Feb 13 20:22:52.748607 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Feb 13 20:22:52.748613 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Feb 13 20:22:52.748619 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 20:22:52.748625 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 13 20:22:52.748631 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 13 20:22:52.748637 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 20:22:52.748643 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 20:22:52.748650 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 20:22:52.748656 kernel: SRBDS: Unknown: Dependent on hypervisor status Feb 13 20:22:52.748662 kernel: GDS: Unknown: Dependent on hypervisor status Feb 13 20:22:52.748668 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 20:22:52.748674 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 20:22:52.748680 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 20:22:52.748685 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 20:22:52.748691 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Feb 13 20:22:52.748697 kernel: Freeing SMP alternatives memory: 32K Feb 13 20:22:52.748704 kernel: pid_max: default: 131072 minimum: 1024 Feb 13 20:22:52.748710 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 20:22:52.748716 kernel: landlock: Up and running. Feb 13 20:22:52.748722 kernel: SELinux: Initializing. Feb 13 20:22:52.748727 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 20:22:52.748734 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 20:22:52.748739 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 13 20:22:52.748745 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Feb 13 20:22:52.748752 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Feb 13 20:22:52.748759 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Feb 13 20:22:52.748765 kernel: Performance Events: Skylake events, core PMU driver. Feb 13 20:22:52.748771 kernel: core: CPUID marked event: 'cpu cycles' unavailable Feb 13 20:22:52.748778 kernel: core: CPUID marked event: 'instructions' unavailable Feb 13 20:22:52.748784 kernel: core: CPUID marked event: 'bus cycles' unavailable Feb 13 20:22:52.748789 kernel: core: CPUID marked event: 'cache references' unavailable Feb 13 20:22:52.748795 kernel: core: CPUID marked event: 'cache misses' unavailable Feb 13 20:22:52.748802 kernel: core: CPUID marked event: 'branch instructions' unavailable Feb 13 20:22:52.748808 kernel: core: CPUID marked event: 'branch misses' unavailable Feb 13 20:22:52.748814 kernel: ... version: 1 Feb 13 20:22:52.748820 kernel: ... bit width: 48 Feb 13 20:22:52.748826 kernel: ... generic registers: 4 Feb 13 20:22:52.748832 kernel: ... value mask: 0000ffffffffffff Feb 13 20:22:52.748838 kernel: ... 
max period: 000000007fffffff Feb 13 20:22:52.748843 kernel: ... fixed-purpose events: 0 Feb 13 20:22:52.748849 kernel: ... event mask: 000000000000000f Feb 13 20:22:52.748855 kernel: signal: max sigframe size: 1776 Feb 13 20:22:52.748862 kernel: rcu: Hierarchical SRCU implementation. Feb 13 20:22:52.748868 kernel: rcu: Max phase no-delay instances is 400. Feb 13 20:22:52.748874 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 20:22:52.748880 kernel: smp: Bringing up secondary CPUs ... Feb 13 20:22:52.748886 kernel: smpboot: x86: Booting SMP configuration: Feb 13 20:22:52.748892 kernel: .... node #0, CPUs: #1 Feb 13 20:22:52.748898 kernel: Disabled fast string operations Feb 13 20:22:52.748903 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Feb 13 20:22:52.748909 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Feb 13 20:22:52.748915 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 20:22:52.748922 kernel: smpboot: Max logical packages: 128 Feb 13 20:22:52.748928 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Feb 13 20:22:52.748933 kernel: devtmpfs: initialized Feb 13 20:22:52.748939 kernel: x86/mm: Memory block size: 128MB Feb 13 20:22:52.748945 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Feb 13 20:22:52.748951 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 20:22:52.748957 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Feb 13 20:22:52.748963 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 20:22:52.748969 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 20:22:52.748976 kernel: audit: initializing netlink subsys (disabled) Feb 13 20:22:52.748982 kernel: audit: type=2000 audit(1739478171.068:1): state=initialized audit_enabled=0 res=1 Feb 13 20:22:52.748988 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 20:22:52.748994 
kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 20:22:52.749000 kernel: cpuidle: using governor menu Feb 13 20:22:52.749006 kernel: Simple Boot Flag at 0x36 set to 0x80 Feb 13 20:22:52.749012 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 20:22:52.749017 kernel: dca service started, version 1.12.1 Feb 13 20:22:52.749023 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Feb 13 20:22:52.749030 kernel: PCI: Using configuration type 1 for base access Feb 13 20:22:52.749036 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 13 20:22:52.749042 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 20:22:52.749048 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 20:22:52.749054 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 20:22:52.749059 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 20:22:52.749065 kernel: ACPI: Added _OSI(Module Device) Feb 13 20:22:52.749071 kernel: ACPI: Added _OSI(Processor Device) Feb 13 20:22:52.749077 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 20:22:52.749084 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 20:22:52.749090 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 20:22:52.749096 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Feb 13 20:22:52.749102 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 20:22:52.749107 kernel: ACPI: Interpreter enabled Feb 13 20:22:52.749113 kernel: ACPI: PM: (supports S0 S1 S5) Feb 13 20:22:52.749119 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 20:22:52.749125 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 20:22:52.749131 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 20:22:52.749138 kernel: ACPI: Enabled 4 
GPEs in block 00 to 0F Feb 13 20:22:52.749144 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Feb 13 20:22:52.750030 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 20:22:52.750094 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Feb 13 20:22:52.750146 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Feb 13 20:22:52.750155 kernel: PCI host bridge to bus 0000:00 Feb 13 20:22:52.750316 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 20:22:52.750369 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Feb 13 20:22:52.750414 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 13 20:22:52.750458 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 20:22:52.750502 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Feb 13 20:22:52.750549 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Feb 13 20:22:52.750610 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Feb 13 20:22:52.750670 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Feb 13 20:22:52.750730 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Feb 13 20:22:52.750784 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Feb 13 20:22:52.750835 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Feb 13 20:22:52.750885 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 13 20:22:52.750935 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 13 20:22:52.750986 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 13 20:22:52.751038 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 13 20:22:52.751093 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Feb 13 20:22:52.751143 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed 
by PIIX4 ACPI Feb 13 20:22:52.751192 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Feb 13 20:22:52.753757 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Feb 13 20:22:52.753816 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Feb 13 20:22:52.753873 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Feb 13 20:22:52.753927 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Feb 13 20:22:52.753977 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Feb 13 20:22:52.754026 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Feb 13 20:22:52.754075 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Feb 13 20:22:52.754124 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Feb 13 20:22:52.754173 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 20:22:52.754254 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Feb 13 20:22:52.754310 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.754361 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.754416 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.754467 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.754524 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.754578 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.754634 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.754684 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.754738 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.754788 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.754844 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.754896 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot 
D3cold Feb 13 20:22:52.754950 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755000 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755054 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755103 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755156 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755232 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755289 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755338 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755391 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755441 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755496 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755550 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755604 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755655 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755708 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755758 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755811 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755864 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755916 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755967 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.756021 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.756070 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.756124 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.756177 kernel: pci 0000:00:17.1: 
PME# supported from D0 D3hot D3cold Feb 13 20:22:52.759272 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.759338 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.759401 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.759453 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.759507 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.759569 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.759628 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.759699 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.759765 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.759818 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.759873 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.759925 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.759983 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.760034 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.760088 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.760139 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.760194 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.762264 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.762327 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.762378 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.762434 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.762484 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.762540 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Feb 13 
20:22:52.762591 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.762649 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.762699 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.762753 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.762803 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.762858 kernel: pci_bus 0000:01: extended config space not accessible Feb 13 20:22:52.762912 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 20:22:52.763025 kernel: pci_bus 0000:02: extended config space not accessible Feb 13 20:22:52.763070 kernel: acpiphp: Slot [32] registered Feb 13 20:22:52.763098 kernel: acpiphp: Slot [33] registered Feb 13 20:22:52.763126 kernel: acpiphp: Slot [34] registered Feb 13 20:22:52.763149 kernel: acpiphp: Slot [35] registered Feb 13 20:22:52.763174 kernel: acpiphp: Slot [36] registered Feb 13 20:22:52.763248 kernel: acpiphp: Slot [37] registered Feb 13 20:22:52.763275 kernel: acpiphp: Slot [38] registered Feb 13 20:22:52.763300 kernel: acpiphp: Slot [39] registered Feb 13 20:22:52.763322 kernel: acpiphp: Slot [40] registered Feb 13 20:22:52.763331 kernel: acpiphp: Slot [41] registered Feb 13 20:22:52.763337 kernel: acpiphp: Slot [42] registered Feb 13 20:22:52.763343 kernel: acpiphp: Slot [43] registered Feb 13 20:22:52.763349 kernel: acpiphp: Slot [44] registered Feb 13 20:22:52.763355 kernel: acpiphp: Slot [45] registered Feb 13 20:22:52.763361 kernel: acpiphp: Slot [46] registered Feb 13 20:22:52.763367 kernel: acpiphp: Slot [47] registered Feb 13 20:22:52.763373 kernel: acpiphp: Slot [48] registered Feb 13 20:22:52.763378 kernel: acpiphp: Slot [49] registered Feb 13 20:22:52.763386 kernel: acpiphp: Slot [50] registered Feb 13 20:22:52.763391 kernel: acpiphp: Slot [51] registered Feb 13 20:22:52.763397 kernel: acpiphp: Slot [52] registered Feb 13 20:22:52.763403 kernel: acpiphp: Slot [53] registered 
Feb 13 20:22:52.763409 kernel: acpiphp: Slot [54] registered Feb 13 20:22:52.763415 kernel: acpiphp: Slot [55] registered Feb 13 20:22:52.763421 kernel: acpiphp: Slot [56] registered Feb 13 20:22:52.763427 kernel: acpiphp: Slot [57] registered Feb 13 20:22:52.763432 kernel: acpiphp: Slot [58] registered Feb 13 20:22:52.763438 kernel: acpiphp: Slot [59] registered Feb 13 20:22:52.763445 kernel: acpiphp: Slot [60] registered Feb 13 20:22:52.763451 kernel: acpiphp: Slot [61] registered Feb 13 20:22:52.763461 kernel: acpiphp: Slot [62] registered Feb 13 20:22:52.763467 kernel: acpiphp: Slot [63] registered Feb 13 20:22:52.763530 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Feb 13 20:22:52.763582 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Feb 13 20:22:52.763630 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Feb 13 20:22:52.763678 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 13 20:22:52.763729 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Feb 13 20:22:52.763778 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Feb 13 20:22:52.763826 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Feb 13 20:22:52.763875 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Feb 13 20:22:52.763923 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Feb 13 20:22:52.763979 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Feb 13 20:22:52.764030 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Feb 13 20:22:52.764085 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Feb 13 20:22:52.764136 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 13 20:22:52.764186 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 13 
20:22:52.765050 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Feb 13 20:22:52.765113 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Feb 13 20:22:52.765168 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Feb 13 20:22:52.765230 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Feb 13 20:22:52.765283 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Feb 13 20:22:52.765341 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Feb 13 20:22:52.765390 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Feb 13 20:22:52.765439 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Feb 13 20:22:52.765490 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Feb 13 20:22:52.765539 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Feb 13 20:22:52.765587 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Feb 13 20:22:52.765636 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Feb 13 20:22:52.765689 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Feb 13 20:22:52.765737 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Feb 13 20:22:52.765787 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Feb 13 20:22:52.765838 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Feb 13 20:22:52.765886 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Feb 13 20:22:52.765935 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 13 20:22:52.765988 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Feb 13 20:22:52.766036 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Feb 13 20:22:52.766085 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Feb 13 20:22:52.766135 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Feb 13 20:22:52.766184 kernel: pci 0000:00:15.6: bridge window [mem 
0xfbd00000-0xfbdfffff] Feb 13 20:22:52.766242 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Feb 13 20:22:52.766295 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Feb 13 20:22:52.766344 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Feb 13 20:22:52.766393 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Feb 13 20:22:52.766450 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Feb 13 20:22:52.766502 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Feb 13 20:22:52.766552 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Feb 13 20:22:52.766603 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Feb 13 20:22:52.766652 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Feb 13 20:22:52.766704 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 13 20:22:52.766755 kernel: pci 0000:0b:00.0: supports D1 D2 Feb 13 20:22:52.766805 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 20:22:52.766856 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Feb 13 20:22:52.766906 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Feb 13 20:22:52.766954 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Feb 13 20:22:52.767004 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Feb 13 20:22:52.767054 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Feb 13 20:22:52.767106 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Feb 13 20:22:52.767155 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Feb 13 20:22:52.767759 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Feb 13 20:22:52.767814 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Feb 13 20:22:52.767862 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Feb 13 20:22:52.767909 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Feb 13 20:22:52.767957 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Feb 13 20:22:52.768010 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Feb 13 20:22:52.768058 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Feb 13 20:22:52.768125 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 13 20:22:52.768223 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Feb 13 20:22:52.768292 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Feb 13 20:22:52.768345 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 13 20:22:52.768396 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Feb 13 20:22:52.768444 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Feb 13 20:22:52.768495 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Feb 13 20:22:52.768545 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Feb 13 20:22:52.768593 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Feb 13 20:22:52.768641 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Feb 13 20:22:52.768690 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Feb 13 20:22:52.768738 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Feb 13 20:22:52.768786 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 13 20:22:52.768835 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Feb 13 20:22:52.768885 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Feb 13 20:22:52.768933 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Feb 13 20:22:52.768981 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 13 20:22:52.769031 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Feb 13 20:22:52.769080 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Feb 13 20:22:52.769147 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Feb 13 20:22:52.769201 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Feb 13 20:22:52.769254 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Feb 13 20:22:52.769306 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Feb 13 20:22:52.769361 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Feb 13 20:22:52.769425 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Feb 13 20:22:52.769474 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Feb 13 20:22:52.769521 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Feb 13 20:22:52.769569 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 13 20:22:52.769618 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Feb 13 20:22:52.769734 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Feb 13 20:22:52.769785 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 13 20:22:52.769834 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Feb 13 20:22:52.769882 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Feb 13 20:22:52.769930 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Feb 13 20:22:52.769980 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Feb 13 20:22:52.770029 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Feb 13 20:22:52.770077 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Feb 13 20:22:52.770126 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Feb 13 20:22:52.770178 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Feb 13 20:22:52.770266 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 13 20:22:52.770317 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Feb 13 20:22:52.770365 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Feb 13 20:22:52.770413 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Feb 13 20:22:52.770461 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Feb 13 20:22:52.770511 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Feb 13 20:22:52.770561 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Feb 13 20:22:52.770609 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Feb 13 20:22:52.770657 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Feb 13 20:22:52.770704 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Feb 13 20:22:52.770752 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Feb 13 20:22:52.770800 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Feb 13 20:22:52.770848 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Feb 13 20:22:52.770895 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Feb 13 20:22:52.770945 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 13 20:22:52.770993 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Feb 13 
20:22:52.771041 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Feb 13 20:22:52.771090 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Feb 13 20:22:52.771140 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Feb 13 20:22:52.771187 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Feb 13 20:22:52.771319 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Feb 13 20:22:52.771372 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Feb 13 20:22:52.771424 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Feb 13 20:22:52.771472 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Feb 13 20:22:52.771521 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Feb 13 20:22:52.771569 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Feb 13 20:22:52.771617 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 13 20:22:52.771625 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Feb 13 20:22:52.771632 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Feb 13 20:22:52.771638 kernel: ACPI: PCI: Interrupt link LNKB disabled Feb 13 20:22:52.771645 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 20:22:52.771651 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Feb 13 20:22:52.771657 kernel: iommu: Default domain type: Translated Feb 13 20:22:52.771663 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 20:22:52.771669 kernel: PCI: Using ACPI for IRQ routing Feb 13 20:22:52.771675 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 20:22:52.771681 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Feb 13 20:22:52.771687 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Feb 13 20:22:52.771734 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Feb 13 20:22:52.771785 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Feb 13 20:22:52.771833 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 20:22:52.771841 kernel: vgaarb: loaded Feb 13 20:22:52.771847 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Feb 13 20:22:52.771853 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Feb 13 20:22:52.771859 kernel: clocksource: Switched to clocksource tsc-early Feb 13 20:22:52.771865 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 20:22:52.771871 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:22:52.771877 kernel: pnp: PnP ACPI init Feb 13 20:22:52.771930 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Feb 13 20:22:52.771975 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Feb 13 20:22:52.772019 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Feb 13 20:22:52.772066 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Feb 13 20:22:52.772112 kernel: pnp 00:06: [dma 2] Feb 13 20:22:52.772162 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Feb 13 20:22:52.772214 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Feb 13 20:22:52.772261 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Feb 13 20:22:52.772269 kernel: pnp: PnP ACPI: found 8 devices Feb 13 20:22:52.772275 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 20:22:52.772281 kernel: NET: Registered PF_INET protocol family Feb 13 20:22:52.772287 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 20:22:52.772293 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 13 20:22:52.772299 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:22:52.772305 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 20:22:52.772312 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 20:22:52.772322 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 13 20:22:52.772328 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 20:22:52.772333 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 20:22:52.772339 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:22:52.772345 kernel: NET: Registered PF_XDP protocol family Feb 13 20:22:52.772394 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 13 20:22:52.772444 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Feb 13 20:22:52.772496 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Feb 13 20:22:52.772544 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Feb 13 20:22:52.772592 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Feb 13 20:22:52.772639 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Feb 13 20:22:52.772687 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Feb 13 20:22:52.772736 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Feb 13 20:22:52.772787 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Feb 13 20:22:52.772836 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Feb 13 20:22:52.772885 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Feb 13 20:22:52.772934 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Feb 13 20:22:52.772982 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Feb 13 
20:22:52.773032 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Feb 13 20:22:52.773083 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Feb 13 20:22:52.773131 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Feb 13 20:22:52.773180 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Feb 13 20:22:52.773258 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Feb 13 20:22:52.773308 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Feb 13 20:22:52.773359 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Feb 13 20:22:52.773406 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Feb 13 20:22:52.773454 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Feb 13 20:22:52.773502 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Feb 13 20:22:52.773550 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Feb 13 20:22:52.773597 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Feb 13 20:22:52.773645 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.773697 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.773745 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.773792 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.773839 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.773886 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.773934 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.773982 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Feb 
13 20:22:52.774030 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774080 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774130 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774178 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774239 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774289 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774337 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774386 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774433 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774485 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774533 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774580 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774628 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774676 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774723 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774771 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774820 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774870 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774918 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774966 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775014 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775062 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775110 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775158 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775252 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775304 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775361 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775409 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775456 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775504 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775552 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775600 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775648 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775699 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775747 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775795 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775842 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775889 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775937 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775985 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776032 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776079 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776149 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776203 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776273 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Feb 13 20:22:52.776324 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776372 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776420 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776469 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776517 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776567 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776615 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776666 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776714 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776763 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776811 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776859 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776907 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776956 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777004 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777052 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777103 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777168 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777261 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777310 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777356 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777404 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777451 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777497 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777545 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777592 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777643 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777690 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777739 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777787 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777835 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777884 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 20:22:52.777934 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Feb 13 20:22:52.777982 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Feb 13 20:22:52.778028 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Feb 13 20:22:52.778078 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 13 20:22:52.778163 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Feb 13 20:22:52.778224 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Feb 13 20:22:52.778274 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Feb 13 20:22:52.778330 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Feb 13 20:22:52.778382 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Feb 13 20:22:52.778432 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Feb 13 20:22:52.778481 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Feb 13 20:22:52.778533 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Feb 13 20:22:52.778581 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Feb 13 
20:22:52.778631 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Feb 13 20:22:52.778679 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Feb 13 20:22:52.778726 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Feb 13 20:22:52.778774 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Feb 13 20:22:52.778822 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Feb 13 20:22:52.778870 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Feb 13 20:22:52.778919 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Feb 13 20:22:52.778966 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Feb 13 20:22:52.779017 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Feb 13 20:22:52.779065 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 13 20:22:52.779134 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Feb 13 20:22:52.779183 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Feb 13 20:22:52.779242 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Feb 13 20:22:52.779294 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Feb 13 20:22:52.779363 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Feb 13 20:22:52.779415 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Feb 13 20:22:52.779464 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Feb 13 20:22:52.779513 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Feb 13 20:22:52.779562 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Feb 13 20:22:52.779614 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Feb 13 20:22:52.779664 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Feb 13 20:22:52.779714 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Feb 13 20:22:52.779764 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Feb 13 20:22:52.779816 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Feb 13 20:22:52.779866 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Feb 13 20:22:52.779916 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Feb 13 20:22:52.779965 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Feb 13 20:22:52.780014 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Feb 13 20:22:52.780064 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Feb 13 20:22:52.780113 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Feb 13 20:22:52.780177 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Feb 13 20:22:52.782304 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Feb 13 20:22:52.782370 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Feb 13 20:22:52.782422 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Feb 13 20:22:52.782472 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 13 20:22:52.782521 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Feb 13 20:22:52.782569 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Feb 13 20:22:52.782617 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 13 20:22:52.782664 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Feb 13 20:22:52.782713 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Feb 13 20:22:52.782761 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Feb 13 20:22:52.782811 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Feb 13 20:22:52.782858 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Feb 13 20:22:52.782906 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Feb 13 20:22:52.782953 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Feb 13 20:22:52.783001 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Feb 13 20:22:52.783048 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 13 20:22:52.783117 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Feb 13 20:22:52.783182 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Feb 13 20:22:52.783290 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Feb 13 20:22:52.783345 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 13 20:22:52.783399 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Feb 13 20:22:52.783447 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Feb 13 20:22:52.783495 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Feb 13 20:22:52.783543 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Feb 13 20:22:52.783592 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Feb 13 20:22:52.783641 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Feb 13 20:22:52.783689 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Feb 13 20:22:52.783737 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Feb 13 20:22:52.783786 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Feb 13 20:22:52.783836 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Feb 13 20:22:52.783885 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 13 20:22:52.783933 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Feb 13 20:22:52.783981 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Feb 13 20:22:52.784030 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 13 20:22:52.784078 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Feb 13 20:22:52.784161 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Feb 13 20:22:52.784677 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Feb 13 
20:22:52.784736 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Feb 13 20:22:52.784788 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Feb 13 20:22:52.784841 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Feb 13 20:22:52.784890 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Feb 13 20:22:52.784939 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Feb 13 20:22:52.784987 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 13 20:22:52.785036 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Feb 13 20:22:52.785085 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Feb 13 20:22:52.785187 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Feb 13 20:22:52.785265 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Feb 13 20:22:52.785321 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Feb 13 20:22:52.785379 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Feb 13 20:22:52.785428 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Feb 13 20:22:52.785476 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Feb 13 20:22:52.785525 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Feb 13 20:22:52.785573 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Feb 13 20:22:52.785621 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Feb 13 20:22:52.785670 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Feb 13 20:22:52.785718 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Feb 13 20:22:52.785766 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 13 20:22:52.785814 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Feb 13 20:22:52.785865 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Feb 13 20:22:52.785914 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Feb 13 20:22:52.785963 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Feb 13 20:22:52.786011 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Feb 13 20:22:52.786059 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Feb 13 20:22:52.786126 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Feb 13 20:22:52.786190 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Feb 13 20:22:52.786288 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Feb 13 20:22:52.786357 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Feb 13 20:22:52.786410 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Feb 13 20:22:52.786459 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 13 20:22:52.786506 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Feb 13 20:22:52.786550 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Feb 13 20:22:52.786612 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Feb 13 20:22:52.786670 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Feb 13 20:22:52.786712 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Feb 13 20:22:52.786759 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Feb 13 20:22:52.786807 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Feb 13 20:22:52.786850 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 13 20:22:52.786893 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Feb 13 20:22:52.786938 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Feb 13 20:22:52.786982 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Feb 13 20:22:52.787026 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Feb 13 20:22:52.787070 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Feb 13 20:22:52.787138 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Feb 13 20:22:52.787213 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Feb 13 20:22:52.787262 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Feb 13 20:22:52.787310 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Feb 13 20:22:52.787355 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Feb 13 20:22:52.787400 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Feb 13 20:22:52.787449 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Feb 13 20:22:52.787497 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Feb 13 20:22:52.787541 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Feb 13 20:22:52.787588 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Feb 13 20:22:52.787633 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Feb 13 20:22:52.787683 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Feb 13 20:22:52.787728 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 13 20:22:52.787777 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Feb 13 20:22:52.787824 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Feb 13 20:22:52.787872 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Feb 13 20:22:52.787918 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Feb 13 20:22:52.787970 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Feb 13 20:22:52.788024 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Feb 13 20:22:52.788079 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Feb 13 20:22:52.788159 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Feb 13 20:22:52.788474 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Feb 13 20:22:52.788530 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Feb 13 20:22:52.788576 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Feb 13 20:22:52.788621 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Feb 13 20:22:52.788670 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Feb 13 20:22:52.788719 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Feb 13 20:22:52.788767 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Feb 13 20:22:52.788817 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Feb 13 20:22:52.788862 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 13 20:22:52.788910 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Feb 13 20:22:52.789471 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 13 20:22:52.789536 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Feb 13 20:22:52.789588 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Feb 13 20:22:52.789638 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Feb 13 20:22:52.789684 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Feb 13 20:22:52.789732 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Feb 13 20:22:52.789777 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 13 20:22:52.789828 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Feb 13 20:22:52.789875 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Feb 13 20:22:52.789919 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 13 20:22:52.789967 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Feb 13 20:22:52.790013 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Feb 13 20:22:52.790057 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Feb 13 20:22:52.790124 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Feb 13 20:22:52.790188 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Feb 13 20:22:52.790410 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Feb 13 20:22:52.790461 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Feb 13 20:22:52.790507 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 13 20:22:52.790556 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Feb 13 20:22:52.790601 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 13 20:22:52.790650 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Feb 13 20:22:52.790698 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Feb 13 20:22:52.790746 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Feb 13 20:22:52.790792 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Feb 13 20:22:52.790841 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Feb 13 20:22:52.790886 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 13 20:22:52.790940 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Feb 13 20:22:52.790992 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Feb 13 20:22:52.791038 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Feb 13 20:22:52.791094 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Feb 13 20:22:52.791139 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Feb 13 20:22:52.791642 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Feb 13 20:22:52.791699 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Feb 13 20:22:52.791750 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Feb 13 20:22:52.791801 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Feb 13 20:22:52.791847 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Feb 13 20:22:52.791897 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Feb 13 20:22:52.791943 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Feb 13 20:22:52.791993 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Feb 13 20:22:52.792042 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Feb 13 20:22:52.792093 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Feb 13 20:22:52.792139 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Feb 13 20:22:52.792189 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Feb 13 20:22:52.792247 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 13 20:22:52.792302 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 20:22:52.792317 kernel: PCI: CLS 32 bytes, default 64 Feb 13 20:22:52.792325 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 20:22:52.792332 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Feb 13 20:22:52.792338 kernel: clocksource: Switched to clocksource tsc Feb 13 20:22:52.792345 kernel: Initialise system trusted keyrings Feb 13 20:22:52.792351 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 20:22:52.792358 kernel: Key type asymmetric registered Feb 13 20:22:52.792364 kernel: Asymmetric key parser 'x509' registered Feb 13 20:22:52.792370 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 20:22:52.792378 kernel: io scheduler mq-deadline registered Feb 13 20:22:52.792384 kernel: io scheduler kyber registered Feb 13 20:22:52.792391 kernel: io scheduler bfq registered Feb 13 20:22:52.792445 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Feb 13 20:22:52.792497 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.792548 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Feb 13 20:22:52.792598 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.792649 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Feb 13 20:22:52.792699 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.792752 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Feb 13 20:22:52.792803 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.792854 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Feb 13 20:22:52.792903 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.792953 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Feb 13 20:22:52.793006 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.793055 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Feb 13 20:22:52.793105 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.793170 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Feb 13 20:22:52.793226 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.793281 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Feb 13 20:22:52.795470 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.795530 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Feb 13 20:22:52.795581 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.795632 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Feb 13 20:22:52.795683 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.795732 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Feb 13 20:22:52.795785 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.795835 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Feb 13 20:22:52.795884 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.795933 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Feb 13 20:22:52.795982 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.796031 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Feb 13 20:22:52.796084 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.796169 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Feb 13 20:22:52.796576 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.796632 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Feb 13 20:22:52.796684 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.796737 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Feb 13 20:22:52.796787 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.796838 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Feb 13 20:22:52.796887 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.796936 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Feb 13 20:22:52.796985 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797035 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Feb 13 20:22:52.797104 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797170 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Feb 13 20:22:52.797232 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797283 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Feb 13 20:22:52.797333 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797385 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Feb 13 20:22:52.797434 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797483 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Feb 13 20:22:52.797532 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797581 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Feb 13 20:22:52.797629 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797679 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Feb 13 20:22:52.797731 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797780 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Feb 13 20:22:52.797828 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797878 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Feb 13 20:22:52.797927 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797979 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Feb 13 20:22:52.798028 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.798077 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Feb 13 20:22:52.798146 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.798231 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Feb 13 20:22:52.798289 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.798298 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Feb 13 20:22:52.798305 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:22:52.798312 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 20:22:52.798321 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Feb 13 20:22:52.798328 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 20:22:52.798334 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 20:22:52.798388 kernel: rtc_cmos 00:01: registered as rtc0 Feb 13 20:22:52.798438 kernel: rtc_cmos 00:01: setting system clock to 2025-02-13T20:22:52 UTC (1739478172) Feb 13 20:22:52.798483 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Feb 13 20:22:52.798491 kernel: intel_pstate: CPU model not supported Feb 13 20:22:52.798498 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 20:22:52.798504 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:22:52.798510 kernel: Segment Routing with IPv6 Feb 13 20:22:52.798517 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:22:52.798523 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:22:52.798531 kernel: Key type dns_resolver registered Feb 13 20:22:52.798537 kernel: IPI shorthand broadcast: enabled Feb 13 20:22:52.798544 kernel: sched_clock: Marking stable (906004910, 228238216)->(1193575914, -59332788) Feb 13 20:22:52.798550 kernel: registered taskstats version 1 Feb 13 20:22:52.798557 kernel: Loading compiled-in X.509 certificates Feb 13 20:22:52.798563 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 20:22:52.798569 kernel: Key type .fscrypt registered Feb 13 20:22:52.798575 kernel: Key type fscrypt-provisioning registered Feb 13 20:22:52.798581 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 20:22:52.798589 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:22:52.798596 kernel: ima: No architecture policies found Feb 13 20:22:52.798602 kernel: clk: Disabling unused clocks Feb 13 20:22:52.798608 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 20:22:52.798614 kernel: Write protecting the kernel read-only data: 36864k Feb 13 20:22:52.798621 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 20:22:52.798627 kernel: Run /init as init process Feb 13 20:22:52.798633 kernel: with arguments: Feb 13 20:22:52.798639 kernel: /init Feb 13 20:22:52.798647 kernel: with environment: Feb 13 20:22:52.798653 kernel: HOME=/ Feb 13 20:22:52.798659 kernel: TERM=linux Feb 13 20:22:52.798665 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:22:52.798672 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:22:52.798680 systemd[1]: Detected virtualization vmware. Feb 13 20:22:52.798687 systemd[1]: Detected architecture x86-64. Feb 13 20:22:52.798693 systemd[1]: Running in initrd. Feb 13 20:22:52.798701 systemd[1]: No hostname configured, using default hostname. Feb 13 20:22:52.798707 systemd[1]: Hostname set to . Feb 13 20:22:52.798714 systemd[1]: Initializing machine ID from random generator. Feb 13 20:22:52.798720 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:22:52.798726 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:22:52.798733 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 20:22:52.798740 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:22:52.798747 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:22:52.798755 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:22:52.798761 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:22:52.798769 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:22:52.798776 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:22:52.798783 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:22:52.798789 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:22:52.798796 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:22:52.798804 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:22:52.798810 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:22:52.798817 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:22:52.798823 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:22:52.798830 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:22:52.798837 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:22:52.798843 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:22:52.798850 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:22:52.798857 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:22:52.798864 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 20:22:52.798871 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:22:52.798877 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:22:52.798884 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:22:52.798890 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:22:52.798898 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:22:52.798904 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:22:52.798911 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:22:52.798919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:22:52.798925 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:22:52.798943 systemd-journald[215]: Collecting audit messages is disabled. Feb 13 20:22:52.798961 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:22:52.798969 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:22:52.798977 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:22:52.798983 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:22:52.798990 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:22:52.798998 kernel: Bridge firewalling registered Feb 13 20:22:52.799005 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:22:52.799012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:22:52.799020 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:22:52.799027 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Feb 13 20:22:52.799033 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:22:52.799040 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:22:52.799047 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:22:52.799053 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:22:52.799063 systemd-journald[215]: Journal started Feb 13 20:22:52.799078 systemd-journald[215]: Runtime Journal (/run/log/journal/14aad04a7cba4066bbfd8810d7986c69) is 4.8M, max 38.6M, 33.8M free. Feb 13 20:22:52.742153 systemd-modules-load[216]: Inserted module 'overlay' Feb 13 20:22:52.799292 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:22:52.764705 systemd-modules-load[216]: Inserted module 'br_netfilter' Feb 13 20:22:52.806274 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:22:52.808056 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:22:52.814328 dracut-cmdline[245]: dracut-dracut-053 Feb 13 20:22:52.813480 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:22:52.814704 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:22:52.816916 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:22:52.833493 systemd-resolved[258]: Positive Trust Anchors: Feb 13 20:22:52.833502 systemd-resolved[258]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:22:52.833524 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:22:52.835508 systemd-resolved[258]: Defaulting to hostname 'linux'. Feb 13 20:22:52.836077 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:22:52.836223 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:22:52.858213 kernel: SCSI subsystem initialized Feb 13 20:22:52.864207 kernel: Loading iSCSI transport class v2.0-870. Feb 13 20:22:52.871218 kernel: iscsi: registered transport (tcp) Feb 13 20:22:52.884531 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:22:52.884568 kernel: QLogic iSCSI HBA Driver Feb 13 20:22:52.904047 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:22:52.910309 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:22:52.925962 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 20:22:52.926018 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:22:52.926027 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:22:52.957253 kernel: raid6: avx2x4 gen() 51977 MB/s Feb 13 20:22:52.974243 kernel: raid6: avx2x2 gen() 52154 MB/s Feb 13 20:22:52.991441 kernel: raid6: avx2x1 gen() 44238 MB/s Feb 13 20:22:52.991484 kernel: raid6: using algorithm avx2x2 gen() 52154 MB/s Feb 13 20:22:53.009451 kernel: raid6: .... xor() 31415 MB/s, rmw enabled Feb 13 20:22:53.009496 kernel: raid6: using avx2x2 recovery algorithm Feb 13 20:22:53.023213 kernel: xor: automatically using best checksumming function avx Feb 13 20:22:53.126217 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:22:53.131722 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:22:53.136280 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:22:53.144385 systemd-udevd[432]: Using default interface naming scheme 'v255'. Feb 13 20:22:53.147342 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:22:53.157426 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:22:53.164565 dracut-pre-trigger[437]: rd.md=0: removing MD RAID activation Feb 13 20:22:53.179642 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:22:53.184289 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:22:53.252596 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:22:53.258374 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:22:53.265243 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:22:53.266023 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 20:22:53.266854 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:22:53.267086 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:22:53.273314 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:22:53.281445 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:22:53.317209 kernel: VMware PVSCSI driver - version 1.0.7.0-k Feb 13 20:22:53.319207 kernel: libata version 3.00 loaded. Feb 13 20:22:53.322207 kernel: ata_piix 0000:00:07.1: version 2.13 Feb 13 20:22:53.333189 kernel: scsi host0: ata_piix Feb 13 20:22:53.333285 kernel: scsi host1: ata_piix Feb 13 20:22:53.333361 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Feb 13 20:22:53.333378 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Feb 13 20:22:53.333386 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Feb 13 20:22:53.334546 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Feb 13 20:22:53.334560 kernel: vmw_pvscsi: using 64bit dma Feb 13 20:22:53.334572 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Feb 13 20:22:53.335797 kernel: vmw_pvscsi: max_id: 16 Feb 13 20:22:53.335817 kernel: vmw_pvscsi: setting ring_pages to 8 Feb 13 20:22:53.338708 kernel: vmw_pvscsi: enabling reqCallThreshold Feb 13 20:22:53.338735 kernel: vmw_pvscsi: driver-based request coalescing enabled Feb 13 20:22:53.338751 kernel: vmw_pvscsi: using MSI-X Feb 13 20:22:53.341206 kernel: scsi host2: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Feb 13 20:22:53.345692 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #2 Feb 13 20:22:53.347581 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 20:22:53.347597 kernel: scsi 2:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Feb 13 20:22:53.353097 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 13 20:22:53.353555 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:22:53.353943 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:22:53.354236 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:22:53.354425 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:22:53.354698 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:22:53.359372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:22:53.370459 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:22:53.374296 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:22:53.387526 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:22:53.485216 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Feb 13 20:22:53.504339 kernel: scsi 1:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Feb 13 20:22:53.511039 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Feb 13 20:22:53.514209 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 13 20:22:53.514233 kernel: AES CTR mode by8 optimization enabled Feb 13 20:22:53.527220 kernel: sd 2:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Feb 13 20:22:53.537283 kernel: sd 2:0:0:0: [sda] Write Protect is off Feb 13 20:22:53.537364 kernel: sd 2:0:0:0: [sda] Mode Sense: 31 00 00 00 Feb 13 20:22:53.537426 kernel: sd 2:0:0:0: [sda] Cache data unavailable Feb 13 20:22:53.537493 kernel: sd 2:0:0:0: [sda] Assuming drive cache: write through Feb 13 20:22:53.537554 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Feb 13 20:22:53.546160 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 20:22:53.546172 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:22:53.546180 kernel: sd 2:0:0:0: [sda] Attached SCSI disk Feb 13 20:22:53.546279 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 13 20:22:53.581210 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (485) Feb 13 20:22:53.584143 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Feb 13 20:22:53.589232 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (483) Feb 13 20:22:53.588990 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Feb 13 20:22:53.592040 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Feb 13 20:22:53.594219 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Feb 13 20:22:53.594511 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Feb 13 20:22:53.598512 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Feb 13 20:22:53.623219 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:22:53.627209 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:22:54.630709 disk-uuid[587]: The operation has completed successfully. Feb 13 20:22:54.631356 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:22:54.676994 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:22:54.677081 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:22:54.682318 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:22:54.684755 sh[604]: Success Feb 13 20:22:54.697227 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 20:22:54.776254 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:22:54.777425 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:22:54.777755 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 20:22:54.794644 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 20:22:54.794683 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:22:54.794692 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:22:54.796583 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:22:54.796609 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:22:54.804225 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 20:22:54.805146 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:22:54.814373 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Feb 13 20:22:54.815846 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Feb 13 20:22:54.843226 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:22:54.843269 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:22:54.843278 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:22:54.848212 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:22:54.856174 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:22:54.857262 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:22:54.866738 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:22:54.871260 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:22:54.889421 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Feb 13 20:22:54.894277 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Feb 13 20:22:54.949606 ignition[664]: Ignition 2.19.0 Feb 13 20:22:54.949613 ignition[664]: Stage: fetch-offline Feb 13 20:22:54.949633 ignition[664]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:22:54.949639 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 13 20:22:54.949694 ignition[664]: parsed url from cmdline: "" Feb 13 20:22:54.949696 ignition[664]: no config URL provided Feb 13 20:22:54.949699 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:22:54.949703 ignition[664]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:22:54.950060 ignition[664]: config successfully fetched Feb 13 20:22:54.950077 ignition[664]: parsing config with SHA512: f7f3e6b80c8fa390a3b629a0052a142e7b16264b82042650ccdbb5895c0f94dfc25f042ae1610eaed95f765132c90ffc360f50377bbc1de25558a87ceb59a35f Feb 13 20:22:54.954857 unknown[664]: fetched base config from "system" Feb 13 20:22:54.954864 unknown[664]: fetched user config from "vmware" Feb 13 20:22:54.955150 ignition[664]: fetch-offline: fetch-offline passed Feb 13 20:22:54.955192 ignition[664]: Ignition finished successfully Feb 13 20:22:54.956171 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:22:54.968055 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:22:54.972322 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:22:54.983971 systemd-networkd[799]: lo: Link UP Feb 13 20:22:54.984210 systemd-networkd[799]: lo: Gained carrier Feb 13 20:22:54.985009 systemd-networkd[799]: Enumeration completed Feb 13 20:22:54.985178 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:22:54.985450 systemd-networkd[799]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Feb 13 20:22:54.986648 systemd[1]: Reached target network.target - Network. 
Feb 13 20:22:54.987747 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 20:22:54.988832 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Feb 13 20:22:54.988939 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Feb 13 20:22:54.988897 systemd-networkd[799]: ens192: Link UP Feb 13 20:22:54.988900 systemd-networkd[799]: ens192: Gained carrier Feb 13 20:22:54.993537 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:22:55.001818 ignition[801]: Ignition 2.19.0 Feb 13 20:22:55.002064 ignition[801]: Stage: kargs Feb 13 20:22:55.002173 ignition[801]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:22:55.002180 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 13 20:22:55.002884 ignition[801]: kargs: kargs passed Feb 13 20:22:55.002910 ignition[801]: Ignition finished successfully Feb 13 20:22:55.004260 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:22:55.008317 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:22:55.015265 ignition[808]: Ignition 2.19.0 Feb 13 20:22:55.015272 ignition[808]: Stage: disks Feb 13 20:22:55.015387 ignition[808]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:22:55.015393 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 13 20:22:55.016385 ignition[808]: disks: disks passed Feb 13 20:22:55.016418 ignition[808]: Ignition finished successfully Feb 13 20:22:55.017399 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:22:55.017700 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:22:55.017908 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:22:55.018143 systemd[1]: Reached target local-fs.target - Local File Systems. 
Feb 13 20:22:55.018352 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:22:55.018562 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:22:55.022328 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:22:55.033008 systemd-fsck[816]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 20:22:55.034648 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:22:55.037308 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:22:55.095209 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 20:22:55.095496 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:22:55.095866 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:22:55.107301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:22:55.108860 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:22:55.109130 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 20:22:55.109156 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:22:55.109170 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:22:55.112559 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:22:55.113126 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 20:22:55.116317 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (824) Feb 13 20:22:55.118668 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:22:55.118688 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:22:55.118697 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:22:55.122216 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:22:55.123406 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:22:55.144252 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:22:55.146601 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:22:55.148704 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:22:55.150971 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:22:55.201622 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:22:55.205391 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:22:55.206622 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:22:55.211207 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:22:55.226435 ignition[937]: INFO : Ignition 2.19.0 Feb 13 20:22:55.226435 ignition[937]: INFO : Stage: mount Feb 13 20:22:55.226806 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:22:55.226806 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 13 20:22:55.227422 ignition[937]: INFO : mount: mount passed Feb 13 20:22:55.227855 ignition[937]: INFO : Ignition finished successfully Feb 13 20:22:55.228113 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:22:55.232269 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Feb 13 20:22:55.276168 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:22:55.295356 systemd-resolved[258]: Detected conflict on linux IN A 139.178.70.108
Feb 13 20:22:55.295366 systemd-resolved[258]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Feb 13 20:22:55.793057 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:22:55.800417 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:22:55.809216 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (948)
Feb 13 20:22:55.812229 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:22:55.812250 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:22:55.812266 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:22:55.817214 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 20:22:55.817851 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:22:55.835764 ignition[965]: INFO : Ignition 2.19.0
Feb 13 20:22:55.835764 ignition[965]: INFO : Stage: files
Feb 13 20:22:55.836236 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:22:55.836236 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 13 20:22:55.836586 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:22:55.837337 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:22:55.837337 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:22:55.839710 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:22:55.839921 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:22:55.840127 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:22:55.840001 unknown[965]: wrote ssh authorized keys file for user: core
Feb 13 20:22:55.841822 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:22:55.842050 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 20:22:55.876040 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:22:55.957942 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:22:55.958227 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:22:55.958227 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:22:55.958227 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:22:55.958227 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:22:55.958227 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:22:55.958227 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 20:22:56.339337 systemd-networkd[799]: ens192: Gained IPv6LL
Feb 13 20:22:56.458733 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 20:22:56.674930 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:22:56.674930 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Feb 13 20:22:56.675385 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Feb 13 20:22:56.675385 ignition[965]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 20:22:56.675385 ignition[965]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:22:56.675857 ignition[965]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:22:56.675857 ignition[965]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 20:22:56.675857 ignition[965]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 20:22:56.675857 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:22:56.675857 ignition[965]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:22:56.675857 ignition[965]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 20:22:56.675857 ignition[965]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:22:56.714750 ignition[965]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:22:56.716959 ignition[965]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:22:56.717133 ignition[965]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:22:56.717133 ignition[965]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:22:56.717133 ignition[965]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:22:56.718106 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:22:56.718106 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:22:56.718106 ignition[965]: INFO : files: files passed
Feb 13 20:22:56.718106 ignition[965]: INFO : Ignition finished successfully
Feb 13 20:22:56.717841 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:22:56.722298 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:22:56.723375 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:22:56.724481 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:22:56.724660 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:22:56.729876 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:22:56.729876 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:22:56.730794 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:22:56.731945 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:22:56.732350 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:22:56.735361 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:22:56.747675 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:22:56.747732 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:22:56.748405 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:22:56.748689 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:22:56.748955 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:22:56.749663 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:22:56.759732 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:22:56.763296 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:22:56.768698 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:22:56.768989 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:22:56.769321 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:22:56.769590 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:22:56.769658 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:22:56.770185 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:22:56.770496 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:22:56.770786 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:22:56.771266 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:22:56.771571 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:22:56.771846 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:22:56.772150 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:22:56.772474 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:22:56.772631 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:22:56.773019 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:22:56.773286 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:22:56.773356 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:22:56.773847 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:22:56.774006 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:22:56.774261 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:22:56.774321 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:22:56.774605 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:22:56.774672 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:22:56.775153 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:22:56.775246 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:22:56.775565 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:22:56.775943 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:22:56.775992 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:22:56.776162 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:22:56.776959 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:22:56.777101 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:22:56.777160 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:22:56.777318 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:22:56.777369 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:22:56.777527 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:22:56.777588 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:22:56.777766 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:22:56.777824 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:22:56.785430 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:22:56.785530 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:22:56.785601 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:22:56.787344 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:22:56.787444 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:22:56.787639 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:22:56.788085 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:22:56.788169 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:22:56.791861 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:22:56.791915 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:22:56.793601 ignition[1020]: INFO : Ignition 2.19.0
Feb 13 20:22:56.796555 ignition[1020]: INFO : Stage: umount
Feb 13 20:22:56.796555 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:22:56.796555 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 13 20:22:56.796555 ignition[1020]: INFO : umount: umount passed
Feb 13 20:22:56.796555 ignition[1020]: INFO : Ignition finished successfully
Feb 13 20:22:56.797439 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:22:56.797516 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:22:56.797743 systemd[1]: Stopped target network.target - Network.
Feb 13 20:22:56.797828 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:22:56.797856 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:22:56.797958 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:22:56.797979 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:22:56.798077 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:22:56.798099 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:22:56.798207 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:22:56.798231 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:22:56.798401 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:22:56.798883 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:22:56.804001 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:22:56.804086 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:22:56.806702 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:22:56.807115 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:22:56.807142 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:22:56.808843 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:22:56.808913 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:22:56.809650 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:22:56.809686 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:22:56.813278 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:22:56.813639 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:22:56.813680 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:22:56.813939 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Feb 13 20:22:56.813962 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Feb 13 20:22:56.814086 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:22:56.814108 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:22:56.814255 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:22:56.814277 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:22:56.814438 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:22:56.820849 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:22:56.820923 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:22:56.824522 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:22:56.824745 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:22:56.825373 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:22:56.825407 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:22:56.825549 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:22:56.825566 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:22:56.825674 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:22:56.825697 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:22:56.825856 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:22:56.825877 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:22:56.826014 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:22:56.826036 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:22:56.829341 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:22:56.829452 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:22:56.829487 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:22:56.829614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:22:56.829636 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:22:56.832696 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:22:56.832778 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:22:56.854282 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:22:56.854351 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:22:56.854834 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:22:56.854951 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:22:56.854982 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:22:56.858409 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:22:56.869471 systemd[1]: Switching root.
Feb 13 20:22:56.900881 systemd-journald[215]: Journal stopped
Feb 13 20:22:52.742160 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 20:22:52.742178 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:22:52.742184 kernel: Disabled fast string operations
Feb 13 20:22:52.742188 kernel: BIOS-provided physical RAM map:
Feb 13 20:22:52.742192 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Feb 13 20:22:52.742206 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Feb 13 20:22:52.742213 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Feb 13 20:22:52.742218 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Feb 13 20:22:52.742222 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Feb 13 20:22:52.742226 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Feb 13 20:22:52.742230 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Feb 13 20:22:52.742234 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Feb 13 20:22:52.742238 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Feb 13 20:22:52.742243 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 13 20:22:52.742249 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Feb 13 20:22:52.742254 kernel: NX (Execute Disable) protection: active
Feb 13 20:22:52.742259 kernel: APIC: Static calls initialized
Feb 13 20:22:52.742264 kernel: SMBIOS 2.7 present.
Feb 13 20:22:52.742269 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Feb 13 20:22:52.742273 kernel: vmware: hypercall mode: 0x00
Feb 13 20:22:52.742278 kernel: Hypervisor detected: VMware
Feb 13 20:22:52.742283 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Feb 13 20:22:52.742288 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Feb 13 20:22:52.742293 kernel: vmware: using clock offset of 2633914133 ns
Feb 13 20:22:52.742298 kernel: tsc: Detected 3408.000 MHz processor
Feb 13 20:22:52.742303 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:22:52.742308 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:22:52.742313 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Feb 13 20:22:52.742318 kernel: total RAM covered: 3072M
Feb 13 20:22:52.742322 kernel: Found optimal setting for mtrr clean up
Feb 13 20:22:52.742328 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Feb 13 20:22:52.742334 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Feb 13 20:22:52.742339 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:22:52.742344 kernel: Using GB pages for direct mapping
Feb 13 20:22:52.742349 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:22:52.742353 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Feb 13 20:22:52.742358 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Feb 13 20:22:52.742363 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Feb 13 20:22:52.742368 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Feb 13 20:22:52.742373 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Feb 13 20:22:52.742381 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Feb 13 20:22:52.742386 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Feb 13 20:22:52.742391 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Feb 13 20:22:52.742396 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Feb 13 20:22:52.742402 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Feb 13 20:22:52.742408 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Feb 13 20:22:52.742413 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Feb 13 20:22:52.742418 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Feb 13 20:22:52.742423 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Feb 13 20:22:52.742428 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Feb 13 20:22:52.742433 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Feb 13 20:22:52.742438 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Feb 13 20:22:52.742444 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Feb 13 20:22:52.742449 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Feb 13 20:22:52.742454 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Feb 13 20:22:52.742460 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Feb 13 20:22:52.742465 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Feb 13 20:22:52.742470 kernel: system APIC only can use physical flat
Feb 13 20:22:52.742475 kernel: APIC: Switched APIC routing to: physical flat
Feb 13 20:22:52.742480 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:22:52.742485 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 13 20:22:52.742490 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 13 20:22:52.742495 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 13 20:22:52.742500 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 13 20:22:52.742507 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 13 20:22:52.742512 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 13 20:22:52.742517 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 13 20:22:52.742522 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Feb 13 20:22:52.742527 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Feb 13 20:22:52.742532 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Feb 13 20:22:52.742537 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Feb 13 20:22:52.742542 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Feb 13 20:22:52.742547 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Feb 13 20:22:52.742552 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Feb 13 20:22:52.742557 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Feb 13 20:22:52.742563 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Feb 13 20:22:52.742568 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Feb 13 20:22:52.742573 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Feb 13 20:22:52.742579 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Feb 13 20:22:52.742584 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Feb 13 20:22:52.742588 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Feb 13 20:22:52.742594 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Feb 13 20:22:52.742599 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Feb 13 20:22:52.742604 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Feb 13 20:22:52.742609 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Feb 13 20:22:52.742615 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Feb 13 20:22:52.742620 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Feb 13 20:22:52.742625 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Feb 13 20:22:52.742631 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Feb 13 20:22:52.742636 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Feb 13 20:22:52.742641 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Feb 13 20:22:52.742646 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Feb 13 20:22:52.742651 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Feb 13 20:22:52.742656 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Feb 13 20:22:52.742661 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Feb 13 20:22:52.742667 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Feb 13 20:22:52.742672 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Feb 13 20:22:52.742677 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Feb 13 20:22:52.742682 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Feb 13 20:22:52.742687 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Feb 13 20:22:52.742692 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Feb 13 20:22:52.742697 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Feb 13 20:22:52.742702 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Feb 13 20:22:52.742707 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Feb 13 20:22:52.742712 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Feb 13 20:22:52.742719 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Feb 13 20:22:52.742723 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Feb 13 20:22:52.742728 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Feb 13 20:22:52.742733 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Feb 13 20:22:52.742738 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Feb 13 20:22:52.742743 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Feb 13 20:22:52.742748 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Feb 13 20:22:52.742754 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Feb 13 20:22:52.742758 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Feb 13 20:22:52.742764 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Feb 13 20:22:52.742770 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Feb 13 20:22:52.742775 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Feb 13 20:22:52.742780 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Feb 13 20:22:52.742790 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Feb 13 20:22:52.742796 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Feb 13 20:22:52.742802 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Feb 13 20:22:52.742807 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Feb 13 20:22:52.742812 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Feb 13 20:22:52.742818 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Feb 13 20:22:52.742824 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Feb 13 20:22:52.742829 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Feb 13 20:22:52.742845 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Feb 13 20:22:52.742852 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Feb 13 20:22:52.742860 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Feb 13 20:22:52.742869 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Feb 13 20:22:52.742874 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Feb 13 20:22:52.742880 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Feb 13 20:22:52.742885 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Feb 13 20:22:52.742891 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Feb 13 20:22:52.742898 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Feb 13 20:22:52.742907 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Feb 13 20:22:52.742913 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Feb 13 20:22:52.742919 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Feb 13 20:22:52.742924 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Feb 13 20:22:52.742930 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Feb 13 20:22:52.742935 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Feb 13 20:22:52.742940 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Feb 13 20:22:52.742946 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Feb 13 20:22:52.742951 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Feb 13 20:22:52.742958 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Feb 13 20:22:52.742963 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Feb 13 20:22:52.742969 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Feb 13 20:22:52.742974 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Feb 13 20:22:52.742979 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Feb 13 20:22:52.742985 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Feb 13 20:22:52.742990 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Feb 13 20:22:52.742995 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Feb 13 20:22:52.743001 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Feb 13 20:22:52.743006 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Feb 13 20:22:52.743012 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Feb 13 20:22:52.743018 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Feb 13 20:22:52.743023 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Feb 13 20:22:52.743029 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Feb 13 20:22:52.743034 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Feb 13 20:22:52.743039 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Feb 13 20:22:52.743044 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Feb 13 20:22:52.743049 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Feb 13 20:22:52.743055 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Feb 13 20:22:52.743060 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Feb 13 20:22:52.743067 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Feb 13 20:22:52.743072 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Feb 13 20:22:52.743077 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Feb 13 20:22:52.743083 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Feb 13 20:22:52.743088 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Feb 13 20:22:52.743093 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Feb 13 20:22:52.743099 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Feb 13 20:22:52.743104 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Feb 13 20:22:52.743109 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Feb 13 20:22:52.743114 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Feb 13 20:22:52.743120 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Feb 13 20:22:52.743126 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Feb 13 20:22:52.743132 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Feb 13 20:22:52.743137 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Feb 13 20:22:52.743142 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Feb 13 20:22:52.743147 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Feb 13 20:22:52.743153 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Feb 13 20:22:52.743159 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Feb 13 20:22:52.743164 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Feb 13 20:22:52.743169 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Feb 13 20:22:52.743175 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Feb 13 20:22:52.743181 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Feb 13 20:22:52.743187 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Feb 13 20:22:52.743192 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 20:22:52.745311 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 20:22:52.745323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Feb 13 20:22:52.745329 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Feb 13 20:22:52.745335 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Feb 13 20:22:52.745340 kernel: Zone ranges:
Feb 13 20:22:52.745346 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:22:52.745355 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Feb 13 20:22:52.745360 kernel: Normal empty
Feb 13 20:22:52.745366 kernel: Movable zone start for each node
Feb 13 20:22:52.745371 kernel: Early memory node ranges
Feb 13 20:22:52.745377 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Feb 13 20:22:52.745382 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Feb 13 20:22:52.745388 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Feb 13 20:22:52.745394 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Feb 13 20:22:52.745399 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:22:52.745405 kernel: On node 0, zone DMA: 98 pages
in unavailable ranges Feb 13 20:22:52.745411 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Feb 13 20:22:52.745417 kernel: ACPI: PM-Timer IO Port: 0x1008 Feb 13 20:22:52.745422 kernel: system APIC only can use physical flat Feb 13 20:22:52.745428 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Feb 13 20:22:52.745434 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 13 20:22:52.745439 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Feb 13 20:22:52.745444 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 13 20:22:52.745450 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 13 20:22:52.745455 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Feb 13 20:22:52.745462 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 13 20:22:52.745467 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Feb 13 20:22:52.745473 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 13 20:22:52.745478 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Feb 13 20:22:52.745483 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Feb 13 20:22:52.745489 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 13 20:22:52.745494 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 13 20:22:52.745499 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 13 20:22:52.745505 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 13 20:22:52.745510 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 13 20:22:52.745517 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Feb 13 20:22:52.745522 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Feb 13 20:22:52.745527 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Feb 13 20:22:52.745533 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Feb 13 20:22:52.745538 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Feb 13 
20:22:52.745543 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Feb 13 20:22:52.745549 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Feb 13 20:22:52.745554 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Feb 13 20:22:52.745560 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Feb 13 20:22:52.745565 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Feb 13 20:22:52.745572 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Feb 13 20:22:52.745577 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Feb 13 20:22:52.745582 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Feb 13 20:22:52.745588 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Feb 13 20:22:52.745594 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Feb 13 20:22:52.745599 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Feb 13 20:22:52.745604 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Feb 13 20:22:52.745610 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Feb 13 20:22:52.745615 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Feb 13 20:22:52.745621 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Feb 13 20:22:52.745627 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Feb 13 20:22:52.745633 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Feb 13 20:22:52.745638 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Feb 13 20:22:52.745644 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Feb 13 20:22:52.745649 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Feb 13 20:22:52.745654 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Feb 13 20:22:52.745660 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Feb 13 20:22:52.745665 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Feb 13 20:22:52.745670 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Feb 13 
20:22:52.745677 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Feb 13 20:22:52.745682 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Feb 13 20:22:52.745688 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Feb 13 20:22:52.745693 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Feb 13 20:22:52.745698 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Feb 13 20:22:52.745704 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Feb 13 20:22:52.745709 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Feb 13 20:22:52.745714 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Feb 13 20:22:52.745720 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Feb 13 20:22:52.745727 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Feb 13 20:22:52.745732 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Feb 13 20:22:52.745738 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Feb 13 20:22:52.745743 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Feb 13 20:22:52.745749 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Feb 13 20:22:52.745754 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Feb 13 20:22:52.745760 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Feb 13 20:22:52.745765 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Feb 13 20:22:52.745770 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Feb 13 20:22:52.745776 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Feb 13 20:22:52.745782 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Feb 13 20:22:52.745787 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Feb 13 20:22:52.745793 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Feb 13 20:22:52.745798 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Feb 13 20:22:52.745803 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Feb 13 
20:22:52.745809 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Feb 13 20:22:52.745814 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Feb 13 20:22:52.745820 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Feb 13 20:22:52.745825 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Feb 13 20:22:52.745830 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Feb 13 20:22:52.745837 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Feb 13 20:22:52.745842 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Feb 13 20:22:52.745847 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Feb 13 20:22:52.745853 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Feb 13 20:22:52.745858 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Feb 13 20:22:52.745864 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Feb 13 20:22:52.745869 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Feb 13 20:22:52.745874 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Feb 13 20:22:52.745880 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Feb 13 20:22:52.745886 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Feb 13 20:22:52.745891 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Feb 13 20:22:52.745897 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Feb 13 20:22:52.745902 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Feb 13 20:22:52.745908 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Feb 13 20:22:52.745913 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Feb 13 20:22:52.745918 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Feb 13 20:22:52.745924 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Feb 13 20:22:52.745929 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Feb 13 20:22:52.745934 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Feb 13 
20:22:52.745941 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Feb 13 20:22:52.745946 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Feb 13 20:22:52.745951 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Feb 13 20:22:52.745957 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Feb 13 20:22:52.745962 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Feb 13 20:22:52.745968 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Feb 13 20:22:52.745973 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Feb 13 20:22:52.745978 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Feb 13 20:22:52.745984 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Feb 13 20:22:52.745989 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Feb 13 20:22:52.745995 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Feb 13 20:22:52.746000 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Feb 13 20:22:52.746006 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Feb 13 20:22:52.746011 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Feb 13 20:22:52.746016 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Feb 13 20:22:52.746022 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Feb 13 20:22:52.746027 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Feb 13 20:22:52.746032 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Feb 13 20:22:52.746038 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Feb 13 20:22:52.746043 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Feb 13 20:22:52.746050 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Feb 13 20:22:52.746055 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Feb 13 20:22:52.746060 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Feb 13 20:22:52.746065 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Feb 13 
20:22:52.746071 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Feb 13 20:22:52.746076 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Feb 13 20:22:52.746082 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Feb 13 20:22:52.746087 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Feb 13 20:22:52.746093 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Feb 13 20:22:52.746099 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Feb 13 20:22:52.746104 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Feb 13 20:22:52.746110 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Feb 13 20:22:52.746115 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Feb 13 20:22:52.746121 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Feb 13 20:22:52.746126 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Feb 13 20:22:52.746131 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Feb 13 20:22:52.746137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Feb 13 20:22:52.746142 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 20:22:52.746148 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Feb 13 20:22:52.746154 kernel: TSC deadline timer available Feb 13 20:22:52.746160 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Feb 13 20:22:52.746165 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Feb 13 20:22:52.746171 kernel: Booting paravirtualized kernel on VMware hypervisor Feb 13 20:22:52.746176 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 20:22:52.746182 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Feb 13 20:22:52.746187 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Feb 13 20:22:52.746193 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Feb 13 20:22:52.748220 
kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Feb 13 20:22:52.748230 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Feb 13 20:22:52.748236 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Feb 13 20:22:52.748241 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Feb 13 20:22:52.748247 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Feb 13 20:22:52.748261 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Feb 13 20:22:52.748268 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Feb 13 20:22:52.748275 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Feb 13 20:22:52.748280 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Feb 13 20:22:52.748287 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Feb 13 20:22:52.748293 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Feb 13 20:22:52.748298 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Feb 13 20:22:52.748304 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Feb 13 20:22:52.748310 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Feb 13 20:22:52.748316 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Feb 13 20:22:52.748322 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Feb 13 20:22:52.748328 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:22:52.748336 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 13 20:22:52.748342 kernel: random: crng init done Feb 13 20:22:52.748348 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Feb 13 20:22:52.748354 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Feb 13 20:22:52.748360 kernel: printk: log_buf_len min size: 262144 bytes Feb 13 20:22:52.748365 kernel: printk: log_buf_len: 1048576 bytes Feb 13 20:22:52.748372 kernel: printk: early log buf free: 239648(91%) Feb 13 20:22:52.748378 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 20:22:52.748383 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 20:22:52.748390 kernel: Fallback order for Node 0: 0 Feb 13 20:22:52.748396 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Feb 13 20:22:52.748402 kernel: Policy zone: DMA32 Feb 13 20:22:52.748408 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 20:22:52.748415 kernel: Memory: 1936376K/2096628K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 159992K reserved, 0K cma-reserved) Feb 13 20:22:52.748422 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Feb 13 20:22:52.748429 kernel: ftrace: allocating 37921 entries in 149 pages Feb 13 20:22:52.748435 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 20:22:52.748442 kernel: Dynamic Preempt: voluntary Feb 13 20:22:52.748448 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 20:22:52.748455 kernel: rcu: RCU event tracing is enabled. Feb 13 20:22:52.748461 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Feb 13 20:22:52.748467 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 20:22:52.748472 kernel: Rude variant of Tasks RCU enabled. Feb 13 20:22:52.748478 kernel: Tracing variant of Tasks RCU enabled. Feb 13 20:22:52.748485 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 20:22:52.748491 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Feb 13 20:22:52.748497 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Feb 13 20:22:52.748503 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Feb 13 20:22:52.748509 kernel: Console: colour VGA+ 80x25 Feb 13 20:22:52.748514 kernel: printk: console [tty0] enabled Feb 13 20:22:52.748520 kernel: printk: console [ttyS0] enabled Feb 13 20:22:52.748526 kernel: ACPI: Core revision 20230628 Feb 13 20:22:52.748532 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Feb 13 20:22:52.748540 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 20:22:52.748546 kernel: x2apic enabled Feb 13 20:22:52.748552 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 20:22:52.748558 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 20:22:52.748564 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Feb 13 20:22:52.748570 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Feb 13 20:22:52.748576 kernel: Disabled fast string operations Feb 13 20:22:52.748582 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 20:22:52.748588 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 20:22:52.748595 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 20:22:52.748601 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Feb 13 20:22:52.748607 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Feb 13 20:22:52.748613 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Feb 13 20:22:52.748619 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 20:22:52.748625 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 13 20:22:52.748631 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 13 20:22:52.748637 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 20:22:52.748643 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 20:22:52.748650 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 20:22:52.748656 kernel: SRBDS: Unknown: Dependent on hypervisor status Feb 13 20:22:52.748662 kernel: GDS: Unknown: Dependent on hypervisor status Feb 13 20:22:52.748668 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 20:22:52.748674 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 20:22:52.748680 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 20:22:52.748685 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 20:22:52.748691 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Feb 13 20:22:52.748697 kernel: Freeing SMP alternatives memory: 32K Feb 13 20:22:52.748704 kernel: pid_max: default: 131072 minimum: 1024 Feb 13 20:22:52.748710 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 20:22:52.748716 kernel: landlock: Up and running. Feb 13 20:22:52.748722 kernel: SELinux: Initializing. Feb 13 20:22:52.748727 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 20:22:52.748734 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 20:22:52.748739 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 13 20:22:52.748745 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Feb 13 20:22:52.748752 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Feb 13 20:22:52.748759 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Feb 13 20:22:52.748765 kernel: Performance Events: Skylake events, core PMU driver. Feb 13 20:22:52.748771 kernel: core: CPUID marked event: 'cpu cycles' unavailable Feb 13 20:22:52.748778 kernel: core: CPUID marked event: 'instructions' unavailable Feb 13 20:22:52.748784 kernel: core: CPUID marked event: 'bus cycles' unavailable Feb 13 20:22:52.748789 kernel: core: CPUID marked event: 'cache references' unavailable Feb 13 20:22:52.748795 kernel: core: CPUID marked event: 'cache misses' unavailable Feb 13 20:22:52.748802 kernel: core: CPUID marked event: 'branch instructions' unavailable Feb 13 20:22:52.748808 kernel: core: CPUID marked event: 'branch misses' unavailable Feb 13 20:22:52.748814 kernel: ... version: 1 Feb 13 20:22:52.748820 kernel: ... bit width: 48 Feb 13 20:22:52.748826 kernel: ... generic registers: 4 Feb 13 20:22:52.748832 kernel: ... value mask: 0000ffffffffffff Feb 13 20:22:52.748838 kernel: ... 
max period: 000000007fffffff Feb 13 20:22:52.748843 kernel: ... fixed-purpose events: 0 Feb 13 20:22:52.748849 kernel: ... event mask: 000000000000000f Feb 13 20:22:52.748855 kernel: signal: max sigframe size: 1776 Feb 13 20:22:52.748862 kernel: rcu: Hierarchical SRCU implementation. Feb 13 20:22:52.748868 kernel: rcu: Max phase no-delay instances is 400. Feb 13 20:22:52.748874 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 20:22:52.748880 kernel: smp: Bringing up secondary CPUs ... Feb 13 20:22:52.748886 kernel: smpboot: x86: Booting SMP configuration: Feb 13 20:22:52.748892 kernel: .... node #0, CPUs: #1 Feb 13 20:22:52.748898 kernel: Disabled fast string operations Feb 13 20:22:52.748903 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Feb 13 20:22:52.748909 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Feb 13 20:22:52.748915 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 20:22:52.748922 kernel: smpboot: Max logical packages: 128 Feb 13 20:22:52.748928 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Feb 13 20:22:52.748933 kernel: devtmpfs: initialized Feb 13 20:22:52.748939 kernel: x86/mm: Memory block size: 128MB Feb 13 20:22:52.748945 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Feb 13 20:22:52.748951 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 20:22:52.748957 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Feb 13 20:22:52.748963 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 20:22:52.748969 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 20:22:52.748976 kernel: audit: initializing netlink subsys (disabled) Feb 13 20:22:52.748982 kernel: audit: type=2000 audit(1739478171.068:1): state=initialized audit_enabled=0 res=1 Feb 13 20:22:52.748988 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 20:22:52.748994 
kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 20:22:52.749000 kernel: cpuidle: using governor menu Feb 13 20:22:52.749006 kernel: Simple Boot Flag at 0x36 set to 0x80 Feb 13 20:22:52.749012 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 20:22:52.749017 kernel: dca service started, version 1.12.1 Feb 13 20:22:52.749023 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Feb 13 20:22:52.749030 kernel: PCI: Using configuration type 1 for base access Feb 13 20:22:52.749036 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 13 20:22:52.749042 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 20:22:52.749048 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 20:22:52.749054 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 20:22:52.749059 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 20:22:52.749065 kernel: ACPI: Added _OSI(Module Device) Feb 13 20:22:52.749071 kernel: ACPI: Added _OSI(Processor Device) Feb 13 20:22:52.749077 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 20:22:52.749084 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 20:22:52.749090 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 20:22:52.749096 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Feb 13 20:22:52.749102 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 20:22:52.749107 kernel: ACPI: Interpreter enabled Feb 13 20:22:52.749113 kernel: ACPI: PM: (supports S0 S1 S5) Feb 13 20:22:52.749119 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 20:22:52.749125 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 20:22:52.749131 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 20:22:52.749138 kernel: ACPI: Enabled 4 
GPEs in block 00 to 0F Feb 13 20:22:52.749144 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Feb 13 20:22:52.750030 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 20:22:52.750094 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Feb 13 20:22:52.750146 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Feb 13 20:22:52.750155 kernel: PCI host bridge to bus 0000:00 Feb 13 20:22:52.750316 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 20:22:52.750369 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Feb 13 20:22:52.750414 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 13 20:22:52.750458 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 20:22:52.750502 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Feb 13 20:22:52.750549 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Feb 13 20:22:52.750610 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Feb 13 20:22:52.750670 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Feb 13 20:22:52.750730 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Feb 13 20:22:52.750784 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Feb 13 20:22:52.750835 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Feb 13 20:22:52.750885 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 13 20:22:52.750935 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 13 20:22:52.750986 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 13 20:22:52.751038 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 13 20:22:52.751093 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Feb 13 20:22:52.751143 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed 
by PIIX4 ACPI Feb 13 20:22:52.751192 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Feb 13 20:22:52.753757 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Feb 13 20:22:52.753816 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Feb 13 20:22:52.753873 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Feb 13 20:22:52.753927 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Feb 13 20:22:52.753977 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Feb 13 20:22:52.754026 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Feb 13 20:22:52.754075 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Feb 13 20:22:52.754124 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Feb 13 20:22:52.754173 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 20:22:52.754254 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Feb 13 20:22:52.754310 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.754361 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.754416 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.754467 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.754524 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.754578 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.754634 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.754684 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.754738 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.754788 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.754844 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.754896 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot 
D3cold Feb 13 20:22:52.754950 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755000 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755054 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755103 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755156 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755232 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755289 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755338 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755391 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755441 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755496 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755550 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755604 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755655 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755708 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755758 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755811 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755864 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.755916 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.755967 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.756021 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.756070 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.756124 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.756177 kernel: pci 0000:00:17.1: 
PME# supported from D0 D3hot D3cold Feb 13 20:22:52.759272 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.759338 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.759401 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.759453 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.759507 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.759569 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.759628 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.759699 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.759765 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.759818 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.759873 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.759925 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.759983 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.760034 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.760088 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.760139 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.760194 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.762264 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.762327 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.762378 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.762434 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.762484 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.762540 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Feb 13 
20:22:52.762591 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.762649 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.762699 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.762753 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Feb 13 20:22:52.762803 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Feb 13 20:22:52.762858 kernel: pci_bus 0000:01: extended config space not accessible Feb 13 20:22:52.762912 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 20:22:52.763025 kernel: pci_bus 0000:02: extended config space not accessible Feb 13 20:22:52.763070 kernel: acpiphp: Slot [32] registered Feb 13 20:22:52.763098 kernel: acpiphp: Slot [33] registered Feb 13 20:22:52.763126 kernel: acpiphp: Slot [34] registered Feb 13 20:22:52.763149 kernel: acpiphp: Slot [35] registered Feb 13 20:22:52.763174 kernel: acpiphp: Slot [36] registered Feb 13 20:22:52.763248 kernel: acpiphp: Slot [37] registered Feb 13 20:22:52.763275 kernel: acpiphp: Slot [38] registered Feb 13 20:22:52.763300 kernel: acpiphp: Slot [39] registered Feb 13 20:22:52.763322 kernel: acpiphp: Slot [40] registered Feb 13 20:22:52.763331 kernel: acpiphp: Slot [41] registered Feb 13 20:22:52.763337 kernel: acpiphp: Slot [42] registered Feb 13 20:22:52.763343 kernel: acpiphp: Slot [43] registered Feb 13 20:22:52.763349 kernel: acpiphp: Slot [44] registered Feb 13 20:22:52.763355 kernel: acpiphp: Slot [45] registered Feb 13 20:22:52.763361 kernel: acpiphp: Slot [46] registered Feb 13 20:22:52.763367 kernel: acpiphp: Slot [47] registered Feb 13 20:22:52.763373 kernel: acpiphp: Slot [48] registered Feb 13 20:22:52.763378 kernel: acpiphp: Slot [49] registered Feb 13 20:22:52.763386 kernel: acpiphp: Slot [50] registered Feb 13 20:22:52.763391 kernel: acpiphp: Slot [51] registered Feb 13 20:22:52.763397 kernel: acpiphp: Slot [52] registered Feb 13 20:22:52.763403 kernel: acpiphp: Slot [53] registered 
Feb 13 20:22:52.763409 kernel: acpiphp: Slot [54] registered Feb 13 20:22:52.763415 kernel: acpiphp: Slot [55] registered Feb 13 20:22:52.763421 kernel: acpiphp: Slot [56] registered Feb 13 20:22:52.763427 kernel: acpiphp: Slot [57] registered Feb 13 20:22:52.763432 kernel: acpiphp: Slot [58] registered Feb 13 20:22:52.763438 kernel: acpiphp: Slot [59] registered Feb 13 20:22:52.763445 kernel: acpiphp: Slot [60] registered Feb 13 20:22:52.763451 kernel: acpiphp: Slot [61] registered Feb 13 20:22:52.763461 kernel: acpiphp: Slot [62] registered Feb 13 20:22:52.763467 kernel: acpiphp: Slot [63] registered Feb 13 20:22:52.763530 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Feb 13 20:22:52.763582 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Feb 13 20:22:52.763630 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Feb 13 20:22:52.763678 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 13 20:22:52.763729 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Feb 13 20:22:52.763778 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Feb 13 20:22:52.763826 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Feb 13 20:22:52.763875 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Feb 13 20:22:52.763923 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Feb 13 20:22:52.763979 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Feb 13 20:22:52.764030 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Feb 13 20:22:52.764085 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Feb 13 20:22:52.764136 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 13 20:22:52.764186 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 13 
20:22:52.765050 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Feb 13 20:22:52.765113 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Feb 13 20:22:52.765168 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Feb 13 20:22:52.765230 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Feb 13 20:22:52.765283 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Feb 13 20:22:52.765341 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Feb 13 20:22:52.765390 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Feb 13 20:22:52.765439 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Feb 13 20:22:52.765490 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Feb 13 20:22:52.765539 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Feb 13 20:22:52.765587 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Feb 13 20:22:52.765636 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Feb 13 20:22:52.765689 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Feb 13 20:22:52.765737 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Feb 13 20:22:52.765787 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Feb 13 20:22:52.765838 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Feb 13 20:22:52.765886 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Feb 13 20:22:52.765935 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 13 20:22:52.765988 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Feb 13 20:22:52.766036 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Feb 13 20:22:52.766085 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Feb 13 20:22:52.766135 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Feb 13 20:22:52.766184 kernel: pci 0000:00:15.6: bridge window [mem 
0xfbd00000-0xfbdfffff] Feb 13 20:22:52.766242 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Feb 13 20:22:52.766295 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Feb 13 20:22:52.766344 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Feb 13 20:22:52.766393 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Feb 13 20:22:52.766450 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Feb 13 20:22:52.766502 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Feb 13 20:22:52.766552 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Feb 13 20:22:52.766603 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Feb 13 20:22:52.766652 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Feb 13 20:22:52.766704 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 13 20:22:52.766755 kernel: pci 0000:0b:00.0: supports D1 D2 Feb 13 20:22:52.766805 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 20:22:52.766856 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Feb 13 20:22:52.766906 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Feb 13 20:22:52.766954 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Feb 13 20:22:52.767004 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Feb 13 20:22:52.767054 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Feb 13 20:22:52.767106 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Feb 13 20:22:52.767155 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Feb 13 20:22:52.767759 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Feb 13 20:22:52.767814 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Feb 13 20:22:52.767862 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Feb 13 20:22:52.767909 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Feb 13 20:22:52.767957 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Feb 13 20:22:52.768010 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Feb 13 20:22:52.768058 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Feb 13 20:22:52.768125 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 13 20:22:52.768223 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Feb 13 20:22:52.768292 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Feb 13 20:22:52.768345 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 13 20:22:52.768396 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Feb 13 20:22:52.768444 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Feb 13 20:22:52.768495 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Feb 13 20:22:52.768545 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Feb 13 20:22:52.768593 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Feb 13 20:22:52.768641 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Feb 13 20:22:52.768690 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Feb 13 20:22:52.768738 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Feb 13 20:22:52.768786 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 13 20:22:52.768835 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Feb 13 20:22:52.768885 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Feb 13 20:22:52.768933 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Feb 13 20:22:52.768981 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 13 20:22:52.769031 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Feb 13 20:22:52.769080 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Feb 13 20:22:52.769147 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Feb 13 20:22:52.769201 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Feb 13 20:22:52.769254 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Feb 13 20:22:52.769306 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Feb 13 20:22:52.769361 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Feb 13 20:22:52.769425 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Feb 13 20:22:52.769474 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Feb 13 20:22:52.769521 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Feb 13 20:22:52.769569 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 13 20:22:52.769618 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Feb 13 20:22:52.769734 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Feb 13 20:22:52.769785 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 13 20:22:52.769834 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Feb 13 20:22:52.769882 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Feb 13 20:22:52.769930 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Feb 13 20:22:52.769980 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Feb 13 20:22:52.770029 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Feb 13 20:22:52.770077 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Feb 13 20:22:52.770126 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Feb 13 20:22:52.770178 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Feb 13 20:22:52.770266 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 13 20:22:52.770317 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Feb 13 20:22:52.770365 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Feb 13 20:22:52.770413 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Feb 13 20:22:52.770461 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Feb 13 20:22:52.770511 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Feb 13 20:22:52.770561 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Feb 13 20:22:52.770609 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Feb 13 20:22:52.770657 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Feb 13 20:22:52.770704 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Feb 13 20:22:52.770752 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Feb 13 20:22:52.770800 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Feb 13 20:22:52.770848 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Feb 13 20:22:52.770895 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Feb 13 20:22:52.770945 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 13 20:22:52.770993 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Feb 13 
20:22:52.771041 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Feb 13 20:22:52.771090 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Feb 13 20:22:52.771140 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Feb 13 20:22:52.771187 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Feb 13 20:22:52.771319 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Feb 13 20:22:52.771372 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Feb 13 20:22:52.771424 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Feb 13 20:22:52.771472 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Feb 13 20:22:52.771521 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Feb 13 20:22:52.771569 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Feb 13 20:22:52.771617 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 13 20:22:52.771625 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Feb 13 20:22:52.771632 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Feb 13 20:22:52.771638 kernel: ACPI: PCI: Interrupt link LNKB disabled Feb 13 20:22:52.771645 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 20:22:52.771651 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Feb 13 20:22:52.771657 kernel: iommu: Default domain type: Translated Feb 13 20:22:52.771663 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 20:22:52.771669 kernel: PCI: Using ACPI for IRQ routing Feb 13 20:22:52.771675 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 20:22:52.771681 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Feb 13 20:22:52.771687 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Feb 13 20:22:52.771734 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Feb 13 20:22:52.771785 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Feb 13 20:22:52.771833 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 20:22:52.771841 kernel: vgaarb: loaded Feb 13 20:22:52.771847 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Feb 13 20:22:52.771853 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Feb 13 20:22:52.771859 kernel: clocksource: Switched to clocksource tsc-early Feb 13 20:22:52.771865 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 20:22:52.771871 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:22:52.771877 kernel: pnp: PnP ACPI init Feb 13 20:22:52.771930 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Feb 13 20:22:52.771975 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Feb 13 20:22:52.772019 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Feb 13 20:22:52.772066 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Feb 13 20:22:52.772112 kernel: pnp 00:06: [dma 2] Feb 13 20:22:52.772162 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Feb 13 20:22:52.772214 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Feb 13 20:22:52.772261 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Feb 13 20:22:52.772269 kernel: pnp: PnP ACPI: found 8 devices Feb 13 20:22:52.772275 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 20:22:52.772281 kernel: NET: Registered PF_INET protocol family Feb 13 20:22:52.772287 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 20:22:52.772293 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 13 20:22:52.772299 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:22:52.772305 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 20:22:52.772312 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 20:22:52.772322 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 13 20:22:52.772328 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 20:22:52.772333 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 20:22:52.772339 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:22:52.772345 kernel: NET: Registered PF_XDP protocol family Feb 13 20:22:52.772394 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 13 20:22:52.772444 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Feb 13 20:22:52.772496 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Feb 13 20:22:52.772544 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Feb 13 20:22:52.772592 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Feb 13 20:22:52.772639 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Feb 13 20:22:52.772687 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Feb 13 20:22:52.772736 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Feb 13 20:22:52.772787 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Feb 13 20:22:52.772836 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Feb 13 20:22:52.772885 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Feb 13 20:22:52.772934 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Feb 13 20:22:52.772982 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Feb 13 
20:22:52.773032 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Feb 13 20:22:52.773083 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Feb 13 20:22:52.773131 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Feb 13 20:22:52.773180 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Feb 13 20:22:52.773258 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Feb 13 20:22:52.773308 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Feb 13 20:22:52.773359 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Feb 13 20:22:52.773406 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Feb 13 20:22:52.773454 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Feb 13 20:22:52.773502 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Feb 13 20:22:52.773550 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Feb 13 20:22:52.773597 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Feb 13 20:22:52.773645 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.773697 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.773745 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.773792 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.773839 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.773886 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.773934 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.773982 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Feb 
13 20:22:52.774030 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774080 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774130 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774178 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774239 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774289 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774337 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774386 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774433 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774485 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774533 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774580 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774628 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774676 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774723 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774771 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774820 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774870 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.774918 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.774966 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775014 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775062 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775110 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775158 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775252 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775304 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775361 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775409 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775456 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775504 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775552 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775600 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775648 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775699 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775747 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775795 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775842 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775889 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.775937 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.775985 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776032 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776079 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776149 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776203 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776273 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Feb 13 20:22:52.776324 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776372 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776420 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776469 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776517 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776567 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776615 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776666 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776714 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776763 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776811 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776859 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.776907 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.776956 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777004 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777052 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777103 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777168 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777261 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777310 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777356 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777404 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777451 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777497 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777545 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777592 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777643 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777690 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777739 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777787 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Feb 13 20:22:52.777835 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Feb 13 20:22:52.777884 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 20:22:52.777934 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Feb 13 20:22:52.777982 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Feb 13 20:22:52.778028 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Feb 13 20:22:52.778078 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 13 20:22:52.778163 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Feb 13 20:22:52.778224 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Feb 13 20:22:52.778274 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Feb 13 20:22:52.778330 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Feb 13 20:22:52.778382 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Feb 13 20:22:52.778432 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Feb 13 20:22:52.778481 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Feb 13 20:22:52.778533 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Feb 13 20:22:52.778581 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Feb 13 
20:22:52.778631 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Feb 13 20:22:52.778679 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Feb 13 20:22:52.778726 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Feb 13 20:22:52.778774 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Feb 13 20:22:52.778822 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Feb 13 20:22:52.778870 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Feb 13 20:22:52.778919 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Feb 13 20:22:52.778966 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Feb 13 20:22:52.779017 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Feb 13 20:22:52.779065 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 13 20:22:52.779134 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Feb 13 20:22:52.779183 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Feb 13 20:22:52.779242 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Feb 13 20:22:52.779294 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Feb 13 20:22:52.779363 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Feb 13 20:22:52.779415 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Feb 13 20:22:52.779464 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Feb 13 20:22:52.779513 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Feb 13 20:22:52.779562 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Feb 13 20:22:52.779614 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Feb 13 20:22:52.779664 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Feb 13 20:22:52.779714 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Feb 13 20:22:52.779764 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Feb 13 20:22:52.779816 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Feb 13 20:22:52.779866 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Feb 13 20:22:52.779916 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Feb 13 20:22:52.779965 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Feb 13 20:22:52.780014 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Feb 13 20:22:52.780064 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Feb 13 20:22:52.780113 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Feb 13 20:22:52.780177 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Feb 13 20:22:52.782304 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Feb 13 20:22:52.782370 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Feb 13 20:22:52.782422 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Feb 13 20:22:52.782472 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 13 20:22:52.782521 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Feb 13 20:22:52.782569 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Feb 13 20:22:52.782617 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 13 20:22:52.782664 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Feb 13 20:22:52.782713 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Feb 13 20:22:52.782761 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Feb 13 20:22:52.782811 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Feb 13 20:22:52.782858 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Feb 13 20:22:52.782906 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Feb 13 20:22:52.782953 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Feb 13 20:22:52.783001 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Feb 13 20:22:52.783048 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 13 20:22:52.783117 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Feb 13 20:22:52.783182 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Feb 13 20:22:52.783290 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Feb 13 20:22:52.783345 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 13 20:22:52.783399 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Feb 13 20:22:52.783447 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Feb 13 20:22:52.783495 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Feb 13 20:22:52.783543 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Feb 13 20:22:52.783592 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Feb 13 20:22:52.783641 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Feb 13 20:22:52.783689 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Feb 13 20:22:52.783737 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Feb 13 20:22:52.783786 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Feb 13 20:22:52.783836 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Feb 13 20:22:52.783885 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 13 20:22:52.783933 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Feb 13 20:22:52.783981 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Feb 13 20:22:52.784030 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 13 20:22:52.784078 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Feb 13 20:22:52.784161 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Feb 13 20:22:52.784677 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Feb 13 
20:22:52.784736 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Feb 13 20:22:52.784788 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Feb 13 20:22:52.784841 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Feb 13 20:22:52.784890 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Feb 13 20:22:52.784939 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Feb 13 20:22:52.784987 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 13 20:22:52.785036 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Feb 13 20:22:52.785085 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Feb 13 20:22:52.785187 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Feb 13 20:22:52.785265 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Feb 13 20:22:52.785321 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Feb 13 20:22:52.785379 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Feb 13 20:22:52.785428 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Feb 13 20:22:52.785476 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Feb 13 20:22:52.785525 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Feb 13 20:22:52.785573 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Feb 13 20:22:52.785621 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Feb 13 20:22:52.785670 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Feb 13 20:22:52.785718 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Feb 13 20:22:52.785766 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 13 20:22:52.785814 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Feb 13 20:22:52.785865 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Feb 13 20:22:52.785914 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Feb 13 20:22:52.785963 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Feb 13 20:22:52.786011 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Feb 13 20:22:52.786059 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Feb 13 20:22:52.786126 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Feb 13 20:22:52.786190 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Feb 13 20:22:52.786288 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Feb 13 20:22:52.786357 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Feb 13 20:22:52.786410 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Feb 13 20:22:52.786459 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 13 20:22:52.786506 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Feb 13 20:22:52.786550 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Feb 13 20:22:52.786612 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Feb 13 20:22:52.786670 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Feb 13 20:22:52.786712 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Feb 13 20:22:52.786759 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Feb 13 20:22:52.786807 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Feb 13 20:22:52.786850 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 13 20:22:52.786893 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Feb 13 20:22:52.786938 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Feb 13 20:22:52.786982 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Feb 13 20:22:52.787026 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Feb 13 20:22:52.787070 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Feb 13 20:22:52.787138 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Feb 13 20:22:52.787213 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Feb 13 20:22:52.787262 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Feb 13 20:22:52.787310 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Feb 13 20:22:52.787355 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Feb 13 20:22:52.787400 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Feb 13 20:22:52.787449 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Feb 13 20:22:52.787497 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Feb 13 20:22:52.787541 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Feb 13 20:22:52.787588 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Feb 13 20:22:52.787633 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Feb 13 20:22:52.787683 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Feb 13 20:22:52.787728 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 13 20:22:52.787777 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Feb 13 20:22:52.787824 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Feb 13 20:22:52.787872 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Feb 13 20:22:52.787918 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Feb 13 20:22:52.787970 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Feb 13 20:22:52.788024 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Feb 13 20:22:52.788079 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Feb 13 20:22:52.788159 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Feb 13 20:22:52.788474 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Feb 13 20:22:52.788530 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Feb 13 20:22:52.788576 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Feb 13 20:22:52.788621 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Feb 13 20:22:52.788670 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Feb 13 20:22:52.788719 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Feb 13 20:22:52.788767 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Feb 13 20:22:52.788817 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Feb 13 20:22:52.788862 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 13 20:22:52.788910 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Feb 13 20:22:52.789471 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 13 20:22:52.789536 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Feb 13 20:22:52.789588 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Feb 13 20:22:52.789638 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Feb 13 20:22:52.789684 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Feb 13 20:22:52.789732 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Feb 13 20:22:52.789777 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 13 20:22:52.789828 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Feb 13 20:22:52.789875 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Feb 13 20:22:52.789919 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 13 20:22:52.789967 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Feb 13 20:22:52.790013 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Feb 13 20:22:52.790057 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Feb 13 20:22:52.790124 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Feb 13 20:22:52.790188 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Feb 13 20:22:52.790410 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Feb 13 20:22:52.790461 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Feb 13 20:22:52.790507 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 13 20:22:52.790556 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Feb 13 20:22:52.790601 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 13 20:22:52.790650 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Feb 13 20:22:52.790698 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Feb 13 20:22:52.790746 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Feb 13 20:22:52.790792 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Feb 13 20:22:52.790841 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Feb 13 20:22:52.790886 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 13 20:22:52.790940 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Feb 13 20:22:52.790992 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Feb 13 20:22:52.791038 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Feb 13 20:22:52.791094 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Feb 13 20:22:52.791139 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Feb 13 20:22:52.791642 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Feb 13 20:22:52.791699 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Feb 13 20:22:52.791750 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Feb 13 20:22:52.791801 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Feb 13 20:22:52.791847 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Feb 13 20:22:52.791897 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Feb 13 20:22:52.791943 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Feb 13 20:22:52.791993 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Feb 13 20:22:52.792042 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Feb 13 20:22:52.792093 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Feb 13 20:22:52.792139 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Feb 13 20:22:52.792189 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Feb 13 20:22:52.792247 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 13 20:22:52.792302 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 20:22:52.792317 kernel: PCI: CLS 32 bytes, default 64 Feb 13 20:22:52.792325 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 20:22:52.792332 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Feb 13 20:22:52.792338 kernel: clocksource: Switched to clocksource tsc Feb 13 20:22:52.792345 kernel: Initialise system trusted keyrings Feb 13 20:22:52.792351 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 20:22:52.792358 kernel: Key type asymmetric registered Feb 13 20:22:52.792364 kernel: Asymmetric key parser 'x509' registered Feb 13 20:22:52.792370 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 20:22:52.792378 kernel: io scheduler mq-deadline registered Feb 13 20:22:52.792384 kernel: io scheduler kyber registered Feb 13 20:22:52.792391 kernel: io scheduler bfq registered Feb 13 20:22:52.792445 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Feb 13 20:22:52.792497 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.792548 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Feb 13 20:22:52.792598 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.792649 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Feb 13 20:22:52.792699 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.792752 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Feb 13 20:22:52.792803 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.792854 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Feb 13 20:22:52.792903 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.792953 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Feb 13 20:22:52.793006 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.793055 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Feb 13 20:22:52.793105 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.793170 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Feb 13 20:22:52.793226 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.793281 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Feb 13 20:22:52.795470 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.795530 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Feb 13 20:22:52.795581 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.795632 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Feb 13 20:22:52.795683 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.795732 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Feb 13 20:22:52.795785 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.795835 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Feb 13 20:22:52.795884 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.795933 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Feb 13 20:22:52.795982 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.796031 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Feb 13 20:22:52.796084 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.796169 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Feb 13 20:22:52.796576 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.796632 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Feb 13 20:22:52.796684 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.796737 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Feb 13 20:22:52.796787 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.796838 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Feb 13 20:22:52.796887 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.796936 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Feb 13 20:22:52.796985 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797035 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Feb 13 20:22:52.797104 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797170 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Feb 13 20:22:52.797232 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797283 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Feb 13 20:22:52.797333 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797385 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Feb 13 20:22:52.797434 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797483 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Feb 13 20:22:52.797532 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797581 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Feb 13 20:22:52.797629 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797679 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Feb 13 20:22:52.797731 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797780 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Feb 13 20:22:52.797828 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797878 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Feb 13 20:22:52.797927 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.797979 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Feb 13 20:22:52.798028 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.798077 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Feb 13 20:22:52.798146 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.798231 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Feb 13 20:22:52.798289 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 13 20:22:52.798298 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Feb 13 20:22:52.798305 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:22:52.798312 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 20:22:52.798321 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Feb 13 20:22:52.798328 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 20:22:52.798334 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 20:22:52.798388 kernel: rtc_cmos 00:01: registered as rtc0 Feb 13 20:22:52.798438 kernel: rtc_cmos 00:01: setting system clock to 2025-02-13T20:22:52 UTC (1739478172) Feb 13 20:22:52.798483 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Feb 13 20:22:52.798491 kernel: intel_pstate: CPU model not supported Feb 13 20:22:52.798498 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 20:22:52.798504 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:22:52.798510 kernel: Segment Routing with IPv6 Feb 13 20:22:52.798517 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:22:52.798523 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:22:52.798531 kernel: Key type dns_resolver registered Feb 13 20:22:52.798537 kernel: IPI shorthand broadcast: enabled Feb 13 20:22:52.798544 kernel: sched_clock: Marking stable (906004910, 228238216)->(1193575914, -59332788) Feb 13 20:22:52.798550 kernel: registered taskstats version 1 Feb 13 20:22:52.798557 kernel: Loading compiled-in X.509 certificates Feb 13 20:22:52.798563 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 20:22:52.798569 kernel: Key type .fscrypt registered Feb 13 20:22:52.798575 kernel: Key type fscrypt-provisioning registered Feb 13 20:22:52.798581 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 20:22:52.798589 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:22:52.798596 kernel: ima: No architecture policies found Feb 13 20:22:52.798602 kernel: clk: Disabling unused clocks Feb 13 20:22:52.798608 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 20:22:52.798614 kernel: Write protecting the kernel read-only data: 36864k Feb 13 20:22:52.798621 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 20:22:52.798627 kernel: Run /init as init process Feb 13 20:22:52.798633 kernel: with arguments: Feb 13 20:22:52.798639 kernel: /init Feb 13 20:22:52.798647 kernel: with environment: Feb 13 20:22:52.798653 kernel: HOME=/ Feb 13 20:22:52.798659 kernel: TERM=linux Feb 13 20:22:52.798665 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:22:52.798672 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:22:52.798680 systemd[1]: Detected virtualization vmware. Feb 13 20:22:52.798687 systemd[1]: Detected architecture x86-64. Feb 13 20:22:52.798693 systemd[1]: Running in initrd. Feb 13 20:22:52.798701 systemd[1]: No hostname configured, using default hostname. Feb 13 20:22:52.798707 systemd[1]: Hostname set to . Feb 13 20:22:52.798714 systemd[1]: Initializing machine ID from random generator. Feb 13 20:22:52.798720 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:22:52.798726 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:22:52.798733 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 20:22:52.798740 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:22:52.798747 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:22:52.798755 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:22:52.798761 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:22:52.798769 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:22:52.798776 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:22:52.798783 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:22:52.798789 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:22:52.798796 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:22:52.798804 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:22:52.798810 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:22:52.798817 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:22:52.798823 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:22:52.798830 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:22:52.798837 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:22:52.798843 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:22:52.798850 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:22:52.798857 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:22:52.798864 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 20:22:52.798871 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:22:52.798877 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:22:52.798884 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:22:52.798890 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:22:52.798898 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:22:52.798904 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:22:52.798911 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:22:52.798919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:22:52.798925 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:22:52.798943 systemd-journald[215]: Collecting audit messages is disabled. Feb 13 20:22:52.798961 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:22:52.798969 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:22:52.798977 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:22:52.798983 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:22:52.798990 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:22:52.798998 kernel: Bridge firewalling registered Feb 13 20:22:52.799005 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:22:52.799012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:22:52.799020 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:22:52.799027 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Feb 13 20:22:52.799033 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:22:52.799040 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:22:52.799047 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:22:52.799053 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:22:52.799063 systemd-journald[215]: Journal started
Feb 13 20:22:52.799078 systemd-journald[215]: Runtime Journal (/run/log/journal/14aad04a7cba4066bbfd8810d7986c69) is 4.8M, max 38.6M, 33.8M free.
Feb 13 20:22:52.742153 systemd-modules-load[216]: Inserted module 'overlay'
Feb 13 20:22:52.799292 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:22:52.764705 systemd-modules-load[216]: Inserted module 'br_netfilter'
Feb 13 20:22:52.806274 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:22:52.808056 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:22:52.814328 dracut-cmdline[245]: dracut-dracut-053
Feb 13 20:22:52.813480 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:22:52.814704 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:22:52.816916 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:22:52.833493 systemd-resolved[258]: Positive Trust Anchors:
Feb 13 20:22:52.833502 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:22:52.833524 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:22:52.835508 systemd-resolved[258]: Defaulting to hostname 'linux'.
Feb 13 20:22:52.836077 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:22:52.836223 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:22:52.858213 kernel: SCSI subsystem initialized
Feb 13 20:22:52.864207 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:22:52.871218 kernel: iscsi: registered transport (tcp)
Feb 13 20:22:52.884531 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:22:52.884568 kernel: QLogic iSCSI HBA Driver
Feb 13 20:22:52.904047 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:22:52.910309 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:22:52.925962 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:22:52.926018 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:22:52.926027 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:22:52.957253 kernel: raid6: avx2x4 gen() 51977 MB/s
Feb 13 20:22:52.974243 kernel: raid6: avx2x2 gen() 52154 MB/s
Feb 13 20:22:52.991441 kernel: raid6: avx2x1 gen() 44238 MB/s
Feb 13 20:22:52.991484 kernel: raid6: using algorithm avx2x2 gen() 52154 MB/s
Feb 13 20:22:53.009451 kernel: raid6: .... xor() 31415 MB/s, rmw enabled
Feb 13 20:22:53.009496 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 20:22:53.023213 kernel: xor: automatically using best checksumming function avx
Feb 13 20:22:53.126217 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:22:53.131722 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:22:53.136280 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:22:53.144385 systemd-udevd[432]: Using default interface naming scheme 'v255'.
Feb 13 20:22:53.147342 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:22:53.157426 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:22:53.164565 dracut-pre-trigger[437]: rd.md=0: removing MD RAID activation
Feb 13 20:22:53.179642 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:22:53.184289 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:22:53.252596 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:22:53.258374 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:22:53.265243 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:22:53.266023 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:22:53.266854 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:22:53.267086 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:22:53.273314 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:22:53.281445 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:22:53.317209 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Feb 13 20:22:53.319207 kernel: libata version 3.00 loaded.
Feb 13 20:22:53.322207 kernel: ata_piix 0000:00:07.1: version 2.13
Feb 13 20:22:53.333189 kernel: scsi host0: ata_piix
Feb 13 20:22:53.333285 kernel: scsi host1: ata_piix
Feb 13 20:22:53.333361 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Feb 13 20:22:53.333378 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Feb 13 20:22:53.333386 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Feb 13 20:22:53.334546 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Feb 13 20:22:53.334560 kernel: vmw_pvscsi: using 64bit dma
Feb 13 20:22:53.334572 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Feb 13 20:22:53.335797 kernel: vmw_pvscsi: max_id: 16
Feb 13 20:22:53.335817 kernel: vmw_pvscsi: setting ring_pages to 8
Feb 13 20:22:53.338708 kernel: vmw_pvscsi: enabling reqCallThreshold
Feb 13 20:22:53.338735 kernel: vmw_pvscsi: driver-based request coalescing enabled
Feb 13 20:22:53.338751 kernel: vmw_pvscsi: using MSI-X
Feb 13 20:22:53.341206 kernel: scsi host2: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Feb 13 20:22:53.345692 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #2
Feb 13 20:22:53.347581 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:22:53.347597 kernel: scsi 2:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Feb 13 20:22:53.353097 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:22:53.353555 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:22:53.353943 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:22:53.354236 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:22:53.354425 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:22:53.354698 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:22:53.359372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:22:53.370459 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:22:53.374296 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:22:53.387526 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:22:53.485216 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Feb 13 20:22:53.504339 kernel: scsi 1:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Feb 13 20:22:53.511039 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Feb 13 20:22:53.514209 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 20:22:53.514233 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:22:53.527220 kernel: sd 2:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Feb 13 20:22:53.537283 kernel: sd 2:0:0:0: [sda] Write Protect is off
Feb 13 20:22:53.537364 kernel: sd 2:0:0:0: [sda] Mode Sense: 31 00 00 00
Feb 13 20:22:53.537426 kernel: sd 2:0:0:0: [sda] Cache data unavailable
Feb 13 20:22:53.537493 kernel: sd 2:0:0:0: [sda] Assuming drive cache: write through
Feb 13 20:22:53.537554 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Feb 13 20:22:53.546160 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 20:22:53.546172 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:22:53.546180 kernel: sd 2:0:0:0: [sda] Attached SCSI disk
Feb 13 20:22:53.546279 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Feb 13 20:22:53.581210 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (485)
Feb 13 20:22:53.584143 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Feb 13 20:22:53.589232 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (483)
Feb 13 20:22:53.588990 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Feb 13 20:22:53.592040 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Feb 13 20:22:53.594219 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Feb 13 20:22:53.594511 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Feb 13 20:22:53.598512 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:22:53.623219 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:22:53.627209 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:22:54.630709 disk-uuid[587]: The operation has completed successfully.
Feb 13 20:22:54.631356 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:22:54.676994 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:22:54.677081 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:22:54.682318 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:22:54.684755 sh[604]: Success
Feb 13 20:22:54.697227 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 20:22:54.776254 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:22:54.777425 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:22:54.777755 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:22:54.794644 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 20:22:54.794683 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:22:54.794692 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:22:54.796583 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:22:54.796609 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:22:54.804225 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 20:22:54.805146 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:22:54.814373 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Feb 13 20:22:54.815846 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:22:54.843226 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:22:54.843269 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:22:54.843278 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:22:54.848212 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 20:22:54.856174 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:22:54.857262 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:22:54.866738 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:22:54.871260 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:22:54.889421 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Feb 13 20:22:54.894277 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:22:54.949606 ignition[664]: Ignition 2.19.0
Feb 13 20:22:54.949613 ignition[664]: Stage: fetch-offline
Feb 13 20:22:54.949633 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:22:54.949639 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 13 20:22:54.949694 ignition[664]: parsed url from cmdline: ""
Feb 13 20:22:54.949696 ignition[664]: no config URL provided
Feb 13 20:22:54.949699 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:22:54.949703 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:22:54.950060 ignition[664]: config successfully fetched
Feb 13 20:22:54.950077 ignition[664]: parsing config with SHA512: f7f3e6b80c8fa390a3b629a0052a142e7b16264b82042650ccdbb5895c0f94dfc25f042ae1610eaed95f765132c90ffc360f50377bbc1de25558a87ceb59a35f
Feb 13 20:22:54.954857 unknown[664]: fetched base config from "system"
Feb 13 20:22:54.954864 unknown[664]: fetched user config from "vmware"
Feb 13 20:22:54.955150 ignition[664]: fetch-offline: fetch-offline passed
Feb 13 20:22:54.955192 ignition[664]: Ignition finished successfully
Feb 13 20:22:54.956171 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:22:54.968055 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:22:54.972322 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:22:54.983971 systemd-networkd[799]: lo: Link UP
Feb 13 20:22:54.984210 systemd-networkd[799]: lo: Gained carrier
Feb 13 20:22:54.985009 systemd-networkd[799]: Enumeration completed
Feb 13 20:22:54.985178 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:22:54.985450 systemd-networkd[799]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Feb 13 20:22:54.986648 systemd[1]: Reached target network.target - Network.
Feb 13 20:22:54.987747 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 20:22:54.988832 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Feb 13 20:22:54.988939 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Feb 13 20:22:54.988897 systemd-networkd[799]: ens192: Link UP
Feb 13 20:22:54.988900 systemd-networkd[799]: ens192: Gained carrier
Feb 13 20:22:54.993537 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:22:55.001818 ignition[801]: Ignition 2.19.0
Feb 13 20:22:55.002064 ignition[801]: Stage: kargs
Feb 13 20:22:55.002173 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:22:55.002180 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 13 20:22:55.002884 ignition[801]: kargs: kargs passed
Feb 13 20:22:55.002910 ignition[801]: Ignition finished successfully
Feb 13 20:22:55.004260 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:22:55.008317 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:22:55.015265 ignition[808]: Ignition 2.19.0
Feb 13 20:22:55.015272 ignition[808]: Stage: disks
Feb 13 20:22:55.015387 ignition[808]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:22:55.015393 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 13 20:22:55.016385 ignition[808]: disks: disks passed
Feb 13 20:22:55.016418 ignition[808]: Ignition finished successfully
Feb 13 20:22:55.017399 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:22:55.017700 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:22:55.017908 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:22:55.018143 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:22:55.018352 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:22:55.018562 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:22:55.022328 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:22:55.033008 systemd-fsck[816]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 20:22:55.034648 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:22:55.037308 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:22:55.095209 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none.
Feb 13 20:22:55.095496 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:22:55.095866 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:22:55.107301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:22:55.108860 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:22:55.109130 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:22:55.109156 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:22:55.109170 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:22:55.112559 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:22:55.113126 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:22:55.116317 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (824)
Feb 13 20:22:55.118668 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:22:55.118688 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:22:55.118697 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:22:55.122216 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 20:22:55.123406 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:22:55.144252 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:22:55.146601 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:22:55.148704 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:22:55.150971 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:22:55.201622 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:22:55.205391 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:22:55.206622 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:22:55.211207 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:22:55.226435 ignition[937]: INFO : Ignition 2.19.0
Feb 13 20:22:55.226435 ignition[937]: INFO : Stage: mount
Feb 13 20:22:55.226806 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:22:55.226806 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 13 20:22:55.227422 ignition[937]: INFO : mount: mount passed
Feb 13 20:22:55.227855 ignition[937]: INFO : Ignition finished successfully
Feb 13 20:22:55.228113 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:22:55.232269 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:22:55.276168 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:22:55.295356 systemd-resolved[258]: Detected conflict on linux IN A 139.178.70.108
Feb 13 20:22:55.295366 systemd-resolved[258]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Feb 13 20:22:55.793057 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:22:55.800417 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:22:55.809216 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (948)
Feb 13 20:22:55.812229 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:22:55.812250 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:22:55.812266 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:22:55.817214 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 20:22:55.817851 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:22:55.835764 ignition[965]: INFO : Ignition 2.19.0
Feb 13 20:22:55.835764 ignition[965]: INFO : Stage: files
Feb 13 20:22:55.836236 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:22:55.836236 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 13 20:22:55.836586 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:22:55.837337 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:22:55.837337 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:22:55.839710 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:22:55.839921 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:22:55.840127 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:22:55.840001 unknown[965]: wrote ssh authorized keys file for user: core
Feb 13 20:22:55.841822 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:22:55.842050 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 20:22:55.876040 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:22:55.957942 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:22:55.958227 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:22:55.958227 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:22:55.958227 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:22:55.958227 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:22:55.958227 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:22:55.958227 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:22:55.959170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 20:22:56.339337 systemd-networkd[799]: ens192: Gained IPv6LL
Feb 13 20:22:56.458733 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 20:22:56.674930 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:22:56.674930 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Feb 13 20:22:56.675385 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Feb 13 20:22:56.675385 ignition[965]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 20:22:56.675385 ignition[965]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:22:56.675857 ignition[965]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:22:56.675857 ignition[965]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 20:22:56.675857 ignition[965]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 20:22:56.675857 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:22:56.675857 ignition[965]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:22:56.675857 ignition[965]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 20:22:56.675857 ignition[965]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:22:56.714750 ignition[965]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:22:56.716959 ignition[965]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:22:56.717133 ignition[965]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:22:56.717133 ignition[965]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:22:56.717133 ignition[965]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:22:56.718106 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:22:56.718106 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:22:56.718106 ignition[965]: INFO : files: files passed
Feb 13 20:22:56.718106 ignition[965]: INFO : Ignition finished successfully
Feb 13 20:22:56.717841 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:22:56.722298 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:22:56.723375 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:22:56.724481 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:22:56.724660 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:22:56.729876 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:22:56.729876 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:22:56.730794 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:22:56.731945 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:22:56.732350 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:22:56.735361 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:22:56.747675 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:22:56.747732 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:22:56.748405 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:22:56.748689 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:22:56.748955 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:22:56.749663 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:22:56.759732 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:22:56.763296 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:22:56.768698 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:22:56.768989 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:22:56.769321 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:22:56.769590 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:22:56.769658 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:22:56.770185 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:22:56.770496 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:22:56.770786 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:22:56.771266 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:22:56.771571 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:22:56.771846 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:22:56.772150 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:22:56.772474 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:22:56.772631 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:22:56.773019 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:22:56.773286 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:22:56.773356 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:22:56.773847 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:22:56.774006 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:22:56.774261 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:22:56.774321 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:22:56.774605 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:22:56.774672 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:22:56.775153 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:22:56.775246 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:22:56.775565 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:22:56.775943 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:22:56.775992 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:22:56.776162 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:22:56.776959 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:22:56.777101 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:22:56.777160 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:22:56.777318 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:22:56.777369 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:22:56.777527 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:22:56.777588 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:22:56.777766 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:22:56.777824 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:22:56.785430 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:22:56.785530 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:22:56.785601 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:22:56.787344 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:22:56.787444 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:22:56.787639 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:22:56.788085 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:22:56.788169 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:22:56.791861 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:22:56.791915 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:22:56.793601 ignition[1020]: INFO : Ignition 2.19.0
Feb 13 20:22:56.796555 ignition[1020]: INFO : Stage: umount
Feb 13 20:22:56.796555 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:22:56.796555 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 13 20:22:56.796555 ignition[1020]: INFO : umount: umount passed
Feb 13 20:22:56.796555 ignition[1020]: INFO : Ignition finished successfully
Feb 13 20:22:56.797439 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:22:56.797516 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:22:56.797743 systemd[1]: Stopped target network.target - Network.
Feb 13 20:22:56.797828 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:22:56.797856 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:22:56.797958 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:22:56.797979 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:22:56.798077 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:22:56.798099 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:22:56.798207 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:22:56.798231 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:22:56.798401 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:22:56.798883 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:22:56.804001 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:22:56.804086 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:22:56.806702 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:22:56.807115 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:22:56.807142 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:22:56.808843 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:22:56.808913 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:22:56.809650 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:22:56.809686 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:22:56.813278 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:22:56.813639 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:22:56.813680 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:22:56.813939 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Feb 13 20:22:56.813962 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Feb 13 20:22:56.814086 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:22:56.814108 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:22:56.814255 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:22:56.814277 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:22:56.814438 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:22:56.820849 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:22:56.820923 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:22:56.824522 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:22:56.824745 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:22:56.825373 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:22:56.825407 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:22:56.825549 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:22:56.825566 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:22:56.825674 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:22:56.825697 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:22:56.825856 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:22:56.825877 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:22:56.826014 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:22:56.826036 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:22:56.829341 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:22:56.829452 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:22:56.829487 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:22:56.829614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:22:56.829636 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:22:56.832696 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:22:56.832778 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:22:56.854282 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:22:56.854351 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:22:56.854834 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:22:56.854951 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:22:56.854982 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:22:56.858409 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:22:56.869471 systemd[1]: Switching root.
Feb 13 20:22:56.900881 systemd-journald[215]: Journal stopped
Feb 13 20:22:59.288956 systemd-journald[215]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:22:59.288978 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 20:22:59.288987 kernel: SELinux: policy capability open_perms=1
Feb 13 20:22:59.288992 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 20:22:59.288998 kernel: SELinux: policy capability always_check_network=0
Feb 13 20:22:59.289003 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 20:22:59.289010 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 20:22:59.289016 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 20:22:59.289022 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 20:22:59.289027 kernel: audit: type=1403 audit(1739478177.501:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 20:22:59.289034 systemd[1]: Successfully loaded SELinux policy in 30.618ms.
Feb 13 20:22:59.289041 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.086ms.
Feb 13 20:22:59.289048 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:22:59.289055 systemd[1]: Detected virtualization vmware.
Feb 13 20:22:59.289062 systemd[1]: Detected architecture x86-64.
Feb 13 20:22:59.289069 systemd[1]: Detected first boot.
Feb 13 20:22:59.289075 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:22:59.289083 zram_generator::config[1065]: No configuration found.
Feb 13 20:22:59.289090 systemd[1]: Populated /etc with preset unit settings.
Feb 13 20:22:59.289097 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Feb 13 20:22:59.289104 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Feb 13 20:22:59.289111 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 20:22:59.289117 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 20:22:59.289124 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:22:59.289131 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 20:22:59.289139 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 20:22:59.289145 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 20:22:59.289152 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 20:22:59.289158 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 20:22:59.289165 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 20:22:59.289172 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 20:22:59.289179 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 20:22:59.289186 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:22:59.289194 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:22:59.290386 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 20:22:59.290395 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 20:22:59.290402 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 20:22:59.290409 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:22:59.290416 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 20:22:59.290425 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:22:59.290433 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 20:22:59.290441 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 20:22:59.290448 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:22:59.290455 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 20:22:59.290462 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:22:59.290469 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:22:59.290475 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:22:59.290484 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:22:59.290496 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 20:22:59.290508 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 20:22:59.290516 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:22:59.290523 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:22:59.290532 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:22:59.290539 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 20:22:59.290546 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 20:22:59.290553 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 20:22:59.290560 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 20:22:59.290567 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:22:59.290574 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 20:22:59.290581 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 20:22:59.290589 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 20:22:59.290597 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 20:22:59.290604 systemd[1]: Reached target machines.target - Containers.
Feb 13 20:22:59.290611 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 20:22:59.290618 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Feb 13 20:22:59.290625 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:22:59.290632 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 20:22:59.290639 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:22:59.290647 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:22:59.290655 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:22:59.290662 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 20:22:59.290669 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:22:59.290676 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 20:22:59.290683 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 20:22:59.290690 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 20:22:59.290696 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 20:22:59.290704 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 20:22:59.290712 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:22:59.290719 kernel: fuse: init (API version 7.39)
Feb 13 20:22:59.290725 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:22:59.290732 kernel: loop: module loaded
Feb 13 20:22:59.290738 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 20:22:59.290745 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 20:22:59.290752 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:22:59.290759 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 20:22:59.290766 systemd[1]: Stopped verity-setup.service.
Feb 13 20:22:59.290774 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:22:59.290781 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 20:22:59.290788 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 20:22:59.290795 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 20:22:59.290802 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 20:22:59.290809 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 20:22:59.290816 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 20:22:59.290823 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 20:22:59.290831 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:22:59.290838 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:22:59.290845 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:22:59.290852 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 20:22:59.290859 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 20:22:59.290867 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:22:59.290873 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:22:59.290891 systemd-journald[1155]: Collecting audit messages is disabled.
Feb 13 20:22:59.290908 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 20:22:59.290916 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 20:22:59.290922 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:22:59.290930 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:22:59.290936 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:22:59.290945 systemd-journald[1155]: Journal started
Feb 13 20:22:59.290960 systemd-journald[1155]: Runtime Journal (/run/log/journal/ffba488aa6144a2ea028baf1396461c4) is 4.8M, max 38.6M, 33.8M free.
Feb 13 20:22:59.110161 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 20:22:59.123287 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 20:22:59.293097 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 20:22:59.293115 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:22:59.123502 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 20:22:59.293001 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 20:22:59.293803 jq[1132]: true
Feb 13 20:22:59.295372 jq[1174]: true
Feb 13 20:22:59.306781 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 20:22:59.315337 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 20:22:59.317429 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 20:22:59.317547 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 20:22:59.317567 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:22:59.319214 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 20:22:59.323957 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 20:22:59.327281 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 20:22:59.327441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:22:59.331263 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 20:22:59.334283 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 20:22:59.334419 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:22:59.340073 kernel: ACPI: bus type drm_connector registered
Feb 13 20:22:59.339301 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 20:22:59.339451 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:22:59.348333 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:22:59.350285 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 20:22:59.353332 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 20:22:59.355185 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:22:59.355335 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:22:59.355660 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 20:22:59.356004 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 20:22:59.362432 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 20:22:59.371208 kernel: loop0: detected capacity change from 0 to 142488
Feb 13 20:22:59.391269 systemd-journald[1155]: Time spent on flushing to /var/log/journal/ffba488aa6144a2ea028baf1396461c4 is 27.374ms for 1836 entries.
Feb 13 20:22:59.391269 systemd-journald[1155]: System Journal (/var/log/journal/ffba488aa6144a2ea028baf1396461c4) is 8.0M, max 584.8M, 576.8M free.
Feb 13 20:22:59.498309 systemd-journald[1155]: Received client request to flush runtime journal.
Feb 13 20:22:59.498347 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 20:22:59.403616 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Feb 13 20:22:59.395559 ignition[1175]: Ignition 2.19.0
Feb 13 20:22:59.422406 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 20:22:59.395939 ignition[1175]: deleting config from guestinfo properties
Feb 13 20:22:59.422664 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 20:22:59.399563 ignition[1175]: Successfully deleted config
Feb 13 20:22:59.430176 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 20:22:59.452319 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:22:59.461292 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 20:22:59.466927 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 20:22:59.470789 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:22:59.501495 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 20:22:59.504753 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 20:22:59.505110 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 20:22:59.515116 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 20:22:59.518217 kernel: loop1: detected capacity change from 0 to 205544
Feb 13 20:22:59.524445 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:22:59.553167 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Feb 13 20:22:59.553178 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Feb 13 20:22:59.556462 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:22:59.560240 kernel: loop2: detected capacity change from 0 to 140768
Feb 13 20:22:59.602215 kernel: loop3: detected capacity change from 0 to 2976
Feb 13 20:22:59.654287 kernel: loop4: detected capacity change from 0 to 142488
Feb 13 20:22:59.672283 kernel: loop5: detected capacity change from 0 to 205544
Feb 13 20:22:59.704281 kernel: loop6: detected capacity change from 0 to 140768
Feb 13 20:22:59.749262 kernel: loop7: detected capacity change from 0 to 2976
Feb 13 20:22:59.758726 (sd-merge)[1233]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Feb 13 20:22:59.759408 (sd-merge)[1233]: Merged extensions into '/usr'.
Feb 13 20:22:59.765286 systemd[1]: Reloading requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 20:22:59.765388 systemd[1]: Reloading...
Feb 13 20:22:59.817253 zram_generator::config[1256]: No configuration found.
Feb 13 20:22:59.895022 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Feb 13 20:22:59.910461 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:22:59.948845 systemd[1]: Reloading finished in 182 ms.
Feb 13 20:22:59.973640 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 20:22:59.984407 systemd[1]: Starting ensure-sysext.service...
Feb 13 20:22:59.985412 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:23:00.002841 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 20:23:00.003047 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 20:23:00.003555 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 20:23:00.003718 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Feb 13 20:23:00.003753 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Feb 13 20:23:00.005562 systemd[1]: Reloading requested from client PID 1314 ('systemctl') (unit ensure-sysext.service)...
Feb 13 20:23:00.005574 systemd[1]: Reloading...
Feb 13 20:23:00.024842 systemd-tmpfiles[1315]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:23:00.024850 systemd-tmpfiles[1315]: Skipping /boot
Feb 13 20:23:00.045296 systemd-tmpfiles[1315]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:23:00.045377 systemd-tmpfiles[1315]: Skipping /boot
Feb 13 20:23:00.054334 zram_generator::config[1343]: No configuration found.
Feb 13 20:23:00.134500 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Feb 13 20:23:00.149373 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:23:00.177379 systemd[1]: Reloading finished in 171 ms.
Feb 13 20:23:00.186436 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 20:23:00.191040 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:23:00.196314 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 20:23:00.201315 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 20:23:00.216347 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 20:23:00.219398 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:23:00.225802 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:23:00.229292 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 20:23:00.240442 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 20:23:00.241455 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:23:00.245400 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:23:00.247167 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:23:00.250385 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:23:00.250571 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:23:00.250651 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:23:00.254358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:23:00.254451 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:23:00.254506 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:23:00.256556 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:23:00.257424 systemd-udevd[1412]: Using default interface naming scheme 'v255'.
Feb 13 20:23:00.262357 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:23:00.262552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:23:00.262642 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:23:00.265859 systemd[1]: Finished ensure-sysext.service.
Feb 13 20:23:00.274936 ldconfig[1197]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 20:23:00.277287 augenrules[1429]: No rules
Feb 13 20:23:00.277343 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 20:23:00.277714 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 20:23:00.279396 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 20:23:00.285686 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 20:23:00.285988 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 20:23:00.286258 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:23:00.286346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:23:00.287754 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:23:00.289110 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 20:23:00.290826 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:23:00.290936 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:23:00.291283 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:23:00.291370 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:23:00.292968 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:23:00.293115 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:23:00.293839 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:23:00.300843 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:23:00.301032 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:23:00.309382 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:23:00.318801 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:23:00.367224 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 20:23:00.380718 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:23:00.380946 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:23:00.386051 systemd-resolved[1411]: Positive Trust Anchors: Feb 13 20:23:00.386387 systemd-resolved[1411]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:23:00.386444 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:23:00.396186 systemd-networkd[1451]: lo: Link UP Feb 13 20:23:00.396439 systemd-networkd[1451]: lo: Gained carrier Feb 13 20:23:00.397224 systemd-networkd[1451]: Enumeration completed Feb 13 20:23:00.397289 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:23:00.398684 systemd-networkd[1451]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Feb 13 20:23:00.401500 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Feb 13 20:23:00.401690 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Feb 13 20:23:00.402328 systemd-networkd[1451]: ens192: Link UP Feb 13 20:23:00.402503 systemd-networkd[1451]: ens192: Gained carrier Feb 13 20:23:00.403323 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:23:00.404828 systemd-resolved[1411]: Defaulting to hostname 'linux'. Feb 13 20:23:00.405978 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:23:00.406245 systemd[1]: Reached target network.target - Network. Feb 13 20:23:00.406399 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:23:00.410736 systemd-timesyncd[1427]: Network configuration changed, trying to establish connection. 
Feb 13 20:23:00.442323 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 20:23:00.449293 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1466) Feb 13 20:23:00.453229 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:23:00.502223 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Feb 13 20:23:00.517978 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:23:00.518261 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:23:00.531695 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Feb 13 20:23:00.540578 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 20:23:00.541037 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Feb 13 20:23:00.541169 kernel: Guest personality initialized and is active Feb 13 20:23:00.541349 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:23:00.542477 (udev-worker)[1458]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Feb 13 20:23:00.545376 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Feb 13 20:23:00.545437 kernel: Initialized host personality Feb 13 20:23:00.552434 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:23:00.554305 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:23:00.564425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:23:00.571509 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Feb 13 20:23:00.573802 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:23:00.591482 lvm[1493]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:23:00.612260 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:23:00.612514 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:23:00.619427 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:23:00.623237 lvm[1496]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:23:00.625616 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:23:00.626135 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:23:00.626565 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:23:00.626708 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:23:00.626928 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:23:00.627087 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:23:00.627228 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:23:00.627342 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:23:00.627357 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:23:00.627446 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:23:00.628127 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:23:00.629656 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Feb 13 20:23:00.633491 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:23:00.634061 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:23:00.634272 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:23:00.634365 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:23:00.634478 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:23:00.634499 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:23:00.635491 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:23:00.637308 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:23:00.640312 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:23:00.643136 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:23:00.643723 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:23:00.648741 jq[1503]: false Feb 13 20:23:00.647929 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:23:00.649350 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:23:00.651347 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:23:00.656835 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:23:00.660244 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:23:00.660592 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 13 20:23:00.661088 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:23:00.663362 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:23:00.665302 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:23:00.669312 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Feb 13 20:23:00.670757 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:23:00.673399 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:23:00.673521 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:23:00.679819 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:23:00.680611 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 20:23:00.684764 jq[1512]: true Feb 13 20:23:00.686745 extend-filesystems[1504]: Found loop4 Feb 13 20:23:00.687035 extend-filesystems[1504]: Found loop5 Feb 13 20:23:00.687035 extend-filesystems[1504]: Found loop6 Feb 13 20:23:00.687035 extend-filesystems[1504]: Found loop7 Feb 13 20:23:00.687035 extend-filesystems[1504]: Found sda Feb 13 20:23:00.687035 extend-filesystems[1504]: Found sda1 Feb 13 20:23:00.687035 extend-filesystems[1504]: Found sda2 Feb 13 20:23:00.687035 extend-filesystems[1504]: Found sda3 Feb 13 20:23:00.687035 extend-filesystems[1504]: Found usr Feb 13 20:23:00.687035 extend-filesystems[1504]: Found sda4 Feb 13 20:23:00.687035 extend-filesystems[1504]: Found sda6 Feb 13 20:23:00.687035 extend-filesystems[1504]: Found sda7 Feb 13 20:23:00.687035 extend-filesystems[1504]: Found sda9 Feb 13 20:23:00.687035 extend-filesystems[1504]: Checking size of /dev/sda9 Feb 13 20:23:00.696318 jq[1524]: true Feb 13 20:23:00.698181 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Feb 13 20:23:00.701305 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Feb 13 20:23:00.717539 extend-filesystems[1504]: Old size kept for /dev/sda9 Feb 13 20:23:00.717539 extend-filesystems[1504]: Found sr0 Feb 13 20:23:00.724493 (ntainerd)[1529]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:23:00.724570 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:23:00.726789 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:23:00.730754 dbus-daemon[1502]: [system] SELinux support is enabled Feb 13 20:23:00.739831 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 20:23:00.740140 bash[1557]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:23:00.740201 tar[1517]: linux-amd64/helm Feb 13 20:23:00.740318 update_engine[1511]: I20250213 20:23:00.732998 1511 main.cc:92] Flatcar Update Engine starting Feb 13 20:23:00.740318 update_engine[1511]: I20250213 20:23:00.736546 1511 update_check_scheduler.cc:74] Next update check in 4m57s Feb 13 20:23:00.741909 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:23:00.743347 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:23:00.743646 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:23:00.745792 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:23:00.745849 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:23:00.745864 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:23:00.746807 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:23:00.746820 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:23:00.747428 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:23:00.756474 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:23:00.773274 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1461) Feb 13 20:23:00.781435 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. 
Feb 13 20:23:00.822109 unknown[1533]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Feb 13 20:23:00.822887 systemd-logind[1510]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:23:00.823032 systemd-logind[1510]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:23:00.823160 systemd-logind[1510]: New seat seat0. Feb 13 20:23:00.823632 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:23:00.827864 unknown[1533]: Core dump limit set to -1 Feb 13 20:23:00.836550 kernel: NET: Registered PF_VSOCK protocol family Feb 13 20:23:00.950586 locksmithd[1563]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:23:01.059262 containerd[1529]: time="2025-02-13T20:23:01.058496311Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:23:01.084025 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:23:01.092296 containerd[1529]: time="2025-02-13T20:23:01.092254013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:23:01.093985 containerd[1529]: time="2025-02-13T20:23:01.093953288Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:23:01.093985 containerd[1529]: time="2025-02-13T20:23:01.093978913Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:23:01.093985 containerd[1529]: time="2025-02-13T20:23:01.093990897Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 20:23:01.094100 containerd[1529]: time="2025-02-13T20:23:01.094088253Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:23:01.094119 containerd[1529]: time="2025-02-13T20:23:01.094103680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:23:01.094161 containerd[1529]: time="2025-02-13T20:23:01.094145102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:23:01.094161 containerd[1529]: time="2025-02-13T20:23:01.094155471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:23:01.094298 containerd[1529]: time="2025-02-13T20:23:01.094283049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:23:01.094298 containerd[1529]: time="2025-02-13T20:23:01.094294617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:23:01.094339 containerd[1529]: time="2025-02-13T20:23:01.094302581Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:23:01.094339 containerd[1529]: time="2025-02-13T20:23:01.094309159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:23:01.094370 containerd[1529]: time="2025-02-13T20:23:01.094351832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:23:01.094490 containerd[1529]: time="2025-02-13T20:23:01.094476596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:23:01.094545 containerd[1529]: time="2025-02-13T20:23:01.094532821Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:23:01.094567 containerd[1529]: time="2025-02-13T20:23:01.094544109Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:23:01.094602 containerd[1529]: time="2025-02-13T20:23:01.094590101Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:23:01.094638 containerd[1529]: time="2025-02-13T20:23:01.094626000Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:23:01.096517 containerd[1529]: time="2025-02-13T20:23:01.096486689Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:23:01.096584 containerd[1529]: time="2025-02-13T20:23:01.096535026Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:23:01.096584 containerd[1529]: time="2025-02-13T20:23:01.096547176Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:23:01.096584 containerd[1529]: time="2025-02-13T20:23:01.096556220Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:23:01.096584 containerd[1529]: time="2025-02-13T20:23:01.096566637Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 13 20:23:01.096663 containerd[1529]: time="2025-02-13T20:23:01.096651365Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:23:01.096809 containerd[1529]: time="2025-02-13T20:23:01.096790908Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:23:01.096870 containerd[1529]: time="2025-02-13T20:23:01.096858128Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:23:01.096888 containerd[1529]: time="2025-02-13T20:23:01.096872213Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:23:01.096888 containerd[1529]: time="2025-02-13T20:23:01.096880020Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:23:01.096921 containerd[1529]: time="2025-02-13T20:23:01.096888072Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:23:01.096921 containerd[1529]: time="2025-02-13T20:23:01.096895853Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:23:01.096921 containerd[1529]: time="2025-02-13T20:23:01.096903446Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:23:01.096921 containerd[1529]: time="2025-02-13T20:23:01.096914281Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:23:01.096976 containerd[1529]: time="2025-02-13T20:23:01.096924163Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 20:23:01.096976 containerd[1529]: time="2025-02-13T20:23:01.096931559Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:23:01.096976 containerd[1529]: time="2025-02-13T20:23:01.096939003Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:23:01.096976 containerd[1529]: time="2025-02-13T20:23:01.096945988Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:23:01.096976 containerd[1529]: time="2025-02-13T20:23:01.096957714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:23:01.096976 containerd[1529]: time="2025-02-13T20:23:01.096966451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:23:01.096976 containerd[1529]: time="2025-02-13T20:23:01.096973811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:23:01.097072 containerd[1529]: time="2025-02-13T20:23:01.096984821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:23:01.097072 containerd[1529]: time="2025-02-13T20:23:01.096992836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:23:01.097072 containerd[1529]: time="2025-02-13T20:23:01.097000812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:23:01.097072 containerd[1529]: time="2025-02-13T20:23:01.097007789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:23:01.097072 containerd[1529]: time="2025-02-13T20:23:01.097020089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 13 20:23:01.097072 containerd[1529]: time="2025-02-13T20:23:01.097027868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:23:01.097072 containerd[1529]: time="2025-02-13T20:23:01.097037061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:23:01.097072 containerd[1529]: time="2025-02-13T20:23:01.097044090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:23:01.097072 containerd[1529]: time="2025-02-13T20:23:01.097050850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:23:01.097072 containerd[1529]: time="2025-02-13T20:23:01.097058109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:23:01.097072 containerd[1529]: time="2025-02-13T20:23:01.097068034Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:23:01.097241 containerd[1529]: time="2025-02-13T20:23:01.097081643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:23:01.097241 containerd[1529]: time="2025-02-13T20:23:01.097088851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:23:01.097241 containerd[1529]: time="2025-02-13T20:23:01.097097837Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:23:01.097241 containerd[1529]: time="2025-02-13T20:23:01.097123400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:23:01.097241 containerd[1529]: time="2025-02-13T20:23:01.097135382Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:23:01.097241 containerd[1529]: time="2025-02-13T20:23:01.097141978Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:23:01.097241 containerd[1529]: time="2025-02-13T20:23:01.097148515Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:23:01.097241 containerd[1529]: time="2025-02-13T20:23:01.097154111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:23:01.097241 containerd[1529]: time="2025-02-13T20:23:01.097161323Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:23:01.097241 containerd[1529]: time="2025-02-13T20:23:01.097167120Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:23:01.097241 containerd[1529]: time="2025-02-13T20:23:01.097173181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 20:23:01.099248 containerd[1529]: time="2025-02-13T20:23:01.098441226Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:23:01.099248 containerd[1529]: time="2025-02-13T20:23:01.098494153Z" level=info msg="Connect containerd service" Feb 13 20:23:01.099248 containerd[1529]: time="2025-02-13T20:23:01.098521123Z" level=info msg="using legacy CRI server" Feb 13 20:23:01.099248 containerd[1529]: time="2025-02-13T20:23:01.098526048Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:23:01.099248 containerd[1529]: time="2025-02-13T20:23:01.098584988Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:23:01.101605 containerd[1529]: time="2025-02-13T20:23:01.100458777Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:23:01.101605 containerd[1529]: time="2025-02-13T20:23:01.100565630Z" level=info msg="Start subscribing containerd event" Feb 13 20:23:01.101605 containerd[1529]: time="2025-02-13T20:23:01.100606077Z" level=info msg="Start recovering state" Feb 13 20:23:01.101605 containerd[1529]: time="2025-02-13T20:23:01.100649298Z" level=info msg="Start event monitor" Feb 13 20:23:01.101605 containerd[1529]: time="2025-02-13T20:23:01.100662164Z" level=info msg="Start 
snapshots syncer" Feb 13 20:23:01.101605 containerd[1529]: time="2025-02-13T20:23:01.100667717Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:23:01.101605 containerd[1529]: time="2025-02-13T20:23:01.100673142Z" level=info msg="Start streaming server" Feb 13 20:23:01.101972 containerd[1529]: time="2025-02-13T20:23:01.101765805Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:23:01.101972 containerd[1529]: time="2025-02-13T20:23:01.101799671Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:23:01.109684 containerd[1529]: time="2025-02-13T20:23:01.109285438Z" level=info msg="containerd successfully booted in 0.052263s" Feb 13 20:23:01.109383 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:23:01.118121 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:23:01.124465 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:23:01.129384 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:23:01.129524 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:23:01.134522 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:23:01.141676 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:23:01.147581 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:23:01.150437 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:23:01.151849 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:23:01.258723 tar[1517]: linux-amd64/LICENSE Feb 13 20:23:01.258819 tar[1517]: linux-amd64/README.md Feb 13 20:23:01.271113 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:23:02.227418 systemd-networkd[1451]: ens192: Gained IPv6LL Feb 13 20:23:02.227695 systemd-timesyncd[1427]: Network configuration changed, trying to establish connection. 
Feb 13 20:23:02.228705 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 20:23:02.229585 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 20:23:02.233608 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
Feb 13 20:23:02.235899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:23:02.238710 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 20:23:02.252396 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 20:23:02.262390 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 20:23:02.262493 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Feb 13 20:23:02.263042 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 20:23:03.095913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:23:03.096380 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 20:23:03.096919 systemd[1]: Startup finished in 988ms (kernel) + 4.875s (initrd) + 5.624s (userspace) = 11.488s.
Feb 13 20:23:03.102764 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:23:03.138424 login[1650]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 13 20:23:03.138840 login[1651]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 13 20:23:03.146517 systemd-logind[1510]: New session 2 of user core.
Feb 13 20:23:03.147006 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 20:23:03.152384 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 20:23:03.154905 systemd-logind[1510]: New session 1 of user core.
Feb 13 20:23:03.160891 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 20:23:03.166363 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 20:23:03.168106 (systemd)[1692]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 20:23:03.231256 systemd[1692]: Queued start job for default target default.target.
Feb 13 20:23:03.243096 systemd[1692]: Created slice app.slice - User Application Slice.
Feb 13 20:23:03.243116 systemd[1692]: Reached target paths.target - Paths.
Feb 13 20:23:03.243125 systemd[1692]: Reached target timers.target - Timers.
Feb 13 20:23:03.246277 systemd[1692]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 20:23:03.251001 systemd[1692]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 20:23:03.251458 systemd[1692]: Reached target sockets.target - Sockets.
Feb 13 20:23:03.251471 systemd[1692]: Reached target basic.target - Basic System.
Feb 13 20:23:03.251522 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 20:23:03.252382 systemd[1692]: Reached target default.target - Main User Target.
Feb 13 20:23:03.252427 systemd[1692]: Startup finished in 80ms.
Feb 13 20:23:03.256296 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 20:23:03.257594 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 20:23:03.830629 kubelet[1685]: E0213 20:23:03.830562 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:23:03.831864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:23:03.831950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:23:14.057013 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:23:14.062331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:23:14.122122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:23:14.124507 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:23:14.188176 kubelet[1736]: E0213 20:23:14.188144 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:23:14.190456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:23:14.190537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:23:24.307023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 20:23:24.313400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:23:24.634822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:23:24.638132 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:23:24.669133 kubelet[1751]: E0213 20:23:24.669100 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:23:24.670227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:23:24.670304 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:24:47.148292 systemd-resolved[1411]: Clock change detected. Flushing caches.
Feb 13 20:24:47.148318 systemd-timesyncd[1427]: Contacted time server 12.167.151.1:123 (2.flatcar.pool.ntp.org).
Feb 13 20:24:47.148359 systemd-timesyncd[1427]: Initial clock synchronization to Thu 2025-02-13 20:24:47.148182 UTC.
Feb 13 20:24:49.344421 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 13 20:24:49.349717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:24:49.689470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
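The kubelet restart cadence can be checked directly from the "Scheduled restart job" timestamps in the log: the first gap is a plain ~10 s backoff (consistent with a RestartSec of roughly 10 s in the unit file, which is an inference, not shown in this log), while the apparent ~85 s gap before restart 3 includes the NTP clock step recorded by systemd-timesyncd just above. A minimal sketch, with the timestamps copied from this log (the year is not logged):

```python
from datetime import datetime

# "Scheduled restart job" timestamps copied from the log (year omitted, as in the log format).
stamps = [
    "Feb 13 20:23:14.057013",  # restart counter 1
    "Feb 13 20:23:24.307023",  # restart counter 2
    "Feb 13 20:24:49.344421",  # restart counter 3 (after the clock step)
]
parsed = [datetime.strptime(s, "%b %d %H:%M:%S.%f") for s in stamps]
gaps = [round((b - a).total_seconds(), 6) for a, b in zip(parsed, parsed[1:])]
print(gaps)
```

The second gap is wall-clock time, so it absorbs the jump applied when systemd-timesyncd stepped the clock; the restart backoff itself did not change.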
Feb 13 20:24:49.691898 (kubelet)[1765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:24:49.730383 kubelet[1765]: E0213 20:24:49.730344 1765 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:24:49.731638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:24:49.731743 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:24:55.472101 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 20:24:55.472988 systemd[1]: Started sshd@0-139.178.70.108:22-139.178.89.65:47892.service - OpenSSH per-connection server daemon (139.178.89.65:47892).
Feb 13 20:24:55.506390 sshd[1773]: Accepted publickey for core from 139.178.89.65 port 47892 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk
Feb 13 20:24:55.507077 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:55.509344 systemd-logind[1510]: New session 3 of user core.
Feb 13 20:24:55.519697 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 20:24:55.577899 systemd[1]: Started sshd@1-139.178.70.108:22-139.178.89.65:47900.service - OpenSSH per-connection server daemon (139.178.89.65:47900).
Feb 13 20:24:55.602571 sshd[1778]: Accepted publickey for core from 139.178.89.65 port 47900 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk
Feb 13 20:24:55.603207 sshd[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:55.605379 systemd-logind[1510]: New session 4 of user core.
Feb 13 20:24:55.609692 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 20:24:55.657044 sshd[1778]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:55.661987 systemd[1]: sshd@1-139.178.70.108:22-139.178.89.65:47900.service: Deactivated successfully.
Feb 13 20:24:55.662818 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 20:24:55.663573 systemd-logind[1510]: Session 4 logged out. Waiting for processes to exit.
Feb 13 20:24:55.664511 systemd[1]: Started sshd@2-139.178.70.108:22-139.178.89.65:47902.service - OpenSSH per-connection server daemon (139.178.89.65:47902).
Feb 13 20:24:55.666041 systemd-logind[1510]: Removed session 4.
Feb 13 20:24:55.690948 sshd[1785]: Accepted publickey for core from 139.178.89.65 port 47902 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk
Feb 13 20:24:55.691592 sshd[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:55.693646 systemd-logind[1510]: New session 5 of user core.
Feb 13 20:24:55.700751 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 20:24:55.746784 sshd[1785]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:55.755372 systemd[1]: sshd@2-139.178.70.108:22-139.178.89.65:47902.service: Deactivated successfully.
Feb 13 20:24:55.756550 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 20:24:55.757526 systemd-logind[1510]: Session 5 logged out. Waiting for processes to exit.
Feb 13 20:24:55.762840 systemd[1]: Started sshd@3-139.178.70.108:22-139.178.89.65:47908.service - OpenSSH per-connection server daemon (139.178.89.65:47908).
Feb 13 20:24:55.763230 systemd-logind[1510]: Removed session 5.
Feb 13 20:24:55.790416 sshd[1792]: Accepted publickey for core from 139.178.89.65 port 47908 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk
Feb 13 20:24:55.791301 sshd[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:55.794214 systemd-logind[1510]: New session 6 of user core.
Feb 13 20:24:55.801719 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 20:24:55.850533 sshd[1792]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:55.860105 systemd[1]: sshd@3-139.178.70.108:22-139.178.89.65:47908.service: Deactivated successfully.
Feb 13 20:24:55.861044 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 20:24:55.861976 systemd-logind[1510]: Session 6 logged out. Waiting for processes to exit.
Feb 13 20:24:55.866811 systemd[1]: Started sshd@4-139.178.70.108:22-139.178.89.65:47922.service - OpenSSH per-connection server daemon (139.178.89.65:47922).
Feb 13 20:24:55.869908 systemd-logind[1510]: Removed session 6.
Feb 13 20:24:55.892058 sshd[1799]: Accepted publickey for core from 139.178.89.65 port 47922 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk
Feb 13 20:24:55.892711 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:55.895938 systemd-logind[1510]: New session 7 of user core.
Feb 13 20:24:55.903753 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 20:24:55.961378 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 20:24:55.961586 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:24:55.974263 sudo[1802]: pam_unix(sudo:session): session closed for user root
Feb 13 20:24:55.975310 sshd[1799]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:55.990225 systemd[1]: sshd@4-139.178.70.108:22-139.178.89.65:47922.service: Deactivated successfully.
Feb 13 20:24:55.991417 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 20:24:55.991959 systemd-logind[1510]: Session 7 logged out. Waiting for processes to exit.
Feb 13 20:24:55.993292 systemd[1]: Started sshd@5-139.178.70.108:22-139.178.89.65:47934.service - OpenSSH per-connection server daemon (139.178.89.65:47934).
Feb 13 20:24:55.994845 systemd-logind[1510]: Removed session 7.
Feb 13 20:24:56.021073 sshd[1807]: Accepted publickey for core from 139.178.89.65 port 47934 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk
Feb 13 20:24:56.021973 sshd[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:56.024119 systemd-logind[1510]: New session 8 of user core.
Feb 13 20:24:56.033740 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 20:24:56.082327 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 20:24:56.082708 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:24:56.084870 sudo[1811]: pam_unix(sudo:session): session closed for user root
Feb 13 20:24:56.088057 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 13 20:24:56.088218 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:24:56.098823 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Feb 13 20:24:56.099635 auditctl[1814]: No rules
Feb 13 20:24:56.100124 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 20:24:56.100416 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Feb 13 20:24:56.101798 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 20:24:56.118309 augenrules[1832]: No rules
Feb 13 20:24:56.118982 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 20:24:56.119725 sudo[1810]: pam_unix(sudo:session): session closed for user root
Feb 13 20:24:56.121233 sshd[1807]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:56.123892 systemd[1]: sshd@5-139.178.70.108:22-139.178.89.65:47934.service: Deactivated successfully.
Feb 13 20:24:56.124728 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 20:24:56.125184 systemd-logind[1510]: Session 8 logged out. Waiting for processes to exit.
Feb 13 20:24:56.126284 systemd[1]: Started sshd@6-139.178.70.108:22-139.178.89.65:47940.service - OpenSSH per-connection server daemon (139.178.89.65:47940).
Feb 13 20:24:56.127818 systemd-logind[1510]: Removed session 8.
Feb 13 20:24:56.153840 sshd[1840]: Accepted publickey for core from 139.178.89.65 port 47940 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk
Feb 13 20:24:56.154513 sshd[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:56.157826 systemd-logind[1510]: New session 9 of user core.
Feb 13 20:24:56.163698 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 20:24:56.212114 sudo[1843]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 20:24:56.212323 sudo[1843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:24:56.483755 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 20:24:56.483847 (dockerd)[1859]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 20:24:56.742610 dockerd[1859]: time="2025-02-13T20:24:56.742494072Z" level=info msg="Starting up"
Feb 13 20:24:56.850989 dockerd[1859]: time="2025-02-13T20:24:56.850889857Z" level=info msg="Loading containers: start."
Feb 13 20:24:56.941676 kernel: Initializing XFRM netlink socket
Feb 13 20:24:57.019740 systemd-networkd[1451]: docker0: Link UP
Feb 13 20:24:57.031515 dockerd[1859]: time="2025-02-13T20:24:57.031486031Z" level=info msg="Loading containers: done."
Feb 13 20:24:57.042381 dockerd[1859]: time="2025-02-13T20:24:57.042022718Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 20:24:57.042381 dockerd[1859]: time="2025-02-13T20:24:57.042099662Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Feb 13 20:24:57.042381 dockerd[1859]: time="2025-02-13T20:24:57.042176929Z" level=info msg="Daemon has completed initialization"
Feb 13 20:24:57.042499 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3425416510-merged.mount: Deactivated successfully.
Feb 13 20:24:57.055511 dockerd[1859]: time="2025-02-13T20:24:57.055384645Z" level=info msg="API listen on /run/docker.sock"
Feb 13 20:24:57.055823 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 20:24:57.824398 containerd[1529]: time="2025-02-13T20:24:57.824249375Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\""
Feb 13 20:24:58.428422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount447386368.mount: Deactivated successfully.
Feb 13 20:24:59.417451 containerd[1529]: time="2025-02-13T20:24:59.417411141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:24:59.418243 containerd[1529]: time="2025-02-13T20:24:59.418006804Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27976588"
Feb 13 20:24:59.418243 containerd[1529]: time="2025-02-13T20:24:59.418113463Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:24:59.419662 containerd[1529]: time="2025-02-13T20:24:59.419632687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:24:59.420311 containerd[1529]: time="2025-02-13T20:24:59.420223078Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 1.59595126s"
Feb 13 20:24:59.420311 containerd[1529]: time="2025-02-13T20:24:59.420242138Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\""
Feb 13 20:24:59.421823 containerd[1529]: time="2025-02-13T20:24:59.421787653Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\""
Feb 13 20:24:59.844306 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
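The containerd "Pulled image … in …s" entries carry enough data to estimate pull throughput. A small sketch using the kube-apiserver numbers from the log (the compressed size in bytes and the wall time are copied verbatim; the MB/s figure is derived, not logged):

```python
# Size ("size" field, bytes) and duration from the kube-apiserver "Pulled image" entry.
size_bytes = 27973388
seconds = 1.59595126

# Decimal megabytes per second of wall time for this pull.
rate_mb_s = size_bytes / seconds / 1e6
print(f"{rate_mb_s:.1f} MB/s")
```

Note the duration is end-to-end (registry round-trips, download, and unpacking into the overlayfs snapshotter), so this understates raw network bandwidth.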
Feb 13 20:24:59.851752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:24:59.909725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:24:59.911875 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:24:59.932183 kubelet[2059]: E0213 20:24:59.932117 2059 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:24:59.933557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:24:59.933651 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:25:00.148718 update_engine[1511]: I20250213 20:25:00.148636 1511 update_attempter.cc:509] Updating boot flags...
Feb 13 20:25:00.207631 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2075)
Feb 13 20:25:00.325638 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2078)
Feb 13 20:25:01.316978 containerd[1529]: time="2025-02-13T20:25:01.316944175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:01.324914 containerd[1529]: time="2025-02-13T20:25:01.324869480Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24708193"
Feb 13 20:25:01.326723 containerd[1529]: time="2025-02-13T20:25:01.326694795Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:01.333719 containerd[1529]: time="2025-02-13T20:25:01.333661600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:01.334333 containerd[1529]: time="2025-02-13T20:25:01.334156260Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 1.912351703s"
Feb 13 20:25:01.334333 containerd[1529]: time="2025-02-13T20:25:01.334177175Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\""
Feb 13 20:25:01.334507 containerd[1529]: time="2025-02-13T20:25:01.334402864Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\""
Feb 13 20:25:03.412647 containerd[1529]: time="2025-02-13T20:25:03.412608857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:03.413478 containerd[1529]: time="2025-02-13T20:25:03.413454138Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18652425"
Feb 13 20:25:03.413894 containerd[1529]: time="2025-02-13T20:25:03.413879070Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:03.415278 containerd[1529]: time="2025-02-13T20:25:03.415255718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:03.415981 containerd[1529]: time="2025-02-13T20:25:03.415880354Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 2.081465033s"
Feb 13 20:25:03.415981 containerd[1529]: time="2025-02-13T20:25:03.415905874Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\""
Feb 13 20:25:03.416218 containerd[1529]: time="2025-02-13T20:25:03.416206437Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 20:25:04.281330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount936451458.mount: Deactivated successfully.
Feb 13 20:25:04.622952 containerd[1529]: time="2025-02-13T20:25:04.622920942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:04.628313 containerd[1529]: time="2025-02-13T20:25:04.628284238Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108"
Feb 13 20:25:04.633660 containerd[1529]: time="2025-02-13T20:25:04.633318870Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:04.647836 containerd[1529]: time="2025-02-13T20:25:04.647802603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:04.648283 containerd[1529]: time="2025-02-13T20:25:04.648055821Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 1.231834018s"
Feb 13 20:25:04.648283 containerd[1529]: time="2025-02-13T20:25:04.648073694Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\""
Feb 13 20:25:04.648430 containerd[1529]: time="2025-02-13T20:25:04.648418946Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 20:25:05.686834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3312183190.mount: Deactivated successfully.
Feb 13 20:25:07.229553 containerd[1529]: time="2025-02-13T20:25:07.229491514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:07.230284 containerd[1529]: time="2025-02-13T20:25:07.230173692Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Feb 13 20:25:07.230908 containerd[1529]: time="2025-02-13T20:25:07.230666336Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:07.233636 containerd[1529]: time="2025-02-13T20:25:07.232998311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:07.236347 containerd[1529]: time="2025-02-13T20:25:07.236310078Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.587838834s"
Feb 13 20:25:07.236426 containerd[1529]: time="2025-02-13T20:25:07.236354612Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Feb 13 20:25:07.237005 containerd[1529]: time="2025-02-13T20:25:07.236989466Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 20:25:07.788876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1125196398.mount: Deactivated successfully.
Feb 13 20:25:07.790873 containerd[1529]: time="2025-02-13T20:25:07.790847020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:07.791558 containerd[1529]: time="2025-02-13T20:25:07.791525173Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Feb 13 20:25:07.791845 containerd[1529]: time="2025-02-13T20:25:07.791825631Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:07.793448 containerd[1529]: time="2025-02-13T20:25:07.793428719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:07.793919 containerd[1529]: time="2025-02-13T20:25:07.793901075Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 556.82392ms"
Feb 13 20:25:07.793961 containerd[1529]: time="2025-02-13T20:25:07.793921530Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Feb 13 20:25:07.794651 containerd[1529]: time="2025-02-13T20:25:07.794429359Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Feb 13 20:25:08.685448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount647406417.mount: Deactivated successfully.
Feb 13 20:25:10.094246 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Feb 13 20:25:10.105720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:25:11.866847 containerd[1529]: time="2025-02-13T20:25:11.866806158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:11.882931 containerd[1529]: time="2025-02-13T20:25:11.882866157Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973"
Feb 13 20:25:11.898931 containerd[1529]: time="2025-02-13T20:25:11.897948101Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:11.912293 containerd[1529]: time="2025-02-13T20:25:11.912264744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:25:11.913270 containerd[1529]: time="2025-02-13T20:25:11.912907044Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.118456043s"
Feb 13 20:25:11.913346 containerd[1529]: time="2025-02-13T20:25:11.913335467Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Feb 13 20:25:12.049096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:25:12.052054 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:25:12.188855 kubelet[2217]: E0213 20:25:12.188749 2217 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:25:12.189769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:25:12.189851 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:25:14.166423 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:25:14.170778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:25:14.189406 systemd[1]: Reloading requested from client PID 2243 ('systemctl') (unit session-9.scope)... Feb 13 20:25:14.189511 systemd[1]: Reloading... Feb 13 20:25:14.259688 zram_generator::config[2280]: No configuration found. Feb 13 20:25:14.319089 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Feb 13 20:25:14.334249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:25:14.377183 systemd[1]: Reloading finished in 187 ms. Feb 13 20:25:14.405244 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:25:14.405287 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:25:14.405434 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:25:14.406864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:25:14.873655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:25:14.876529 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:25:14.906976 kubelet[2351]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:25:14.906976 kubelet[2351]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:25:14.906976 kubelet[2351]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 20:25:14.906976 kubelet[2351]: I0213 20:25:14.906423 2351 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:25:15.478008 kubelet[2351]: I0213 20:25:15.477977 2351 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:25:15.478008 kubelet[2351]: I0213 20:25:15.478009 2351 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:25:15.478194 kubelet[2351]: I0213 20:25:15.478183 2351 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:25:15.788230 kubelet[2351]: I0213 20:25:15.788158 2351 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:25:15.796682 kubelet[2351]: E0213 20:25:15.796658 2351 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:15.875307 kubelet[2351]: E0213 20:25:15.875273 2351 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:25:15.875307 kubelet[2351]: I0213 20:25:15.875303 2351 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:25:15.896167 kubelet[2351]: I0213 20:25:15.896142 2351 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:25:15.904065 kubelet[2351]: I0213 20:25:15.904042 2351 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:25:15.904193 kubelet[2351]: I0213 20:25:15.904167 2351 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:25:15.904346 kubelet[2351]: I0213 20:25:15.904196 2351 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Feb 13 20:25:15.904433 kubelet[2351]: I0213 20:25:15.904354 2351 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:25:15.904433 kubelet[2351]: I0213 20:25:15.904362 2351 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:25:15.904477 kubelet[2351]: I0213 20:25:15.904437 2351 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:25:15.931937 kubelet[2351]: I0213 20:25:15.931828 2351 kubelet.go:408] "Attempting to sync node with API server" Feb 13 20:25:15.931937 kubelet[2351]: I0213 20:25:15.931851 2351 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:25:15.939510 kubelet[2351]: I0213 20:25:15.939418 2351 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:25:15.939510 kubelet[2351]: I0213 20:25:15.939439 2351 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:25:15.961004 kubelet[2351]: W0213 20:25:15.960420 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Feb 13 20:25:15.961004 kubelet[2351]: E0213 20:25:15.960458 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:15.980145 kubelet[2351]: W0213 20:25:15.980035 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Feb 13 
20:25:15.980145 kubelet[2351]: E0213 20:25:15.980076 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:15.980570 kubelet[2351]: I0213 20:25:15.980466 2351 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:25:16.004230 kubelet[2351]: I0213 20:25:16.004105 2351 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:25:16.019177 kubelet[2351]: W0213 20:25:16.018995 2351 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:25:16.019597 kubelet[2351]: I0213 20:25:16.019406 2351 server.go:1269] "Started kubelet" Feb 13 20:25:16.019597 kubelet[2351]: I0213 20:25:16.019523 2351 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:25:16.035643 kubelet[2351]: I0213 20:25:16.035007 2351 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:25:16.037365 kubelet[2351]: I0213 20:25:16.037137 2351 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:25:16.037365 kubelet[2351]: I0213 20:25:16.037302 2351 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:25:16.040922 kubelet[2351]: I0213 20:25:16.040881 2351 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:25:16.047358 kubelet[2351]: I0213 20:25:16.047335 2351 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 
20:25:16.048929 kubelet[2351]: I0213 20:25:16.048916 2351 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:25:16.049079 kubelet[2351]: E0213 20:25:16.049065 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:25:16.057646 kubelet[2351]: I0213 20:25:16.056561 2351 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:25:16.057646 kubelet[2351]: I0213 20:25:16.056609 2351 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:25:16.057646 kubelet[2351]: E0213 20:25:16.044817 2351 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.108:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823de4ff0d495eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:25:16.019389931 +0000 UTC m=+1.140329338,LastTimestamp:2025-02-13 20:25:16.019389931 +0000 UTC m=+1.140329338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:25:16.057646 kubelet[2351]: E0213 20:25:16.057230 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="200ms" Feb 13 20:25:16.057646 kubelet[2351]: W0213 20:25:16.057598 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Feb 13 20:25:16.057646 kubelet[2351]: E0213 20:25:16.057625 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:16.059638 kubelet[2351]: I0213 20:25:16.059373 2351 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:25:16.059638 kubelet[2351]: I0213 20:25:16.059416 2351 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:25:16.068661 kubelet[2351]: I0213 20:25:16.068649 2351 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:25:16.070284 kubelet[2351]: E0213 20:25:16.070268 2351 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:25:16.093579 kubelet[2351]: I0213 20:25:16.093548 2351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:25:16.094347 kubelet[2351]: I0213 20:25:16.094336 2351 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:25:16.094372 kubelet[2351]: I0213 20:25:16.094356 2351 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:25:16.094389 kubelet[2351]: I0213 20:25:16.094378 2351 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:25:16.094425 kubelet[2351]: E0213 20:25:16.094411 2351 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:25:16.101404 kubelet[2351]: W0213 20:25:16.100700 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Feb 13 20:25:16.101404 kubelet[2351]: E0213 20:25:16.100738 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:16.115508 kubelet[2351]: I0213 20:25:16.115489 2351 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:25:16.115508 kubelet[2351]: I0213 20:25:16.115501 2351 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:25:16.115508 kubelet[2351]: I0213 20:25:16.115512 2351 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:25:16.128318 kubelet[2351]: I0213 20:25:16.128293 2351 policy_none.go:49] "None policy: Start" Feb 13 20:25:16.128983 kubelet[2351]: I0213 20:25:16.128776 2351 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:25:16.128983 kubelet[2351]: I0213 20:25:16.128791 2351 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:25:16.149803 kubelet[2351]: E0213 20:25:16.149782 2351 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:25:16.159187 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:25:16.168015 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:25:16.170091 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:25:16.186209 kubelet[2351]: I0213 20:25:16.186195 2351 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:25:16.186441 kubelet[2351]: I0213 20:25:16.186303 2351 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:25:16.186441 kubelet[2351]: I0213 20:25:16.186311 2351 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:25:16.186441 kubelet[2351]: I0213 20:25:16.186440 2351 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:25:16.187962 kubelet[2351]: E0213 20:25:16.187944 2351 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 20:25:16.214858 systemd[1]: Created slice kubepods-burstable-pod77a1a924982be3ed34b867b542c7ff75.slice - libcontainer container kubepods-burstable-pod77a1a924982be3ed34b867b542c7ff75.slice. Feb 13 20:25:16.228885 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice. Feb 13 20:25:16.242026 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice. 
Feb 13 20:25:16.258384 kubelet[2351]: E0213 20:25:16.258357 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="400ms" Feb 13 20:25:16.287905 kubelet[2351]: I0213 20:25:16.287843 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:25:16.288070 kubelet[2351]: E0213 20:25:16.288053 2351 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Feb 13 20:25:16.293357 kubelet[2351]: E0213 20:25:16.293279 2351 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.108:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823de4ff0d495eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:25:16.019389931 +0000 UTC m=+1.140329338,LastTimestamp:2025-02-13 20:25:16.019389931 +0000 UTC m=+1.140329338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:25:16.357768 kubelet[2351]: I0213 20:25:16.357710 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77a1a924982be3ed34b867b542c7ff75-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"77a1a924982be3ed34b867b542c7ff75\") " 
pod="kube-system/kube-apiserver-localhost" Feb 13 20:25:16.357768 kubelet[2351]: I0213 20:25:16.357733 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:25:16.357768 kubelet[2351]: I0213 20:25:16.357743 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:25:16.357768 kubelet[2351]: I0213 20:25:16.357752 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77a1a924982be3ed34b867b542c7ff75-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"77a1a924982be3ed34b867b542c7ff75\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:25:16.357768 kubelet[2351]: I0213 20:25:16.357762 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77a1a924982be3ed34b867b542c7ff75-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"77a1a924982be3ed34b867b542c7ff75\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:25:16.357899 kubelet[2351]: I0213 20:25:16.357772 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 20:25:16.357899 kubelet[2351]: I0213 20:25:16.357781 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:25:16.357899 kubelet[2351]: I0213 20:25:16.357790 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:25:16.357899 kubelet[2351]: I0213 20:25:16.357800 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:25:16.489878 kubelet[2351]: I0213 20:25:16.489852 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:25:16.490160 kubelet[2351]: E0213 20:25:16.490131 2351 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Feb 13 20:25:16.528152 containerd[1529]: time="2025-02-13T20:25:16.527995901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:77a1a924982be3ed34b867b542c7ff75,Namespace:kube-system,Attempt:0,}" Feb 13 20:25:16.552689 containerd[1529]: time="2025-02-13T20:25:16.552610421Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}" Feb 13 20:25:16.552758 containerd[1529]: time="2025-02-13T20:25:16.552610442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}" Feb 13 20:25:16.659345 kubelet[2351]: E0213 20:25:16.659274 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="800ms" Feb 13 20:25:16.891220 kubelet[2351]: I0213 20:25:16.891193 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:25:16.891362 kubelet[2351]: E0213 20:25:16.891347 2351 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Feb 13 20:25:16.971122 kubelet[2351]: W0213 20:25:16.971083 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Feb 13 20:25:16.971122 kubelet[2351]: E0213 20:25:16.971125 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:17.027255 kubelet[2351]: W0213 20:25:17.027212 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Feb 13 20:25:17.027255 kubelet[2351]: E0213 20:25:17.027259 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:17.451084 kubelet[2351]: W0213 20:25:17.451051 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Feb 13 20:25:17.451210 kubelet[2351]: E0213 20:25:17.451096 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:17.459543 kubelet[2351]: E0213 20:25:17.459514 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="1.6s" Feb 13 20:25:17.485343 kubelet[2351]: W0213 20:25:17.485281 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Feb 13 20:25:17.485343 kubelet[2351]: E0213 
20:25:17.485324 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:17.692364 kubelet[2351]: I0213 20:25:17.692324 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:25:17.692677 kubelet[2351]: E0213 20:25:17.692661 2351 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Feb 13 20:25:17.734951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount409816102.mount: Deactivated successfully. Feb 13 20:25:17.737643 containerd[1529]: time="2025-02-13T20:25:17.737551205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:25:17.738289 containerd[1529]: time="2025-02-13T20:25:17.738264595Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 20:25:17.738758 containerd[1529]: time="2025-02-13T20:25:17.738737589Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:25:17.740398 containerd[1529]: time="2025-02-13T20:25:17.740378331Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:25:17.740442 containerd[1529]: time="2025-02-13T20:25:17.740428700Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:25:17.741612 containerd[1529]: time="2025-02-13T20:25:17.741594850Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:25:17.741978 containerd[1529]: time="2025-02-13T20:25:17.741867025Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:25:17.742159 containerd[1529]: time="2025-02-13T20:25:17.742146301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:25:17.743249 containerd[1529]: time="2025-02-13T20:25:17.743233191Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.190556064s" Feb 13 20:25:17.744807 containerd[1529]: time="2025-02-13T20:25:17.744754504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.192003957s" Feb 13 20:25:17.745870 containerd[1529]: time="2025-02-13T20:25:17.745740746Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.217695773s" Feb 13 20:25:17.868737 containerd[1529]: time="2025-02-13T20:25:17.868661077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:25:17.868875 containerd[1529]: time="2025-02-13T20:25:17.868843862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:25:17.868993 containerd[1529]: time="2025-02-13T20:25:17.868866282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:17.868993 containerd[1529]: time="2025-02-13T20:25:17.868966612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:17.871508 containerd[1529]: time="2025-02-13T20:25:17.871359838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:25:17.871508 containerd[1529]: time="2025-02-13T20:25:17.871413286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:25:17.871508 containerd[1529]: time="2025-02-13T20:25:17.871433266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:17.872189 kubelet[2351]: E0213 20:25:17.872160 2351 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:17.873474 containerd[1529]: time="2025-02-13T20:25:17.871723427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:17.874156 containerd[1529]: time="2025-02-13T20:25:17.872830438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:25:17.874241 containerd[1529]: time="2025-02-13T20:25:17.874148964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:25:17.874241 containerd[1529]: time="2025-02-13T20:25:17.874225232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:17.874372 containerd[1529]: time="2025-02-13T20:25:17.874342758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:17.888984 systemd[1]: Started cri-containerd-ed4320290bf6aa825b647a832b9ae74e07373aff4d3c2749ee9401dd99fd8410.scope - libcontainer container ed4320290bf6aa825b647a832b9ae74e07373aff4d3c2749ee9401dd99fd8410. Feb 13 20:25:17.899729 systemd[1]: Started cri-containerd-bfab2cfe5dffaa65a7a8aec7b6b35ae68abf37196ff8605916855dd624062e2c.scope - libcontainer container bfab2cfe5dffaa65a7a8aec7b6b35ae68abf37196ff8605916855dd624062e2c. 
Feb 13 20:25:17.901180 systemd[1]: Started cri-containerd-f0d638c03c367f7ece865544c29d9681ca6f3559521e6af56e10c9706c60e75b.scope - libcontainer container f0d638c03c367f7ece865544c29d9681ca6f3559521e6af56e10c9706c60e75b. Feb 13 20:25:17.941074 containerd[1529]: time="2025-02-13T20:25:17.940961439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed4320290bf6aa825b647a832b9ae74e07373aff4d3c2749ee9401dd99fd8410\"" Feb 13 20:25:17.943106 containerd[1529]: time="2025-02-13T20:25:17.943058838Z" level=info msg="CreateContainer within sandbox \"ed4320290bf6aa825b647a832b9ae74e07373aff4d3c2749ee9401dd99fd8410\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:25:17.946060 containerd[1529]: time="2025-02-13T20:25:17.946009672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:77a1a924982be3ed34b867b542c7ff75,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfab2cfe5dffaa65a7a8aec7b6b35ae68abf37196ff8605916855dd624062e2c\"" Feb 13 20:25:17.949017 containerd[1529]: time="2025-02-13T20:25:17.948728543Z" level=info msg="CreateContainer within sandbox \"bfab2cfe5dffaa65a7a8aec7b6b35ae68abf37196ff8605916855dd624062e2c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:25:17.954063 containerd[1529]: time="2025-02-13T20:25:17.954033512Z" level=info msg="CreateContainer within sandbox \"ed4320290bf6aa825b647a832b9ae74e07373aff4d3c2749ee9401dd99fd8410\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"627848fb52d455b041c9de8a3f22adff704b27ca520ace506ed876702c2c60d8\"" Feb 13 20:25:17.954681 containerd[1529]: time="2025-02-13T20:25:17.954669312Z" level=info msg="StartContainer for \"627848fb52d455b041c9de8a3f22adff704b27ca520ace506ed876702c2c60d8\"" Feb 13 20:25:17.959435 containerd[1529]: 
time="2025-02-13T20:25:17.959403346Z" level=info msg="CreateContainer within sandbox \"bfab2cfe5dffaa65a7a8aec7b6b35ae68abf37196ff8605916855dd624062e2c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"769378302210d47362b7bca2afa458a250cc75bfe5ee46be0a84bf98929f8d62\"" Feb 13 20:25:17.960010 containerd[1529]: time="2025-02-13T20:25:17.959997607Z" level=info msg="StartContainer for \"769378302210d47362b7bca2afa458a250cc75bfe5ee46be0a84bf98929f8d62\"" Feb 13 20:25:17.966943 containerd[1529]: time="2025-02-13T20:25:17.966816927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0d638c03c367f7ece865544c29d9681ca6f3559521e6af56e10c9706c60e75b\"" Feb 13 20:25:17.968751 containerd[1529]: time="2025-02-13T20:25:17.968675750Z" level=info msg="CreateContainer within sandbox \"f0d638c03c367f7ece865544c29d9681ca6f3559521e6af56e10c9706c60e75b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:25:17.978799 containerd[1529]: time="2025-02-13T20:25:17.978728484Z" level=info msg="CreateContainer within sandbox \"f0d638c03c367f7ece865544c29d9681ca6f3559521e6af56e10c9706c60e75b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b703c84d0929931ce0db21788a87637c9847be48ed301863477ff72e7e944cd7\"" Feb 13 20:25:17.979182 containerd[1529]: time="2025-02-13T20:25:17.979081639Z" level=info msg="StartContainer for \"b703c84d0929931ce0db21788a87637c9847be48ed301863477ff72e7e944cd7\"" Feb 13 20:25:17.984775 systemd[1]: Started cri-containerd-769378302210d47362b7bca2afa458a250cc75bfe5ee46be0a84bf98929f8d62.scope - libcontainer container 769378302210d47362b7bca2afa458a250cc75bfe5ee46be0a84bf98929f8d62. 
Feb 13 20:25:17.988310 systemd[1]: Started cri-containerd-627848fb52d455b041c9de8a3f22adff704b27ca520ace506ed876702c2c60d8.scope - libcontainer container 627848fb52d455b041c9de8a3f22adff704b27ca520ace506ed876702c2c60d8. Feb 13 20:25:18.007754 systemd[1]: Started cri-containerd-b703c84d0929931ce0db21788a87637c9847be48ed301863477ff72e7e944cd7.scope - libcontainer container b703c84d0929931ce0db21788a87637c9847be48ed301863477ff72e7e944cd7. Feb 13 20:25:18.054655 containerd[1529]: time="2025-02-13T20:25:18.052891725Z" level=info msg="StartContainer for \"627848fb52d455b041c9de8a3f22adff704b27ca520ace506ed876702c2c60d8\" returns successfully" Feb 13 20:25:18.054655 containerd[1529]: time="2025-02-13T20:25:18.052969899Z" level=info msg="StartContainer for \"769378302210d47362b7bca2afa458a250cc75bfe5ee46be0a84bf98929f8d62\" returns successfully" Feb 13 20:25:18.062806 containerd[1529]: time="2025-02-13T20:25:18.062731094Z" level=info msg="StartContainer for \"b703c84d0929931ce0db21788a87637c9847be48ed301863477ff72e7e944cd7\" returns successfully" Feb 13 20:25:19.016687 kubelet[2351]: W0213 20:25:19.016654 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Feb 13 20:25:19.016687 kubelet[2351]: E0213 20:25:19.016691 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:25:19.059984 kubelet[2351]: E0213 20:25:19.059955 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="3.2s" Feb 13 20:25:19.294556 kubelet[2351]: I0213 20:25:19.294395 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:25:20.308958 kubelet[2351]: I0213 20:25:20.308777 2351 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 20:25:20.962904 kubelet[2351]: I0213 20:25:20.962681 2351 apiserver.go:52] "Watching apiserver" Feb 13 20:25:21.057203 kubelet[2351]: I0213 20:25:21.057177 2351 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:25:22.178830 systemd[1]: Reloading requested from client PID 2627 ('systemctl') (unit session-9.scope)... Feb 13 20:25:22.178843 systemd[1]: Reloading... Feb 13 20:25:22.237664 zram_generator::config[2669]: No configuration found. Feb 13 20:25:22.308174 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Feb 13 20:25:22.322999 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:25:22.374090 systemd[1]: Reloading finished in 194 ms. Feb 13 20:25:22.399350 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:25:22.402836 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:25:22.403018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:25:22.407764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:25:22.597466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:25:22.601830 (kubelet)[2732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:25:22.713670 kubelet[2732]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:25:22.713670 kubelet[2732]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:25:22.713670 kubelet[2732]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:25:22.713914 kubelet[2732]: I0213 20:25:22.713727 2732 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:25:22.721637 kubelet[2732]: I0213 20:25:22.721424 2732 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:25:22.721637 kubelet[2732]: I0213 20:25:22.721443 2732 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:25:22.721637 kubelet[2732]: I0213 20:25:22.721589 2732 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:25:22.722512 kubelet[2732]: I0213 20:25:22.722473 2732 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 20:25:22.727950 kubelet[2732]: I0213 20:25:22.727816 2732 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:25:22.731977 kubelet[2732]: E0213 20:25:22.731942 2732 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:25:22.731977 kubelet[2732]: I0213 20:25:22.731974 2732 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:25:22.735696 kubelet[2732]: I0213 20:25:22.735520 2732 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:25:22.739093 kubelet[2732]: I0213 20:25:22.738988 2732 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:25:22.739166 kubelet[2732]: I0213 20:25:22.739115 2732 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:25:22.739389 kubelet[2732]: I0213 20:25:22.739138 2732 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:25:22.739456 kubelet[2732]: I0213 20:25:22.739393 2732 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:25:22.739456 kubelet[2732]: I0213 20:25:22.739402 2732 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:25:22.739456 kubelet[2732]: I0213 20:25:22.739424 2732 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:25:22.746500 kubelet[2732]: I0213 20:25:22.745144 2732 kubelet.go:408] "Attempting 
to sync node with API server" Feb 13 20:25:22.746500 kubelet[2732]: I0213 20:25:22.745169 2732 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:25:22.746500 kubelet[2732]: I0213 20:25:22.745220 2732 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:25:22.746500 kubelet[2732]: I0213 20:25:22.745231 2732 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:25:22.746811 kubelet[2732]: I0213 20:25:22.746793 2732 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:25:22.747594 kubelet[2732]: I0213 20:25:22.747102 2732 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:25:22.747594 kubelet[2732]: I0213 20:25:22.747497 2732 server.go:1269] "Started kubelet" Feb 13 20:25:22.750772 kubelet[2732]: I0213 20:25:22.750750 2732 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:25:22.753501 kubelet[2732]: I0213 20:25:22.753383 2732 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:25:22.758109 kubelet[2732]: I0213 20:25:22.758093 2732 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:25:22.759088 kubelet[2732]: I0213 20:25:22.759063 2732 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:25:22.759580 kubelet[2732]: I0213 20:25:22.759570 2732 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:25:22.759726 kubelet[2732]: I0213 20:25:22.759719 2732 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:25:22.759992 kubelet[2732]: I0213 20:25:22.759957 2732 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:25:22.760096 kubelet[2732]: I0213 20:25:22.760084 2732 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:25:22.760205 
kubelet[2732]: I0213 20:25:22.760193 2732 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:25:22.761552 kubelet[2732]: I0213 20:25:22.761536 2732 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:25:22.762413 kubelet[2732]: I0213 20:25:22.762313 2732 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:25:22.762413 kubelet[2732]: I0213 20:25:22.762332 2732 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:25:22.762413 kubelet[2732]: I0213 20:25:22.762342 2732 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:25:22.762413 kubelet[2732]: E0213 20:25:22.762364 2732 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:25:22.762899 kubelet[2732]: I0213 20:25:22.762797 2732 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:25:22.762899 kubelet[2732]: I0213 20:25:22.762863 2732 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:25:22.768857 kubelet[2732]: I0213 20:25:22.768833 2732 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:25:22.795415 kubelet[2732]: I0213 20:25:22.795399 2732 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:25:22.795757 kubelet[2732]: I0213 20:25:22.795549 2732 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:25:22.795757 kubelet[2732]: I0213 20:25:22.795564 2732 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:25:22.795757 kubelet[2732]: I0213 20:25:22.795692 2732 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:25:22.795757 
kubelet[2732]: I0213 20:25:22.795700 2732 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:25:22.795757 kubelet[2732]: I0213 20:25:22.795712 2732 policy_none.go:49] "None policy: Start" Feb 13 20:25:22.796430 kubelet[2732]: I0213 20:25:22.796412 2732 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:25:22.796483 kubelet[2732]: I0213 20:25:22.796463 2732 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:25:22.796650 kubelet[2732]: I0213 20:25:22.796637 2732 state_mem.go:75] "Updated machine memory state" Feb 13 20:25:22.800378 kubelet[2732]: I0213 20:25:22.799853 2732 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:25:22.800378 kubelet[2732]: I0213 20:25:22.799964 2732 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:25:22.800378 kubelet[2732]: I0213 20:25:22.799971 2732 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:25:22.800378 kubelet[2732]: I0213 20:25:22.800106 2732 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:25:22.904441 kubelet[2732]: I0213 20:25:22.903245 2732 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:25:22.912287 kubelet[2732]: I0213 20:25:22.912188 2732 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 20:25:22.912287 kubelet[2732]: I0213 20:25:22.912247 2732 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 20:25:23.061165 kubelet[2732]: I0213 20:25:23.061025 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 
20:25:23.061165 kubelet[2732]: I0213 20:25:23.061051 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:25:23.061165 kubelet[2732]: I0213 20:25:23.061063 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:25:23.061165 kubelet[2732]: I0213 20:25:23.061073 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:25:23.061165 kubelet[2732]: I0213 20:25:23.061082 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77a1a924982be3ed34b867b542c7ff75-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"77a1a924982be3ed34b867b542c7ff75\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:25:23.061322 kubelet[2732]: I0213 20:25:23.061091 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77a1a924982be3ed34b867b542c7ff75-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"77a1a924982be3ed34b867b542c7ff75\") " pod="kube-system/kube-apiserver-localhost" Feb 13 
20:25:23.061322 kubelet[2732]: I0213 20:25:23.061101 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77a1a924982be3ed34b867b542c7ff75-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"77a1a924982be3ed34b867b542c7ff75\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:25:23.061322 kubelet[2732]: I0213 20:25:23.061109 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:25:23.061322 kubelet[2732]: I0213 20:25:23.061117 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:25:23.746082 kubelet[2732]: I0213 20:25:23.746008 2732 apiserver.go:52] "Watching apiserver" Feb 13 20:25:23.760722 kubelet[2732]: I0213 20:25:23.760690 2732 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:25:23.813877 kubelet[2732]: E0213 20:25:23.813667 2732 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 20:25:23.814774 kubelet[2732]: I0213 20:25:23.814726 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.814715095 podStartE2EDuration="1.814715095s" podCreationTimestamp="2025-02-13 20:25:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:25:23.81376534 +0000 UTC m=+1.125888403" watchObservedRunningTime="2025-02-13 20:25:23.814715095 +0000 UTC m=+1.126838158" Feb 13 20:25:23.828129 kubelet[2732]: E0213 20:25:23.827974 2732 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:25:23.948331 kubelet[2732]: I0213 20:25:23.948291 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.948278613 podStartE2EDuration="1.948278613s" podCreationTimestamp="2025-02-13 20:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:25:23.871716941 +0000 UTC m=+1.183840005" watchObservedRunningTime="2025-02-13 20:25:23.948278613 +0000 UTC m=+1.260401678" Feb 13 20:25:23.996608 kubelet[2732]: I0213 20:25:23.996444 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.996429791 podStartE2EDuration="1.996429791s" podCreationTimestamp="2025-02-13 20:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:25:23.948922673 +0000 UTC m=+1.261045744" watchObservedRunningTime="2025-02-13 20:25:23.996429791 +0000 UTC m=+1.308552852" Feb 13 20:25:26.281251 kubelet[2732]: I0213 20:25:26.281230 2732 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:25:26.282209 containerd[1529]: time="2025-02-13T20:25:26.281677664Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 20:25:26.282368 kubelet[2732]: I0213 20:25:26.281786 2732 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:25:26.991837 systemd[1]: Created slice kubepods-besteffort-pod76c62ae1_8557_47f5_8e71_64777f820060.slice - libcontainer container kubepods-besteffort-pod76c62ae1_8557_47f5_8e71_64777f820060.slice. Feb 13 20:25:27.085600 kubelet[2732]: I0213 20:25:27.085577 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76c62ae1-8557-47f5-8e71-64777f820060-kube-proxy\") pod \"kube-proxy-cqtn2\" (UID: \"76c62ae1-8557-47f5-8e71-64777f820060\") " pod="kube-system/kube-proxy-cqtn2" Feb 13 20:25:27.085758 kubelet[2732]: I0213 20:25:27.085748 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76c62ae1-8557-47f5-8e71-64777f820060-xtables-lock\") pod \"kube-proxy-cqtn2\" (UID: \"76c62ae1-8557-47f5-8e71-64777f820060\") " pod="kube-system/kube-proxy-cqtn2" Feb 13 20:25:27.085826 kubelet[2732]: I0213 20:25:27.085815 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkstt\" (UniqueName: \"kubernetes.io/projected/76c62ae1-8557-47f5-8e71-64777f820060-kube-api-access-jkstt\") pod \"kube-proxy-cqtn2\" (UID: \"76c62ae1-8557-47f5-8e71-64777f820060\") " pod="kube-system/kube-proxy-cqtn2" Feb 13 20:25:27.085898 kubelet[2732]: I0213 20:25:27.085890 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76c62ae1-8557-47f5-8e71-64777f820060-lib-modules\") pod \"kube-proxy-cqtn2\" (UID: \"76c62ae1-8557-47f5-8e71-64777f820060\") " pod="kube-system/kube-proxy-cqtn2" Feb 13 20:25:27.227914 systemd[1]: Created slice 
kubepods-besteffort-pod041ba18f_33aa_4315_a1d7_756cb6544460.slice - libcontainer container kubepods-besteffort-pod041ba18f_33aa_4315_a1d7_756cb6544460.slice. Feb 13 20:25:27.287477 kubelet[2732]: I0213 20:25:27.287358 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn4x6\" (UniqueName: \"kubernetes.io/projected/041ba18f-33aa-4315-a1d7-756cb6544460-kube-api-access-dn4x6\") pod \"tigera-operator-76c4976dd7-bgfvs\" (UID: \"041ba18f-33aa-4315-a1d7-756cb6544460\") " pod="tigera-operator/tigera-operator-76c4976dd7-bgfvs" Feb 13 20:25:27.287477 kubelet[2732]: I0213 20:25:27.287393 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/041ba18f-33aa-4315-a1d7-756cb6544460-var-lib-calico\") pod \"tigera-operator-76c4976dd7-bgfvs\" (UID: \"041ba18f-33aa-4315-a1d7-756cb6544460\") " pod="tigera-operator/tigera-operator-76c4976dd7-bgfvs" Feb 13 20:25:27.299147 containerd[1529]: time="2025-02-13T20:25:27.299125736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cqtn2,Uid:76c62ae1-8557-47f5-8e71-64777f820060,Namespace:kube-system,Attempt:0,}" Feb 13 20:25:27.338673 containerd[1529]: time="2025-02-13T20:25:27.337916306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:25:27.338673 containerd[1529]: time="2025-02-13T20:25:27.338602029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:25:27.338673 containerd[1529]: time="2025-02-13T20:25:27.338626117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:27.338842 containerd[1529]: time="2025-02-13T20:25:27.338681008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:27.358765 systemd[1]: Started cri-containerd-992b3a834a0f01312add47c40c650776e6e1e972f3f2cc999478077e883636f5.scope - libcontainer container 992b3a834a0f01312add47c40c650776e6e1e972f3f2cc999478077e883636f5. Feb 13 20:25:27.376776 containerd[1529]: time="2025-02-13T20:25:27.376754093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cqtn2,Uid:76c62ae1-8557-47f5-8e71-64777f820060,Namespace:kube-system,Attempt:0,} returns sandbox id \"992b3a834a0f01312add47c40c650776e6e1e972f3f2cc999478077e883636f5\"" Feb 13 20:25:27.379917 containerd[1529]: time="2025-02-13T20:25:27.379894065Z" level=info msg="CreateContainer within sandbox \"992b3a834a0f01312add47c40c650776e6e1e972f3f2cc999478077e883636f5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:25:27.413488 sudo[1843]: pam_unix(sudo:session): session closed for user root Feb 13 20:25:27.426169 sshd[1840]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:27.428724 systemd[1]: sshd@6-139.178.70.108:22-139.178.89.65:47940.service: Deactivated successfully. Feb 13 20:25:27.430300 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:25:27.430441 systemd[1]: session-9.scope: Consumed 2.723s CPU time, 141.9M memory peak, 0B memory swap peak. Feb 13 20:25:27.430818 systemd-logind[1510]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:25:27.431402 systemd-logind[1510]: Removed session 9. 
Feb 13 20:25:27.461274 containerd[1529]: time="2025-02-13T20:25:27.461197125Z" level=info msg="CreateContainer within sandbox \"992b3a834a0f01312add47c40c650776e6e1e972f3f2cc999478077e883636f5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e3ed0285491e494a11f9e3e4f8ec48ea7c49b9363cab242e60e542179a6a5bf1\"" Feb 13 20:25:27.461801 containerd[1529]: time="2025-02-13T20:25:27.461780318Z" level=info msg="StartContainer for \"e3ed0285491e494a11f9e3e4f8ec48ea7c49b9363cab242e60e542179a6a5bf1\"" Feb 13 20:25:27.481726 systemd[1]: Started cri-containerd-e3ed0285491e494a11f9e3e4f8ec48ea7c49b9363cab242e60e542179a6a5bf1.scope - libcontainer container e3ed0285491e494a11f9e3e4f8ec48ea7c49b9363cab242e60e542179a6a5bf1. Feb 13 20:25:27.506740 containerd[1529]: time="2025-02-13T20:25:27.506701543Z" level=info msg="StartContainer for \"e3ed0285491e494a11f9e3e4f8ec48ea7c49b9363cab242e60e542179a6a5bf1\" returns successfully" Feb 13 20:25:27.531553 containerd[1529]: time="2025-02-13T20:25:27.531311587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-bgfvs,Uid:041ba18f-33aa-4315-a1d7-756cb6544460,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:25:27.564689 containerd[1529]: time="2025-02-13T20:25:27.564508746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:25:27.564689 containerd[1529]: time="2025-02-13T20:25:27.564564085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:25:27.564689 containerd[1529]: time="2025-02-13T20:25:27.564575379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:27.565438 containerd[1529]: time="2025-02-13T20:25:27.564842166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:27.580758 systemd[1]: Started cri-containerd-2ac64ea3e7f1ae0e19c4f61ed53d1f0f1b8f4ab55ecf34772e04b2d30769f51b.scope - libcontainer container 2ac64ea3e7f1ae0e19c4f61ed53d1f0f1b8f4ab55ecf34772e04b2d30769f51b. Feb 13 20:25:27.613839 containerd[1529]: time="2025-02-13T20:25:27.613805451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-bgfvs,Uid:041ba18f-33aa-4315-a1d7-756cb6544460,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2ac64ea3e7f1ae0e19c4f61ed53d1f0f1b8f4ab55ecf34772e04b2d30769f51b\"" Feb 13 20:25:27.616571 containerd[1529]: time="2025-02-13T20:25:27.616514347Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:25:28.603545 kubelet[2732]: I0213 20:25:28.603495 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cqtn2" podStartSLOduration=2.603475367 podStartE2EDuration="2.603475367s" podCreationTimestamp="2025-02-13 20:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:25:27.812807388 +0000 UTC m=+5.124930469" watchObservedRunningTime="2025-02-13 20:25:28.603475367 +0000 UTC m=+5.915598672" Feb 13 20:25:29.129961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3127679824.mount: Deactivated successfully. 
Feb 13 20:25:29.797841 containerd[1529]: time="2025-02-13T20:25:29.797814330Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:29.804161 containerd[1529]: time="2025-02-13T20:25:29.804114371Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 20:25:29.811435 containerd[1529]: time="2025-02-13T20:25:29.811392003Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:29.822541 containerd[1529]: time="2025-02-13T20:25:29.822499691Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:29.823205 containerd[1529]: time="2025-02-13T20:25:29.822896500Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.206347797s" Feb 13 20:25:29.823205 containerd[1529]: time="2025-02-13T20:25:29.822919145Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 20:25:29.867688 containerd[1529]: time="2025-02-13T20:25:29.867634144Z" level=info msg="CreateContainer within sandbox \"2ac64ea3e7f1ae0e19c4f61ed53d1f0f1b8f4ab55ecf34772e04b2d30769f51b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 20:25:29.874361 containerd[1529]: time="2025-02-13T20:25:29.874275130Z" level=info msg="CreateContainer within sandbox 
\"2ac64ea3e7f1ae0e19c4f61ed53d1f0f1b8f4ab55ecf34772e04b2d30769f51b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a5d4c58d4a0b0112a609de1ee1eea617e95934f6871792acf58d100223cb5963\"" Feb 13 20:25:29.877008 containerd[1529]: time="2025-02-13T20:25:29.876237783Z" level=info msg="StartContainer for \"a5d4c58d4a0b0112a609de1ee1eea617e95934f6871792acf58d100223cb5963\"" Feb 13 20:25:29.911840 systemd[1]: Started cri-containerd-a5d4c58d4a0b0112a609de1ee1eea617e95934f6871792acf58d100223cb5963.scope - libcontainer container a5d4c58d4a0b0112a609de1ee1eea617e95934f6871792acf58d100223cb5963. Feb 13 20:25:29.931267 containerd[1529]: time="2025-02-13T20:25:29.931228789Z" level=info msg="StartContainer for \"a5d4c58d4a0b0112a609de1ee1eea617e95934f6871792acf58d100223cb5963\" returns successfully" Feb 13 20:25:31.641324 kubelet[2732]: I0213 20:25:31.641242 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-bgfvs" podStartSLOduration=2.389791938 podStartE2EDuration="4.641230747s" podCreationTimestamp="2025-02-13 20:25:27 +0000 UTC" firstStartedPulling="2025-02-13 20:25:27.61491092 +0000 UTC m=+4.927033987" lastFinishedPulling="2025-02-13 20:25:29.866349727 +0000 UTC m=+7.178472796" observedRunningTime="2025-02-13 20:25:30.81692049 +0000 UTC m=+8.129043563" watchObservedRunningTime="2025-02-13 20:25:31.641230747 +0000 UTC m=+8.953353819" Feb 13 20:25:33.475418 systemd[1]: Created slice kubepods-besteffort-pod1f333b1f_a86e_42da_b73b_bd370b357b94.slice - libcontainer container kubepods-besteffort-pod1f333b1f_a86e_42da_b73b_bd370b357b94.slice. Feb 13 20:25:33.593506 systemd[1]: Created slice kubepods-besteffort-pod390748e5_ee82_42f4_af7b_760a1117fdd3.slice - libcontainer container kubepods-besteffort-pod390748e5_ee82_42f4_af7b_760a1117fdd3.slice. 
Feb 13 20:25:33.667395 kubelet[2732]: I0213 20:25:33.667370 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f333b1f-a86e-42da-b73b-bd370b357b94-tigera-ca-bundle\") pod \"calico-typha-5db5b7cd5-bq9q7\" (UID: \"1f333b1f-a86e-42da-b73b-bd370b357b94\") " pod="calico-system/calico-typha-5db5b7cd5-bq9q7" Feb 13 20:25:33.667757 kubelet[2732]: I0213 20:25:33.667679 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb4nk\" (UniqueName: \"kubernetes.io/projected/390748e5-ee82-42f4-af7b-760a1117fdd3-kube-api-access-rb4nk\") pod \"calico-node-pwhfg\" (UID: \"390748e5-ee82-42f4-af7b-760a1117fdd3\") " pod="calico-system/calico-node-pwhfg" Feb 13 20:25:33.667757 kubelet[2732]: I0213 20:25:33.667698 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/390748e5-ee82-42f4-af7b-760a1117fdd3-flexvol-driver-host\") pod \"calico-node-pwhfg\" (UID: \"390748e5-ee82-42f4-af7b-760a1117fdd3\") " pod="calico-system/calico-node-pwhfg" Feb 13 20:25:33.667757 kubelet[2732]: I0213 20:25:33.667709 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/390748e5-ee82-42f4-af7b-760a1117fdd3-tigera-ca-bundle\") pod \"calico-node-pwhfg\" (UID: \"390748e5-ee82-42f4-af7b-760a1117fdd3\") " pod="calico-system/calico-node-pwhfg" Feb 13 20:25:33.667757 kubelet[2732]: I0213 20:25:33.667719 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/390748e5-ee82-42f4-af7b-760a1117fdd3-var-run-calico\") pod \"calico-node-pwhfg\" (UID: \"390748e5-ee82-42f4-af7b-760a1117fdd3\") " pod="calico-system/calico-node-pwhfg" Feb 13 
20:25:33.667757 kubelet[2732]: I0213 20:25:33.667728 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/390748e5-ee82-42f4-af7b-760a1117fdd3-lib-modules\") pod \"calico-node-pwhfg\" (UID: \"390748e5-ee82-42f4-af7b-760a1117fdd3\") " pod="calico-system/calico-node-pwhfg" Feb 13 20:25:33.667932 kubelet[2732]: I0213 20:25:33.667759 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/390748e5-ee82-42f4-af7b-760a1117fdd3-policysync\") pod \"calico-node-pwhfg\" (UID: \"390748e5-ee82-42f4-af7b-760a1117fdd3\") " pod="calico-system/calico-node-pwhfg" Feb 13 20:25:33.667932 kubelet[2732]: I0213 20:25:33.667778 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/390748e5-ee82-42f4-af7b-760a1117fdd3-cni-bin-dir\") pod \"calico-node-pwhfg\" (UID: \"390748e5-ee82-42f4-af7b-760a1117fdd3\") " pod="calico-system/calico-node-pwhfg" Feb 13 20:25:33.667932 kubelet[2732]: I0213 20:25:33.667796 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/390748e5-ee82-42f4-af7b-760a1117fdd3-var-lib-calico\") pod \"calico-node-pwhfg\" (UID: \"390748e5-ee82-42f4-af7b-760a1117fdd3\") " pod="calico-system/calico-node-pwhfg" Feb 13 20:25:33.667932 kubelet[2732]: I0213 20:25:33.667805 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/390748e5-ee82-42f4-af7b-760a1117fdd3-cni-net-dir\") pod \"calico-node-pwhfg\" (UID: \"390748e5-ee82-42f4-af7b-760a1117fdd3\") " pod="calico-system/calico-node-pwhfg" Feb 13 20:25:33.667932 kubelet[2732]: I0213 20:25:33.667816 2732 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1f333b1f-a86e-42da-b73b-bd370b357b94-typha-certs\") pod \"calico-typha-5db5b7cd5-bq9q7\" (UID: \"1f333b1f-a86e-42da-b73b-bd370b357b94\") " pod="calico-system/calico-typha-5db5b7cd5-bq9q7" Feb 13 20:25:33.668013 kubelet[2732]: I0213 20:25:33.667827 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/390748e5-ee82-42f4-af7b-760a1117fdd3-xtables-lock\") pod \"calico-node-pwhfg\" (UID: \"390748e5-ee82-42f4-af7b-760a1117fdd3\") " pod="calico-system/calico-node-pwhfg" Feb 13 20:25:33.668013 kubelet[2732]: I0213 20:25:33.667838 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/390748e5-ee82-42f4-af7b-760a1117fdd3-cni-log-dir\") pod \"calico-node-pwhfg\" (UID: \"390748e5-ee82-42f4-af7b-760a1117fdd3\") " pod="calico-system/calico-node-pwhfg" Feb 13 20:25:33.668013 kubelet[2732]: I0213 20:25:33.667853 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/390748e5-ee82-42f4-af7b-760a1117fdd3-node-certs\") pod \"calico-node-pwhfg\" (UID: \"390748e5-ee82-42f4-af7b-760a1117fdd3\") " pod="calico-system/calico-node-pwhfg" Feb 13 20:25:33.668013 kubelet[2732]: I0213 20:25:33.667877 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5c2m\" (UniqueName: \"kubernetes.io/projected/1f333b1f-a86e-42da-b73b-bd370b357b94-kube-api-access-k5c2m\") pod \"calico-typha-5db5b7cd5-bq9q7\" (UID: \"1f333b1f-a86e-42da-b73b-bd370b357b94\") " pod="calico-system/calico-typha-5db5b7cd5-bq9q7" Feb 13 20:25:33.796794 kubelet[2732]: E0213 20:25:33.795721 2732 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cntw8" podUID="b708f66b-cea3-4952-b290-318b6ed2fb1d" Feb 13 20:25:33.813674 kubelet[2732]: E0213 20:25:33.797500 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.813674 kubelet[2732]: W0213 20:25:33.797510 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.813674 kubelet[2732]: E0213 20:25:33.797525 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.813674 kubelet[2732]: E0213 20:25:33.797961 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.813674 kubelet[2732]: W0213 20:25:33.797967 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.813674 kubelet[2732]: E0213 20:25:33.797973 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.859634 kubelet[2732]: E0213 20:25:33.859301 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.859634 kubelet[2732]: W0213 20:25:33.859318 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.859634 kubelet[2732]: E0213 20:25:33.859332 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.866015 kubelet[2732]: E0213 20:25:33.865995 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.866229 kubelet[2732]: W0213 20:25:33.866213 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.866393 kubelet[2732]: E0213 20:25:33.866280 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.876124 kubelet[2732]: E0213 20:25:33.876056 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.876124 kubelet[2732]: W0213 20:25:33.876077 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.876124 kubelet[2732]: E0213 20:25:33.876094 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.877606 kubelet[2732]: E0213 20:25:33.876229 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.877606 kubelet[2732]: W0213 20:25:33.876240 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.877606 kubelet[2732]: E0213 20:25:33.876246 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.877606 kubelet[2732]: E0213 20:25:33.876350 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.877606 kubelet[2732]: W0213 20:25:33.876357 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.877606 kubelet[2732]: E0213 20:25:33.876365 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.877606 kubelet[2732]: E0213 20:25:33.876476 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.877606 kubelet[2732]: W0213 20:25:33.876482 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.877606 kubelet[2732]: E0213 20:25:33.876489 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.877606 kubelet[2732]: E0213 20:25:33.876601 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.877833 kubelet[2732]: W0213 20:25:33.876607 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.877833 kubelet[2732]: E0213 20:25:33.876613 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.877833 kubelet[2732]: E0213 20:25:33.876719 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.877833 kubelet[2732]: W0213 20:25:33.876726 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.877833 kubelet[2732]: E0213 20:25:33.876733 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.877833 kubelet[2732]: E0213 20:25:33.876834 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.877833 kubelet[2732]: W0213 20:25:33.876840 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.877833 kubelet[2732]: E0213 20:25:33.876847 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.877833 kubelet[2732]: E0213 20:25:33.876939 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.877833 kubelet[2732]: W0213 20:25:33.876944 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.877991 kubelet[2732]: E0213 20:25:33.876949 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.877991 kubelet[2732]: E0213 20:25:33.877143 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.877991 kubelet[2732]: W0213 20:25:33.877150 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.877991 kubelet[2732]: E0213 20:25:33.877157 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.877991 kubelet[2732]: E0213 20:25:33.877250 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.877991 kubelet[2732]: W0213 20:25:33.877255 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.877991 kubelet[2732]: E0213 20:25:33.877261 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.877991 kubelet[2732]: E0213 20:25:33.877352 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.877991 kubelet[2732]: W0213 20:25:33.877363 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.877991 kubelet[2732]: E0213 20:25:33.877374 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.878135 kubelet[2732]: E0213 20:25:33.877485 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.878135 kubelet[2732]: W0213 20:25:33.877490 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.878135 kubelet[2732]: E0213 20:25:33.877496 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.878306 kubelet[2732]: E0213 20:25:33.878250 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.878306 kubelet[2732]: W0213 20:25:33.878256 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.878306 kubelet[2732]: E0213 20:25:33.878262 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.883845 kubelet[2732]: E0213 20:25:33.878359 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.883845 kubelet[2732]: W0213 20:25:33.878364 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.883845 kubelet[2732]: E0213 20:25:33.878369 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.883845 kubelet[2732]: E0213 20:25:33.878455 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.883845 kubelet[2732]: W0213 20:25:33.878460 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.883845 kubelet[2732]: E0213 20:25:33.878464 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.883845 kubelet[2732]: E0213 20:25:33.878577 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.883845 kubelet[2732]: W0213 20:25:33.878581 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.883845 kubelet[2732]: E0213 20:25:33.878586 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.883845 kubelet[2732]: E0213 20:25:33.878722 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.884004 kubelet[2732]: W0213 20:25:33.878726 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.884004 kubelet[2732]: E0213 20:25:33.878732 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.884004 kubelet[2732]: E0213 20:25:33.878814 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.884004 kubelet[2732]: W0213 20:25:33.878818 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.884004 kubelet[2732]: E0213 20:25:33.878824 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.884004 kubelet[2732]: E0213 20:25:33.878906 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.884004 kubelet[2732]: W0213 20:25:33.878910 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.884004 kubelet[2732]: E0213 20:25:33.878915 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.884004 kubelet[2732]: E0213 20:25:33.879001 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.884004 kubelet[2732]: W0213 20:25:33.879005 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.895295 kubelet[2732]: E0213 20:25:33.879010 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.902790 containerd[1529]: time="2025-02-13T20:25:33.902761990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5db5b7cd5-bq9q7,Uid:1f333b1f-a86e-42da-b73b-bd370b357b94,Namespace:calico-system,Attempt:0,}" Feb 13 20:25:33.903393 containerd[1529]: time="2025-02-13T20:25:33.903360460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pwhfg,Uid:390748e5-ee82-42f4-af7b-760a1117fdd3,Namespace:calico-system,Attempt:0,}" Feb 13 20:25:33.970830 kubelet[2732]: E0213 20:25:33.970753 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.970830 kubelet[2732]: W0213 20:25:33.970770 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.970830 kubelet[2732]: E0213 20:25:33.970787 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.970830 kubelet[2732]: I0213 20:25:33.970815 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b708f66b-cea3-4952-b290-318b6ed2fb1d-kubelet-dir\") pod \"csi-node-driver-cntw8\" (UID: \"b708f66b-cea3-4952-b290-318b6ed2fb1d\") " pod="calico-system/csi-node-driver-cntw8" Feb 13 20:25:33.971403 kubelet[2732]: E0213 20:25:33.970956 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.971403 kubelet[2732]: W0213 20:25:33.970965 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.971403 kubelet[2732]: E0213 20:25:33.970978 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.971403 kubelet[2732]: I0213 20:25:33.970992 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b708f66b-cea3-4952-b290-318b6ed2fb1d-registration-dir\") pod \"csi-node-driver-cntw8\" (UID: \"b708f66b-cea3-4952-b290-318b6ed2fb1d\") " pod="calico-system/csi-node-driver-cntw8" Feb 13 20:25:33.971889 kubelet[2732]: E0213 20:25:33.971681 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.971889 kubelet[2732]: W0213 20:25:33.971691 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.971889 kubelet[2732]: E0213 20:25:33.971705 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.971889 kubelet[2732]: I0213 20:25:33.971720 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqlw9\" (UniqueName: \"kubernetes.io/projected/b708f66b-cea3-4952-b290-318b6ed2fb1d-kube-api-access-mqlw9\") pod \"csi-node-driver-cntw8\" (UID: \"b708f66b-cea3-4952-b290-318b6ed2fb1d\") " pod="calico-system/csi-node-driver-cntw8" Feb 13 20:25:33.972172 kubelet[2732]: E0213 20:25:33.972006 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.972172 kubelet[2732]: W0213 20:25:33.972013 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.972172 kubelet[2732]: E0213 20:25:33.972025 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.972753 kubelet[2732]: E0213 20:25:33.972565 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.972753 kubelet[2732]: W0213 20:25:33.972575 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.972753 kubelet[2732]: E0213 20:25:33.972587 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.973004 kubelet[2732]: E0213 20:25:33.972848 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.973004 kubelet[2732]: W0213 20:25:33.972859 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.973004 kubelet[2732]: E0213 20:25:33.972873 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.973492 kubelet[2732]: E0213 20:25:33.973024 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.973492 kubelet[2732]: W0213 20:25:33.973031 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.973492 kubelet[2732]: E0213 20:25:33.973040 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.973492 kubelet[2732]: E0213 20:25:33.973159 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.973492 kubelet[2732]: W0213 20:25:33.973166 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.973492 kubelet[2732]: E0213 20:25:33.973178 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.973492 kubelet[2732]: I0213 20:25:33.973196 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b708f66b-cea3-4952-b290-318b6ed2fb1d-varrun\") pod \"csi-node-driver-cntw8\" (UID: \"b708f66b-cea3-4952-b290-318b6ed2fb1d\") " pod="calico-system/csi-node-driver-cntw8" Feb 13 20:25:33.974062 kubelet[2732]: E0213 20:25:33.973895 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.974062 kubelet[2732]: W0213 20:25:33.973903 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.974062 kubelet[2732]: E0213 20:25:33.973916 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.975144 kubelet[2732]: E0213 20:25:33.975003 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.975144 kubelet[2732]: W0213 20:25:33.975019 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.975144 kubelet[2732]: E0213 20:25:33.975037 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.975144 kubelet[2732]: I0213 20:25:33.975077 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b708f66b-cea3-4952-b290-318b6ed2fb1d-socket-dir\") pod \"csi-node-driver-cntw8\" (UID: \"b708f66b-cea3-4952-b290-318b6ed2fb1d\") " pod="calico-system/csi-node-driver-cntw8" Feb 13 20:25:33.975257 kubelet[2732]: E0213 20:25:33.975240 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.975257 kubelet[2732]: W0213 20:25:33.975246 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.975257 kubelet[2732]: E0213 20:25:33.975255 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.975388 kubelet[2732]: E0213 20:25:33.975372 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.975388 kubelet[2732]: W0213 20:25:33.975379 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.975427 kubelet[2732]: E0213 20:25:33.975387 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.976255 kubelet[2732]: E0213 20:25:33.975582 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.976255 kubelet[2732]: W0213 20:25:33.975591 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.976255 kubelet[2732]: E0213 20:25:33.975603 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:33.976255 kubelet[2732]: E0213 20:25:33.975827 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.976255 kubelet[2732]: W0213 20:25:33.975835 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.976255 kubelet[2732]: E0213 20:25:33.975843 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.976255 kubelet[2732]: E0213 20:25:33.975949 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:33.976255 kubelet[2732]: W0213 20:25:33.975955 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:33.976255 kubelet[2732]: E0213 20:25:33.975963 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:33.984104 containerd[1529]: time="2025-02-13T20:25:33.983628279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:25:33.984104 containerd[1529]: time="2025-02-13T20:25:33.983685326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:25:33.984104 containerd[1529]: time="2025-02-13T20:25:33.983697577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:33.984104 containerd[1529]: time="2025-02-13T20:25:33.983782142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:34.004697 systemd[1]: Started cri-containerd-e8493478a628dfa209835db1c658346ef96113737542f14d019f3544af74f434.scope - libcontainer container e8493478a628dfa209835db1c658346ef96113737542f14d019f3544af74f434. Feb 13 20:25:34.035687 containerd[1529]: time="2025-02-13T20:25:34.034983333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:25:34.035687 containerd[1529]: time="2025-02-13T20:25:34.035051575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:25:34.035687 containerd[1529]: time="2025-02-13T20:25:34.035062745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:34.035687 containerd[1529]: time="2025-02-13T20:25:34.035151376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:25:34.049881 systemd[1]: Started cri-containerd-2dd46dabd6541d5c441aaf1bc7f6234431dc5dcc64836adc8fb7463ce036f1ad.scope - libcontainer container 2dd46dabd6541d5c441aaf1bc7f6234431dc5dcc64836adc8fb7463ce036f1ad. 
Feb 13 20:25:34.072631 containerd[1529]: time="2025-02-13T20:25:34.072322436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5db5b7cd5-bq9q7,Uid:1f333b1f-a86e-42da-b73b-bd370b357b94,Namespace:calico-system,Attempt:0,} returns sandbox id \"e8493478a628dfa209835db1c658346ef96113737542f14d019f3544af74f434\"" Feb 13 20:25:34.073569 containerd[1529]: time="2025-02-13T20:25:34.073556633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:25:34.077440 containerd[1529]: time="2025-02-13T20:25:34.077417681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pwhfg,Uid:390748e5-ee82-42f4-af7b-760a1117fdd3,Namespace:calico-system,Attempt:0,} returns sandbox id \"2dd46dabd6541d5c441aaf1bc7f6234431dc5dcc64836adc8fb7463ce036f1ad\"" Feb 13 20:25:34.081638 kubelet[2732]: E0213 20:25:34.081601 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.081638 kubelet[2732]: W0213 20:25:34.081629 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.081790 kubelet[2732]: E0213 20:25:34.081645 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:34.081827 kubelet[2732]: E0213 20:25:34.081793 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.081827 kubelet[2732]: W0213 20:25:34.081798 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.081827 kubelet[2732]: E0213 20:25:34.081806 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:34.081909 kubelet[2732]: E0213 20:25:34.081893 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.081909 kubelet[2732]: W0213 20:25:34.081898 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.081909 kubelet[2732]: E0213 20:25:34.081903 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:34.082049 kubelet[2732]: E0213 20:25:34.081989 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.082049 kubelet[2732]: W0213 20:25:34.081997 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.082049 kubelet[2732]: E0213 20:25:34.082004 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:34.082150 kubelet[2732]: E0213 20:25:34.082143 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.082250 kubelet[2732]: W0213 20:25:34.082193 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.082250 kubelet[2732]: E0213 20:25:34.082206 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:34.082341 kubelet[2732]: E0213 20:25:34.082336 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.082409 kubelet[2732]: W0213 20:25:34.082373 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.082409 kubelet[2732]: E0213 20:25:34.082387 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:34.082568 kubelet[2732]: E0213 20:25:34.082541 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.082568 kubelet[2732]: W0213 20:25:34.082547 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.082739 kubelet[2732]: E0213 20:25:34.082671 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:34.082885 kubelet[2732]: E0213 20:25:34.082880 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.082946 kubelet[2732]: W0213 20:25:34.082917 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.082946 kubelet[2732]: E0213 20:25:34.082930 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:34.083029 kubelet[2732]: E0213 20:25:34.083017 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.083029 kubelet[2732]: W0213 20:25:34.083026 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.083085 kubelet[2732]: E0213 20:25:34.083035 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:34.083126 kubelet[2732]: E0213 20:25:34.083115 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.083126 kubelet[2732]: W0213 20:25:34.083122 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.083166 kubelet[2732]: E0213 20:25:34.083127 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:34.083207 kubelet[2732]: E0213 20:25:34.083200 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.083207 kubelet[2732]: W0213 20:25:34.083205 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.083289 kubelet[2732]: E0213 20:25:34.083231 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:34.083289 kubelet[2732]: E0213 20:25:34.083282 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.083289 kubelet[2732]: W0213 20:25:34.083286 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.083406 kubelet[2732]: E0213 20:25:34.083304 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:34.083406 kubelet[2732]: E0213 20:25:34.083360 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.083406 kubelet[2732]: W0213 20:25:34.083363 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.083406 kubelet[2732]: E0213 20:25:34.083377 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:34.083499 kubelet[2732]: E0213 20:25:34.083438 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.083499 kubelet[2732]: W0213 20:25:34.083443 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.083499 kubelet[2732]: E0213 20:25:34.083455 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:34.094847 kubelet[2732]: E0213 20:25:34.083531 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.094847 kubelet[2732]: W0213 20:25:34.083535 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.094847 kubelet[2732]: E0213 20:25:34.083544 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:34.094847 kubelet[2732]: E0213 20:25:34.083676 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.094847 kubelet[2732]: W0213 20:25:34.083681 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.094847 kubelet[2732]: E0213 20:25:34.083690 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:34.094847 kubelet[2732]: E0213 20:25:34.083841 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.094847 kubelet[2732]: W0213 20:25:34.083846 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.094847 kubelet[2732]: E0213 20:25:34.083856 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:34.094847 kubelet[2732]: E0213 20:25:34.083953 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.095002 kubelet[2732]: W0213 20:25:34.083958 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.095002 kubelet[2732]: E0213 20:25:34.083967 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:34.095002 kubelet[2732]: E0213 20:25:34.084068 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.095002 kubelet[2732]: W0213 20:25:34.084073 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.095002 kubelet[2732]: E0213 20:25:34.084081 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:34.095002 kubelet[2732]: E0213 20:25:34.084213 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.095002 kubelet[2732]: W0213 20:25:34.084218 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.095002 kubelet[2732]: E0213 20:25:34.084226 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:34.095002 kubelet[2732]: E0213 20:25:34.084328 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.095002 kubelet[2732]: W0213 20:25:34.084333 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.095155 kubelet[2732]: E0213 20:25:34.084341 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:34.095155 kubelet[2732]: E0213 20:25:34.084450 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.095155 kubelet[2732]: W0213 20:25:34.084456 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.095155 kubelet[2732]: E0213 20:25:34.084464 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:34.095155 kubelet[2732]: E0213 20:25:34.084653 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.095155 kubelet[2732]: W0213 20:25:34.084659 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.095155 kubelet[2732]: E0213 20:25:34.084665 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:34.095155 kubelet[2732]: E0213 20:25:34.084765 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.095155 kubelet[2732]: W0213 20:25:34.084769 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.095155 kubelet[2732]: E0213 20:25:34.084775 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:34.095357 kubelet[2732]: E0213 20:25:34.094913 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.095357 kubelet[2732]: W0213 20:25:34.094919 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.095357 kubelet[2732]: E0213 20:25:34.094928 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:34.141625 kubelet[2732]: E0213 20:25:34.141565 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:34.141625 kubelet[2732]: W0213 20:25:34.141579 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:34.141625 kubelet[2732]: E0213 20:25:34.141591 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:35.763017 kubelet[2732]: E0213 20:25:35.762971 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cntw8" podUID="b708f66b-cea3-4952-b290-318b6ed2fb1d" Feb 13 20:25:36.243292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2365829704.mount: Deactivated successfully. 
Feb 13 20:25:37.293830 containerd[1529]: time="2025-02-13T20:25:37.293800847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:37.305269 containerd[1529]: time="2025-02-13T20:25:37.305222460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 20:25:37.314732 containerd[1529]: time="2025-02-13T20:25:37.313792680Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:37.324999 containerd[1529]: time="2025-02-13T20:25:37.324953858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:37.325637 containerd[1529]: time="2025-02-13T20:25:37.325416111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.251709069s" Feb 13 20:25:37.325637 containerd[1529]: time="2025-02-13T20:25:37.325435729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 20:25:37.326752 containerd[1529]: time="2025-02-13T20:25:37.326569766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 20:25:37.334015 containerd[1529]: time="2025-02-13T20:25:37.333880626Z" level=info msg="CreateContainer within sandbox \"e8493478a628dfa209835db1c658346ef96113737542f14d019f3544af74f434\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 20:25:37.367386 containerd[1529]: time="2025-02-13T20:25:37.367349936Z" level=info msg="CreateContainer within sandbox \"e8493478a628dfa209835db1c658346ef96113737542f14d019f3544af74f434\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"492f36e33f1d40c78c99c6ffd817ff2a35b324ae9401efe9b1d5b3e3078f93cc\"" Feb 13 20:25:37.367976 containerd[1529]: time="2025-02-13T20:25:37.367880219Z" level=info msg="StartContainer for \"492f36e33f1d40c78c99c6ffd817ff2a35b324ae9401efe9b1d5b3e3078f93cc\"" Feb 13 20:25:37.404417 kubelet[2732]: E0213 20:25:37.404364 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.404417 kubelet[2732]: W0213 20:25:37.404386 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.404417 kubelet[2732]: E0213 20:25:37.404406 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.404761 kubelet[2732]: E0213 20:25:37.404552 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.404761 kubelet[2732]: W0213 20:25:37.404558 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.404761 kubelet[2732]: E0213 20:25:37.404566 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.404761 kubelet[2732]: E0213 20:25:37.404710 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.404761 kubelet[2732]: W0213 20:25:37.404716 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.404761 kubelet[2732]: E0213 20:25:37.404722 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.404933 kubelet[2732]: E0213 20:25:37.404831 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.404933 kubelet[2732]: W0213 20:25:37.404836 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.404933 kubelet[2732]: E0213 20:25:37.404842 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.405023 kubelet[2732]: E0213 20:25:37.404964 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.405023 kubelet[2732]: W0213 20:25:37.404969 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.405023 kubelet[2732]: E0213 20:25:37.404975 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.406075 kubelet[2732]: E0213 20:25:37.405092 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.406075 kubelet[2732]: W0213 20:25:37.405101 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.406075 kubelet[2732]: E0213 20:25:37.405108 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.406075 kubelet[2732]: E0213 20:25:37.405228 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.406075 kubelet[2732]: W0213 20:25:37.405233 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.406075 kubelet[2732]: E0213 20:25:37.405253 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.406075 kubelet[2732]: E0213 20:25:37.405459 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.406075 kubelet[2732]: W0213 20:25:37.405464 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.406075 kubelet[2732]: E0213 20:25:37.405470 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.406075 kubelet[2732]: E0213 20:25:37.405733 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.406346 kubelet[2732]: W0213 20:25:37.405737 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.406346 kubelet[2732]: E0213 20:25:37.405743 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.406346 kubelet[2732]: E0213 20:25:37.406107 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.406346 kubelet[2732]: W0213 20:25:37.406114 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.406346 kubelet[2732]: E0213 20:25:37.406120 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.406434 kubelet[2732]: E0213 20:25:37.406403 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.406434 kubelet[2732]: W0213 20:25:37.406410 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.406434 kubelet[2732]: E0213 20:25:37.406416 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.407962 kubelet[2732]: E0213 20:25:37.406601 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.407962 kubelet[2732]: W0213 20:25:37.406609 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.407962 kubelet[2732]: E0213 20:25:37.406708 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.407962 kubelet[2732]: E0213 20:25:37.406910 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.407962 kubelet[2732]: W0213 20:25:37.406915 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.407962 kubelet[2732]: E0213 20:25:37.407014 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.407962 kubelet[2732]: E0213 20:25:37.407205 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.407962 kubelet[2732]: W0213 20:25:37.407212 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.407962 kubelet[2732]: E0213 20:25:37.407220 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.407962 kubelet[2732]: E0213 20:25:37.407499 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.408297 kubelet[2732]: W0213 20:25:37.407506 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.408297 kubelet[2732]: E0213 20:25:37.407523 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.423843 systemd[1]: Started cri-containerd-492f36e33f1d40c78c99c6ffd817ff2a35b324ae9401efe9b1d5b3e3078f93cc.scope - libcontainer container 492f36e33f1d40c78c99c6ffd817ff2a35b324ae9401efe9b1d5b3e3078f93cc. Feb 13 20:25:37.471660 containerd[1529]: time="2025-02-13T20:25:37.471627254Z" level=info msg="StartContainer for \"492f36e33f1d40c78c99c6ffd817ff2a35b324ae9401efe9b1d5b3e3078f93cc\" returns successfully" Feb 13 20:25:37.799432 kubelet[2732]: E0213 20:25:37.799374 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cntw8" podUID="b708f66b-cea3-4952-b290-318b6ed2fb1d" Feb 13 20:25:37.828136 kubelet[2732]: E0213 20:25:37.828029 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.828136 kubelet[2732]: W0213 20:25:37.828048 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.828136 
kubelet[2732]: E0213 20:25:37.828060 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.829081 kubelet[2732]: E0213 20:25:37.828581 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.829081 kubelet[2732]: W0213 20:25:37.828591 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.829081 kubelet[2732]: E0213 20:25:37.828602 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.829348 kubelet[2732]: E0213 20:25:37.829234 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.829348 kubelet[2732]: W0213 20:25:37.829244 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.829348 kubelet[2732]: E0213 20:25:37.829255 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.830099 kubelet[2732]: E0213 20:25:37.829734 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.830099 kubelet[2732]: W0213 20:25:37.829742 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.830099 kubelet[2732]: E0213 20:25:37.829752 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.830513 kubelet[2732]: E0213 20:25:37.830374 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.830513 kubelet[2732]: W0213 20:25:37.830384 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.830513 kubelet[2732]: E0213 20:25:37.830393 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.830808 kubelet[2732]: E0213 20:25:37.830734 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.830808 kubelet[2732]: W0213 20:25:37.830743 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.830808 kubelet[2732]: E0213 20:25:37.830752 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.839340 kubelet[2732]: E0213 20:25:37.830900 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.839340 kubelet[2732]: W0213 20:25:37.830907 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.839340 kubelet[2732]: E0213 20:25:37.830914 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.839340 kubelet[2732]: E0213 20:25:37.831029 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.839340 kubelet[2732]: W0213 20:25:37.831035 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.839340 kubelet[2732]: E0213 20:25:37.831043 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.839340 kubelet[2732]: E0213 20:25:37.831149 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.839340 kubelet[2732]: W0213 20:25:37.831155 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.839340 kubelet[2732]: E0213 20:25:37.831164 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.839340 kubelet[2732]: E0213 20:25:37.831274 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.839552 kubelet[2732]: W0213 20:25:37.831280 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.839552 kubelet[2732]: E0213 20:25:37.831287 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.839552 kubelet[2732]: E0213 20:25:37.831388 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.839552 kubelet[2732]: W0213 20:25:37.831394 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.839552 kubelet[2732]: E0213 20:25:37.831401 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.839552 kubelet[2732]: E0213 20:25:37.831502 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.839552 kubelet[2732]: W0213 20:25:37.831506 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.839552 kubelet[2732]: E0213 20:25:37.831511 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.839552 kubelet[2732]: E0213 20:25:37.831732 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.839552 kubelet[2732]: W0213 20:25:37.831740 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.839776 kubelet[2732]: E0213 20:25:37.831753 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.839776 kubelet[2732]: E0213 20:25:37.831916 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.839776 kubelet[2732]: W0213 20:25:37.831923 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.839776 kubelet[2732]: E0213 20:25:37.831930 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.839776 kubelet[2732]: E0213 20:25:37.832053 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.839776 kubelet[2732]: W0213 20:25:37.832059 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.839776 kubelet[2732]: E0213 20:25:37.832067 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.841097 kubelet[2732]: E0213 20:25:37.841009 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.841097 kubelet[2732]: W0213 20:25:37.841026 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.841097 kubelet[2732]: E0213 20:25:37.841039 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.841097 kubelet[2732]: I0213 20:25:37.841031 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5db5b7cd5-bq9q7" podStartSLOduration=1.574279746 podStartE2EDuration="4.827483335s" podCreationTimestamp="2025-02-13 20:25:33 +0000 UTC" firstStartedPulling="2025-02-13 20:25:34.073297629 +0000 UTC m=+11.385420692" lastFinishedPulling="2025-02-13 20:25:37.326501217 +0000 UTC m=+14.638624281" observedRunningTime="2025-02-13 20:25:37.827291381 +0000 UTC m=+15.139414454" watchObservedRunningTime="2025-02-13 20:25:37.827483335 +0000 UTC m=+15.139606407" Feb 13 20:25:37.850016 kubelet[2732]: E0213 20:25:37.841328 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.850016 kubelet[2732]: W0213 20:25:37.841333 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.850016 kubelet[2732]: E0213 20:25:37.841340 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.850016 kubelet[2732]: E0213 20:25:37.841452 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.850016 kubelet[2732]: W0213 20:25:37.841458 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.850016 kubelet[2732]: E0213 20:25:37.841466 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.850016 kubelet[2732]: E0213 20:25:37.841602 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.850016 kubelet[2732]: W0213 20:25:37.841609 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.850016 kubelet[2732]: E0213 20:25:37.841631 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.850016 kubelet[2732]: E0213 20:25:37.841728 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.850229 kubelet[2732]: W0213 20:25:37.841733 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.850229 kubelet[2732]: E0213 20:25:37.841742 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.850229 kubelet[2732]: E0213 20:25:37.841832 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.850229 kubelet[2732]: W0213 20:25:37.841838 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.850229 kubelet[2732]: E0213 20:25:37.841845 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.850229 kubelet[2732]: E0213 20:25:37.841947 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.850229 kubelet[2732]: W0213 20:25:37.841952 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.850229 kubelet[2732]: E0213 20:25:37.841957 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.850229 kubelet[2732]: E0213 20:25:37.842160 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.850229 kubelet[2732]: W0213 20:25:37.842166 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.865287 kubelet[2732]: E0213 20:25:37.842174 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.865287 kubelet[2732]: E0213 20:25:37.842283 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.865287 kubelet[2732]: W0213 20:25:37.842288 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.865287 kubelet[2732]: E0213 20:25:37.842293 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.865287 kubelet[2732]: E0213 20:25:37.842389 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.865287 kubelet[2732]: W0213 20:25:37.842394 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.865287 kubelet[2732]: E0213 20:25:37.842399 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.865287 kubelet[2732]: E0213 20:25:37.842497 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.865287 kubelet[2732]: W0213 20:25:37.842505 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.865287 kubelet[2732]: E0213 20:25:37.842512 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.865472 kubelet[2732]: E0213 20:25:37.842645 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.865472 kubelet[2732]: W0213 20:25:37.842650 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.865472 kubelet[2732]: E0213 20:25:37.842656 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.865472 kubelet[2732]: E0213 20:25:37.842855 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.865472 kubelet[2732]: W0213 20:25:37.842863 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.865472 kubelet[2732]: E0213 20:25:37.842870 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.865472 kubelet[2732]: E0213 20:25:37.842993 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.865472 kubelet[2732]: W0213 20:25:37.842998 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.865472 kubelet[2732]: E0213 20:25:37.843003 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.865472 kubelet[2732]: E0213 20:25:37.843100 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.865679 kubelet[2732]: W0213 20:25:37.843105 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.865679 kubelet[2732]: E0213 20:25:37.843110 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.865679 kubelet[2732]: E0213 20:25:37.843211 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.865679 kubelet[2732]: W0213 20:25:37.843219 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.865679 kubelet[2732]: E0213 20:25:37.843227 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:37.865679 kubelet[2732]: E0213 20:25:37.843342 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.865679 kubelet[2732]: W0213 20:25:37.843348 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.865679 kubelet[2732]: E0213 20:25:37.843354 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:37.865679 kubelet[2732]: E0213 20:25:37.843654 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:37.865679 kubelet[2732]: W0213 20:25:37.843660 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:37.865856 kubelet[2732]: E0213 20:25:37.843665 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:38.840330 kubelet[2732]: E0213 20:25:38.840256 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:38.840330 kubelet[2732]: W0213 20:25:38.840272 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:38.840330 kubelet[2732]: E0213 20:25:38.840285 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:38.840774 kubelet[2732]: E0213 20:25:38.840375 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:38.840774 kubelet[2732]: W0213 20:25:38.840379 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:38.840774 kubelet[2732]: E0213 20:25:38.840385 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:25:38.853948 kubelet[2732]: E0213 20:25:38.851123 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:25:38.853948 kubelet[2732]: W0213 20:25:38.851128 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:25:38.853948 kubelet[2732]: E0213 20:25:38.851134 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:25:39.430800 containerd[1529]: time="2025-02-13T20:25:39.430688031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:39.448463 containerd[1529]: time="2025-02-13T20:25:39.448414817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 20:25:39.453654 containerd[1529]: time="2025-02-13T20:25:39.453574142Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:39.475042 containerd[1529]: time="2025-02-13T20:25:39.474998804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:39.475929 containerd[1529]: time="2025-02-13T20:25:39.475872679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.149282092s" Feb 13 20:25:39.475929 containerd[1529]: time="2025-02-13T20:25:39.475902809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 20:25:39.477740 containerd[1529]: time="2025-02-13T20:25:39.477663899Z" level=info msg="CreateContainer within sandbox \"2dd46dabd6541d5c441aaf1bc7f6234431dc5dcc64836adc8fb7463ce036f1ad\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:25:39.679694 containerd[1529]: time="2025-02-13T20:25:39.679666925Z" level=info msg="CreateContainer within sandbox \"2dd46dabd6541d5c441aaf1bc7f6234431dc5dcc64836adc8fb7463ce036f1ad\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b70a4d21fd416cae279587426b5edab22e77e168f22b84e6a9f02734d948c811\"" Feb 13 20:25:39.681416 containerd[1529]: time="2025-02-13T20:25:39.680226506Z" level=info msg="StartContainer for \"b70a4d21fd416cae279587426b5edab22e77e168f22b84e6a9f02734d948c811\"" Feb 13 20:25:39.703773 systemd[1]: Started cri-containerd-b70a4d21fd416cae279587426b5edab22e77e168f22b84e6a9f02734d948c811.scope - libcontainer container b70a4d21fd416cae279587426b5edab22e77e168f22b84e6a9f02734d948c811. Feb 13 20:25:39.730447 containerd[1529]: time="2025-02-13T20:25:39.730250155Z" level=info msg="StartContainer for \"b70a4d21fd416cae279587426b5edab22e77e168f22b84e6a9f02734d948c811\" returns successfully" Feb 13 20:25:39.731030 systemd[1]: cri-containerd-b70a4d21fd416cae279587426b5edab22e77e168f22b84e6a9f02734d948c811.scope: Deactivated successfully. 
Feb 13 20:25:39.748911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b70a4d21fd416cae279587426b5edab22e77e168f22b84e6a9f02734d948c811-rootfs.mount: Deactivated successfully. Feb 13 20:25:39.763038 kubelet[2732]: E0213 20:25:39.762979 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cntw8" podUID="b708f66b-cea3-4952-b290-318b6ed2fb1d" Feb 13 20:25:39.871008 containerd[1529]: time="2025-02-13T20:25:39.831354639Z" level=info msg="shim disconnected" id=b70a4d21fd416cae279587426b5edab22e77e168f22b84e6a9f02734d948c811 namespace=k8s.io Feb 13 20:25:39.871008 containerd[1529]: time="2025-02-13T20:25:39.870873948Z" level=warning msg="cleaning up after shim disconnected" id=b70a4d21fd416cae279587426b5edab22e77e168f22b84e6a9f02734d948c811 namespace=k8s.io Feb 13 20:25:39.871008 containerd[1529]: time="2025-02-13T20:25:39.870884599Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:25:40.848412 containerd[1529]: time="2025-02-13T20:25:40.848284242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:25:41.762892 kubelet[2732]: E0213 20:25:41.762829 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cntw8" podUID="b708f66b-cea3-4952-b290-318b6ed2fb1d" Feb 13 20:25:43.785830 kubelet[2732]: E0213 20:25:43.785765 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cntw8" 
podUID="b708f66b-cea3-4952-b290-318b6ed2fb1d" Feb 13 20:25:45.120597 containerd[1529]: time="2025-02-13T20:25:45.120544791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:45.129604 containerd[1529]: time="2025-02-13T20:25:45.129557378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 20:25:45.154526 containerd[1529]: time="2025-02-13T20:25:45.154479502Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:45.173828 containerd[1529]: time="2025-02-13T20:25:45.173783308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:45.187609 containerd[1529]: time="2025-02-13T20:25:45.174265117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.325955503s" Feb 13 20:25:45.187609 containerd[1529]: time="2025-02-13T20:25:45.174287699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 20:25:45.197918 containerd[1529]: time="2025-02-13T20:25:45.197891766Z" level=info msg="CreateContainer within sandbox \"2dd46dabd6541d5c441aaf1bc7f6234431dc5dcc64836adc8fb7463ce036f1ad\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:25:45.423867 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2573092600.mount: Deactivated successfully. Feb 13 20:25:45.471114 containerd[1529]: time="2025-02-13T20:25:45.471031707Z" level=info msg="CreateContainer within sandbox \"2dd46dabd6541d5c441aaf1bc7f6234431dc5dcc64836adc8fb7463ce036f1ad\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"85addb48d10f6c77945fc22b35ab24f78145dd6a9342cdd1877ae92e03ac092d\"" Feb 13 20:25:45.471658 containerd[1529]: time="2025-02-13T20:25:45.471465432Z" level=info msg="StartContainer for \"85addb48d10f6c77945fc22b35ab24f78145dd6a9342cdd1877ae92e03ac092d\"" Feb 13 20:25:45.589771 systemd[1]: Started cri-containerd-85addb48d10f6c77945fc22b35ab24f78145dd6a9342cdd1877ae92e03ac092d.scope - libcontainer container 85addb48d10f6c77945fc22b35ab24f78145dd6a9342cdd1877ae92e03ac092d. Feb 13 20:25:45.638328 containerd[1529]: time="2025-02-13T20:25:45.638291688Z" level=info msg="StartContainer for \"85addb48d10f6c77945fc22b35ab24f78145dd6a9342cdd1877ae92e03ac092d\" returns successfully" Feb 13 20:25:45.776272 kubelet[2732]: E0213 20:25:45.776187 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cntw8" podUID="b708f66b-cea3-4952-b290-318b6ed2fb1d" Feb 13 20:25:47.763201 kubelet[2732]: E0213 20:25:47.763115 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cntw8" podUID="b708f66b-cea3-4952-b290-318b6ed2fb1d" Feb 13 20:25:48.023978 systemd[1]: cri-containerd-85addb48d10f6c77945fc22b35ab24f78145dd6a9342cdd1877ae92e03ac092d.scope: Deactivated successfully. 
Feb 13 20:25:48.077691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85addb48d10f6c77945fc22b35ab24f78145dd6a9342cdd1877ae92e03ac092d-rootfs.mount: Deactivated successfully. Feb 13 20:25:48.121266 containerd[1529]: time="2025-02-13T20:25:48.121221648Z" level=info msg="shim disconnected" id=85addb48d10f6c77945fc22b35ab24f78145dd6a9342cdd1877ae92e03ac092d namespace=k8s.io Feb 13 20:25:48.121266 containerd[1529]: time="2025-02-13T20:25:48.121257680Z" level=warning msg="cleaning up after shim disconnected" id=85addb48d10f6c77945fc22b35ab24f78145dd6a9342cdd1877ae92e03ac092d namespace=k8s.io Feb 13 20:25:48.121266 containerd[1529]: time="2025-02-13T20:25:48.121263330Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:25:48.162232 kubelet[2732]: I0213 20:25:48.161365 2732 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 20:25:48.192948 systemd[1]: Created slice kubepods-besteffort-pod2e92037f_96bf_409f_a057_890289c6e006.slice - libcontainer container kubepods-besteffort-pod2e92037f_96bf_409f_a057_890289c6e006.slice. Feb 13 20:25:48.198513 systemd[1]: Created slice kubepods-burstable-pode18627c8_7f32_45f6_9afb_f589410799df.slice - libcontainer container kubepods-burstable-pode18627c8_7f32_45f6_9afb_f589410799df.slice. Feb 13 20:25:48.203766 systemd[1]: Created slice kubepods-besteffort-pod348324a7_a185_4686_9c4e_5b8f58c85d3b.slice - libcontainer container kubepods-besteffort-pod348324a7_a185_4686_9c4e_5b8f58c85d3b.slice. Feb 13 20:25:48.212749 systemd[1]: Created slice kubepods-burstable-podf7d7c602_e34f_4d46_9a2c_8104daaf8055.slice - libcontainer container kubepods-burstable-podf7d7c602_e34f_4d46_9a2c_8104daaf8055.slice. Feb 13 20:25:48.218017 systemd[1]: Created slice kubepods-besteffort-podf69e3431_9b6e_466d_b7b7_3df2f65cb067.slice - libcontainer container kubepods-besteffort-podf69e3431_9b6e_466d_b7b7_3df2f65cb067.slice. 
Feb 13 20:25:48.235846 kubelet[2732]: I0213 20:25:48.235787 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e92037f-96bf-409f-a057-890289c6e006-calico-apiserver-certs\") pod \"calico-apiserver-5fdd49f766-hwtb2\" (UID: \"2e92037f-96bf-409f-a057-890289c6e006\") " pod="calico-apiserver/calico-apiserver-5fdd49f766-hwtb2" Feb 13 20:25:48.235846 kubelet[2732]: I0213 20:25:48.235813 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clj9s\" (UniqueName: \"kubernetes.io/projected/2e92037f-96bf-409f-a057-890289c6e006-kube-api-access-clj9s\") pod \"calico-apiserver-5fdd49f766-hwtb2\" (UID: \"2e92037f-96bf-409f-a057-890289c6e006\") " pod="calico-apiserver/calico-apiserver-5fdd49f766-hwtb2" Feb 13 20:25:48.321697 containerd[1529]: time="2025-02-13T20:25:48.321427254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:25:48.336344 kubelet[2732]: I0213 20:25:48.336311 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e18627c8-7f32-45f6-9afb-f589410799df-config-volume\") pod \"coredns-6f6b679f8f-tjrfw\" (UID: \"e18627c8-7f32-45f6-9afb-f589410799df\") " pod="kube-system/coredns-6f6b679f8f-tjrfw" Feb 13 20:25:48.336344 kubelet[2732]: I0213 20:25:48.336348 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7d7c602-e34f-4d46-9a2c-8104daaf8055-config-volume\") pod \"coredns-6f6b679f8f-j7gxr\" (UID: \"f7d7c602-e34f-4d46-9a2c-8104daaf8055\") " pod="kube-system/coredns-6f6b679f8f-j7gxr" Feb 13 20:25:48.336481 kubelet[2732]: I0213 20:25:48.336370 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dbrt\" 
(UniqueName: \"kubernetes.io/projected/348324a7-a185-4686-9c4e-5b8f58c85d3b-kube-api-access-5dbrt\") pod \"calico-kube-controllers-67f9bb95f-nhppb\" (UID: \"348324a7-a185-4686-9c4e-5b8f58c85d3b\") " pod="calico-system/calico-kube-controllers-67f9bb95f-nhppb" Feb 13 20:25:48.336481 kubelet[2732]: I0213 20:25:48.336382 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52z26\" (UniqueName: \"kubernetes.io/projected/e18627c8-7f32-45f6-9afb-f589410799df-kube-api-access-52z26\") pod \"coredns-6f6b679f8f-tjrfw\" (UID: \"e18627c8-7f32-45f6-9afb-f589410799df\") " pod="kube-system/coredns-6f6b679f8f-tjrfw" Feb 13 20:25:48.336481 kubelet[2732]: I0213 20:25:48.336397 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/348324a7-a185-4686-9c4e-5b8f58c85d3b-tigera-ca-bundle\") pod \"calico-kube-controllers-67f9bb95f-nhppb\" (UID: \"348324a7-a185-4686-9c4e-5b8f58c85d3b\") " pod="calico-system/calico-kube-controllers-67f9bb95f-nhppb" Feb 13 20:25:48.336481 kubelet[2732]: I0213 20:25:48.336409 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxbx2\" (UniqueName: \"kubernetes.io/projected/f7d7c602-e34f-4d46-9a2c-8104daaf8055-kube-api-access-kxbx2\") pod \"coredns-6f6b679f8f-j7gxr\" (UID: \"f7d7c602-e34f-4d46-9a2c-8104daaf8055\") " pod="kube-system/coredns-6f6b679f8f-j7gxr" Feb 13 20:25:48.336481 kubelet[2732]: I0213 20:25:48.336420 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f69e3431-9b6e-466d-b7b7-3df2f65cb067-calico-apiserver-certs\") pod \"calico-apiserver-5fdd49f766-8k5nw\" (UID: \"f69e3431-9b6e-466d-b7b7-3df2f65cb067\") " pod="calico-apiserver/calico-apiserver-5fdd49f766-8k5nw" Feb 13 20:25:48.336581 kubelet[2732]: I0213 
20:25:48.336430 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrtmx\" (UniqueName: \"kubernetes.io/projected/f69e3431-9b6e-466d-b7b7-3df2f65cb067-kube-api-access-vrtmx\") pod \"calico-apiserver-5fdd49f766-8k5nw\" (UID: \"f69e3431-9b6e-466d-b7b7-3df2f65cb067\") " pod="calico-apiserver/calico-apiserver-5fdd49f766-8k5nw" Feb 13 20:25:48.497162 containerd[1529]: time="2025-02-13T20:25:48.497090444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fdd49f766-hwtb2,Uid:2e92037f-96bf-409f-a057-890289c6e006,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:25:48.501807 containerd[1529]: time="2025-02-13T20:25:48.501666408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tjrfw,Uid:e18627c8-7f32-45f6-9afb-f589410799df,Namespace:kube-system,Attempt:0,}" Feb 13 20:25:48.507346 containerd[1529]: time="2025-02-13T20:25:48.507227224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f9bb95f-nhppb,Uid:348324a7-a185-4686-9c4e-5b8f58c85d3b,Namespace:calico-system,Attempt:0,}" Feb 13 20:25:48.523650 containerd[1529]: time="2025-02-13T20:25:48.523613485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fdd49f766-8k5nw,Uid:f69e3431-9b6e-466d-b7b7-3df2f65cb067,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:25:48.542034 containerd[1529]: time="2025-02-13T20:25:48.541992237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j7gxr,Uid:f7d7c602-e34f-4d46-9a2c-8104daaf8055,Namespace:kube-system,Attempt:0,}" Feb 13 20:25:49.439151 containerd[1529]: time="2025-02-13T20:25:49.437863634Z" level=error msg="Failed to destroy network for sandbox \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 13 20:25:49.440174 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492-shm.mount: Deactivated successfully. Feb 13 20:25:49.443223 containerd[1529]: time="2025-02-13T20:25:49.443191045Z" level=error msg="encountered an error cleaning up failed sandbox \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.443333 containerd[1529]: time="2025-02-13T20:25:49.443318260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j7gxr,Uid:f7d7c602-e34f-4d46-9a2c-8104daaf8055,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.449953 containerd[1529]: time="2025-02-13T20:25:49.449917707Z" level=error msg="Failed to destroy network for sandbox \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.451857 containerd[1529]: time="2025-02-13T20:25:49.451831009Z" level=error msg="encountered an error cleaning up failed sandbox \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 13 20:25:49.452012 containerd[1529]: time="2025-02-13T20:25:49.451962233Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fdd49f766-8k5nw,Uid:f69e3431-9b6e-466d-b7b7-3df2f65cb067,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.453272 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5-shm.mount: Deactivated successfully. Feb 13 20:25:49.454410 kubelet[2732]: E0213 20:25:49.453481 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.454410 kubelet[2732]: E0213 20:25:49.453539 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fdd49f766-8k5nw" Feb 13 20:25:49.454410 kubelet[2732]: E0213 20:25:49.453556 2732 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fdd49f766-8k5nw" Feb 13 20:25:49.455861 kubelet[2732]: E0213 20:25:49.453588 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fdd49f766-8k5nw_calico-apiserver(f69e3431-9b6e-466d-b7b7-3df2f65cb067)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fdd49f766-8k5nw_calico-apiserver(f69e3431-9b6e-466d-b7b7-3df2f65cb067)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fdd49f766-8k5nw" podUID="f69e3431-9b6e-466d-b7b7-3df2f65cb067" Feb 13 20:25:49.455861 kubelet[2732]: E0213 20:25:49.455369 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.455861 kubelet[2732]: E0213 20:25:49.455411 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-j7gxr" Feb 13 20:25:49.455964 kubelet[2732]: E0213 20:25:49.455425 2732 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-j7gxr" Feb 13 20:25:49.455964 kubelet[2732]: E0213 20:25:49.455450 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-j7gxr_kube-system(f7d7c602-e34f-4d46-9a2c-8104daaf8055)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-j7gxr_kube-system(f7d7c602-e34f-4d46-9a2c-8104daaf8055)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-j7gxr" podUID="f7d7c602-e34f-4d46-9a2c-8104daaf8055" Feb 13 20:25:49.458162 containerd[1529]: time="2025-02-13T20:25:49.458136560Z" level=error msg="Failed to destroy network for sandbox \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.460450 containerd[1529]: time="2025-02-13T20:25:49.458211659Z" level=error msg="Failed to destroy network for sandbox \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.460450 containerd[1529]: time="2025-02-13T20:25:49.458782079Z" level=error msg="encountered an error cleaning up failed sandbox \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.460920 containerd[1529]: time="2025-02-13T20:25:49.460885155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tjrfw,Uid:e18627c8-7f32-45f6-9afb-f589410799df,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.461049 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977-shm.mount: Deactivated successfully. Feb 13 20:25:49.461127 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c-shm.mount: Deactivated successfully. 
Feb 13 20:25:49.462916 kubelet[2732]: E0213 20:25:49.462527 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.462916 kubelet[2732]: E0213 20:25:49.462532 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.462916 kubelet[2732]: E0213 20:25:49.462649 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-tjrfw" Feb 13 20:25:49.462916 kubelet[2732]: E0213 20:25:49.462665 2732 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-tjrfw" Feb 13 20:25:49.465431 containerd[1529]: time="2025-02-13T20:25:49.458227376Z" level=error msg="Failed to destroy network for sandbox 
\"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.465431 containerd[1529]: time="2025-02-13T20:25:49.461801633Z" level=error msg="encountered an error cleaning up failed sandbox \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.465431 containerd[1529]: time="2025-02-13T20:25:49.461931515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fdd49f766-hwtb2,Uid:2e92037f-96bf-409f-a057-890289c6e006,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.465431 containerd[1529]: time="2025-02-13T20:25:49.464540239Z" level=error msg="encountered an error cleaning up failed sandbox \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.465431 containerd[1529]: time="2025-02-13T20:25:49.464588382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f9bb95f-nhppb,Uid:348324a7-a185-4686-9c4e-5b8f58c85d3b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.465145 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5-shm.mount: Deactivated successfully. Feb 13 20:25:49.465844 kubelet[2732]: E0213 20:25:49.462707 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-tjrfw_kube-system(e18627c8-7f32-45f6-9afb-f589410799df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-tjrfw_kube-system(e18627c8-7f32-45f6-9afb-f589410799df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-tjrfw" podUID="e18627c8-7f32-45f6-9afb-f589410799df" Feb 13 20:25:49.465844 kubelet[2732]: E0213 20:25:49.462564 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fdd49f766-hwtb2" Feb 13 20:25:49.465844 kubelet[2732]: E0213 20:25:49.463444 2732 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fdd49f766-hwtb2" Feb 13 20:25:49.465923 kubelet[2732]: E0213 20:25:49.463469 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fdd49f766-hwtb2_calico-apiserver(2e92037f-96bf-409f-a057-890289c6e006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fdd49f766-hwtb2_calico-apiserver(2e92037f-96bf-409f-a057-890289c6e006)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fdd49f766-hwtb2" podUID="2e92037f-96bf-409f-a057-890289c6e006" Feb 13 20:25:49.465923 kubelet[2732]: E0213 20:25:49.465382 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.465923 kubelet[2732]: E0213 20:25:49.465434 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67f9bb95f-nhppb" Feb 13 20:25:49.465991 kubelet[2732]: E0213 20:25:49.465452 2732 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67f9bb95f-nhppb" Feb 13 20:25:49.465991 kubelet[2732]: E0213 20:25:49.465504 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67f9bb95f-nhppb_calico-system(348324a7-a185-4686-9c4e-5b8f58c85d3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67f9bb95f-nhppb_calico-system(348324a7-a185-4686-9c4e-5b8f58c85d3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67f9bb95f-nhppb" podUID="348324a7-a185-4686-9c4e-5b8f58c85d3b" Feb 13 20:25:49.766673 systemd[1]: Created slice kubepods-besteffort-podb708f66b_cea3_4952_b290_318b6ed2fb1d.slice - libcontainer container kubepods-besteffort-podb708f66b_cea3_4952_b290_318b6ed2fb1d.slice. 
Feb 13 20:25:49.768899 containerd[1529]: time="2025-02-13T20:25:49.768755167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cntw8,Uid:b708f66b-cea3-4952-b290-318b6ed2fb1d,Namespace:calico-system,Attempt:0,}" Feb 13 20:25:49.823536 containerd[1529]: time="2025-02-13T20:25:49.823495458Z" level=error msg="Failed to destroy network for sandbox \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.823795 containerd[1529]: time="2025-02-13T20:25:49.823776131Z" level=error msg="encountered an error cleaning up failed sandbox \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.824188 containerd[1529]: time="2025-02-13T20:25:49.823816073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cntw8,Uid:b708f66b-cea3-4952-b290-318b6ed2fb1d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.824225 kubelet[2732]: E0213 20:25:49.823944 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:49.824225 kubelet[2732]: E0213 20:25:49.823980 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cntw8" Feb 13 20:25:49.824225 kubelet[2732]: E0213 20:25:49.823995 2732 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cntw8" Feb 13 20:25:49.824340 kubelet[2732]: E0213 20:25:49.824021 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cntw8_calico-system(b708f66b-cea3-4952-b290-318b6ed2fb1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cntw8_calico-system(b708f66b-cea3-4952-b290-318b6ed2fb1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cntw8" podUID="b708f66b-cea3-4952-b290-318b6ed2fb1d" Feb 13 20:25:50.323445 kubelet[2732]: I0213 20:25:50.323134 2732 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Feb 13 20:25:50.324146 kubelet[2732]: I0213 20:25:50.324016 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Feb 13 20:25:50.477683 kubelet[2732]: I0213 20:25:50.477659 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Feb 13 20:25:50.478432 kubelet[2732]: I0213 20:25:50.478418 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Feb 13 20:25:50.479051 kubelet[2732]: I0213 20:25:50.479039 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Feb 13 20:25:50.479649 kubelet[2732]: I0213 20:25:50.479597 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Feb 13 20:25:50.768479 containerd[1529]: time="2025-02-13T20:25:50.767971488Z" level=info msg="StopPodSandbox for \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\"" Feb 13 20:25:50.768741 containerd[1529]: time="2025-02-13T20:25:50.768728068Z" level=info msg="StopPodSandbox for \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\"" Feb 13 20:25:50.769033 containerd[1529]: time="2025-02-13T20:25:50.769017616Z" level=info msg="Ensure that sandbox 344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977 in task-service has been cleanup successfully" Feb 13 20:25:50.769096 containerd[1529]: time="2025-02-13T20:25:50.769085977Z" level=info msg="StopPodSandbox for \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\"" Feb 13 20:25:50.769201 containerd[1529]: 
time="2025-02-13T20:25:50.769191471Z" level=info msg="Ensure that sandbox b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5 in task-service has been cleanup successfully" Feb 13 20:25:50.773957 containerd[1529]: time="2025-02-13T20:25:50.773943980Z" level=info msg="StopPodSandbox for \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\"" Feb 13 20:25:50.774347 containerd[1529]: time="2025-02-13T20:25:50.774031117Z" level=info msg="StopPodSandbox for \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\"" Feb 13 20:25:50.774416 containerd[1529]: time="2025-02-13T20:25:50.774042360Z" level=info msg="StopPodSandbox for \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\"" Feb 13 20:25:50.774472 containerd[1529]: time="2025-02-13T20:25:50.774461991Z" level=info msg="Ensure that sandbox 7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86 in task-service has been cleanup successfully" Feb 13 20:25:50.774866 containerd[1529]: time="2025-02-13T20:25:50.769019828Z" level=info msg="Ensure that sandbox a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492 in task-service has been cleanup successfully" Feb 13 20:25:50.775010 containerd[1529]: time="2025-02-13T20:25:50.774912562Z" level=info msg="Ensure that sandbox 5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c in task-service has been cleanup successfully" Feb 13 20:25:50.776494 containerd[1529]: time="2025-02-13T20:25:50.776472938Z" level=info msg="Ensure that sandbox c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5 in task-service has been cleanup successfully" Feb 13 20:25:50.834281 containerd[1529]: time="2025-02-13T20:25:50.834242634Z" level=error msg="StopPodSandbox for \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\" failed" error="failed to destroy network for sandbox \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:50.836948 containerd[1529]: time="2025-02-13T20:25:50.836925961Z" level=error msg="StopPodSandbox for \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\" failed" error="failed to destroy network for sandbox \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:50.840155 containerd[1529]: time="2025-02-13T20:25:50.840124423Z" level=error msg="StopPodSandbox for \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\" failed" error="failed to destroy network for sandbox \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:50.841832 containerd[1529]: time="2025-02-13T20:25:50.841759479Z" level=error msg="StopPodSandbox for \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\" failed" error="failed to destroy network for sandbox \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:50.846737 containerd[1529]: time="2025-02-13T20:25:50.846655187Z" level=error msg="StopPodSandbox for \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\" failed" error="failed to destroy network for sandbox \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:50.849436 containerd[1529]: time="2025-02-13T20:25:50.849381167Z" level=error msg="StopPodSandbox for \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\" failed" error="failed to destroy network for sandbox \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:25:50.858769 kubelet[2732]: E0213 20:25:50.858731 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Feb 13 20:25:50.858864 kubelet[2732]: E0213 20:25:50.858787 2732 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5"} Feb 13 20:25:50.858864 kubelet[2732]: E0213 20:25:50.858835 2732 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f69e3431-9b6e-466d-b7b7-3df2f65cb067\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:25:50.858864 kubelet[2732]: E0213 20:25:50.858850 2732 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"f69e3431-9b6e-466d-b7b7-3df2f65cb067\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fdd49f766-8k5nw" podUID="f69e3431-9b6e-466d-b7b7-3df2f65cb067" Feb 13 20:25:50.858957 kubelet[2732]: E0213 20:25:50.858871 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Feb 13 20:25:50.858957 kubelet[2732]: E0213 20:25:50.858881 2732 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c"} Feb 13 20:25:50.858957 kubelet[2732]: E0213 20:25:50.858892 2732 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e18627c8-7f32-45f6-9afb-f589410799df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:25:50.858957 kubelet[2732]: E0213 20:25:50.858902 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"KillPodSandbox\" for \"e18627c8-7f32-45f6-9afb-f589410799df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-tjrfw" podUID="e18627c8-7f32-45f6-9afb-f589410799df" Feb 13 20:25:50.859055 kubelet[2732]: E0213 20:25:50.858917 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Feb 13 20:25:50.859055 kubelet[2732]: E0213 20:25:50.858925 2732 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977"} Feb 13 20:25:50.865861 kubelet[2732]: E0213 20:25:50.837057 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Feb 13 20:25:50.865861 kubelet[2732]: E0213 20:25:50.865707 2732 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492"} Feb 13 20:25:50.865861 kubelet[2732]: E0213 20:25:50.865728 2732 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7d7c602-e34f-4d46-9a2c-8104daaf8055\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:25:50.865861 kubelet[2732]: E0213 20:25:50.865746 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7d7c602-e34f-4d46-9a2c-8104daaf8055\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-j7gxr" podUID="f7d7c602-e34f-4d46-9a2c-8104daaf8055" Feb 13 20:25:50.866011 kubelet[2732]: E0213 20:25:50.834410 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Feb 13 20:25:50.866011 kubelet[2732]: E0213 20:25:50.865770 2732 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5"} Feb 13 20:25:50.866011 kubelet[2732]: E0213 20:25:50.865784 2732 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"348324a7-a185-4686-9c4e-5b8f58c85d3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:25:50.866011 kubelet[2732]: E0213 20:25:50.865794 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"348324a7-a185-4686-9c4e-5b8f58c85d3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67f9bb95f-nhppb" podUID="348324a7-a185-4686-9c4e-5b8f58c85d3b" Feb 13 20:25:50.866146 kubelet[2732]: E0213 20:25:50.865816 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Feb 13 20:25:50.866146 kubelet[2732]: E0213 20:25:50.865826 2732 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86"} Feb 13 20:25:50.866146 kubelet[2732]: E0213 20:25:50.865837 2732 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b708f66b-cea3-4952-b290-318b6ed2fb1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:25:50.866146 kubelet[2732]: E0213 20:25:50.865846 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b708f66b-cea3-4952-b290-318b6ed2fb1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cntw8" podUID="b708f66b-cea3-4952-b290-318b6ed2fb1d" Feb 13 20:25:50.888558 kubelet[2732]: E0213 20:25:50.858939 2732 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e92037f-96bf-409f-a057-890289c6e006\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:25:50.888696 kubelet[2732]: E0213 20:25:50.888575 2732 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"2e92037f-96bf-409f-a057-890289c6e006\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fdd49f766-hwtb2" podUID="2e92037f-96bf-409f-a057-890289c6e006" Feb 13 20:25:52.881803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1978390229.mount: Deactivated successfully. Feb 13 20:25:53.063229 containerd[1529]: time="2025-02-13T20:25:53.055973073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:53.071309 containerd[1529]: time="2025-02-13T20:25:53.071227451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:25:53.080175 containerd[1529]: time="2025-02-13T20:25:53.080137224Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:53.081183 containerd[1529]: time="2025-02-13T20:25:53.081156343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:25:53.085578 containerd[1529]: time="2025-02-13T20:25:53.083285914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 4.760072386s" Feb 13 20:25:53.085578 containerd[1529]: time="2025-02-13T20:25:53.083308251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:25:53.129036 containerd[1529]: time="2025-02-13T20:25:53.129004357Z" level=info msg="CreateContainer within sandbox \"2dd46dabd6541d5c441aaf1bc7f6234431dc5dcc64836adc8fb7463ce036f1ad\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:25:53.209737 containerd[1529]: time="2025-02-13T20:25:53.209677096Z" level=info msg="CreateContainer within sandbox \"2dd46dabd6541d5c441aaf1bc7f6234431dc5dcc64836adc8fb7463ce036f1ad\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"66ae76fbf19ea4e632d369da7e5a28dfc8d2ac6459c4c15954a0ef65353b45f5\"" Feb 13 20:25:53.210086 containerd[1529]: time="2025-02-13T20:25:53.210070350Z" level=info msg="StartContainer for \"66ae76fbf19ea4e632d369da7e5a28dfc8d2ac6459c4c15954a0ef65353b45f5\"" Feb 13 20:25:53.264866 systemd[1]: Started cri-containerd-66ae76fbf19ea4e632d369da7e5a28dfc8d2ac6459c4c15954a0ef65353b45f5.scope - libcontainer container 66ae76fbf19ea4e632d369da7e5a28dfc8d2ac6459c4c15954a0ef65353b45f5. Feb 13 20:25:53.288270 containerd[1529]: time="2025-02-13T20:25:53.288246190Z" level=info msg="StartContainer for \"66ae76fbf19ea4e632d369da7e5a28dfc8d2ac6459c4c15954a0ef65353b45f5\" returns successfully" Feb 13 20:25:53.369663 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:25:53.372979 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 20:25:53.742228 kubelet[2732]: I0213 20:25:53.723118 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pwhfg" podStartSLOduration=1.535534779 podStartE2EDuration="20.543453802s" podCreationTimestamp="2025-02-13 20:25:33 +0000 UTC" firstStartedPulling="2025-02-13 20:25:34.078106074 +0000 UTC m=+11.390229138" lastFinishedPulling="2025-02-13 20:25:53.086025098 +0000 UTC m=+30.398148161" observedRunningTime="2025-02-13 20:25:53.524513603 +0000 UTC m=+30.836636670" watchObservedRunningTime="2025-02-13 20:25:53.543453802 +0000 UTC m=+30.855576885" Feb 13 20:25:55.264649 kernel: bpftool[4081]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:25:55.449245 systemd-networkd[1451]: vxlan.calico: Link UP Feb 13 20:25:55.449248 systemd-networkd[1451]: vxlan.calico: Gained carrier Feb 13 20:25:57.436768 systemd-networkd[1451]: vxlan.calico: Gained IPv6LL Feb 13 20:26:02.766504 containerd[1529]: time="2025-02-13T20:26:02.766437961Z" level=info msg="StopPodSandbox for \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\"" Feb 13 20:26:02.767954 containerd[1529]: time="2025-02-13T20:26:02.767930133Z" level=info msg="StopPodSandbox for \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\"" Feb 13 20:26:03.499507 containerd[1529]: 2025-02-13 20:26:02.852 [INFO][4221] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Feb 13 20:26:03.499507 containerd[1529]: 2025-02-13 20:26:02.853 [INFO][4221] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" iface="eth0" netns="/var/run/netns/cni-c40c03fc-f02d-8b2d-4748-bf22d836e5c8" Feb 13 20:26:03.499507 containerd[1529]: 2025-02-13 20:26:02.854 [INFO][4221] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" iface="eth0" netns="/var/run/netns/cni-c40c03fc-f02d-8b2d-4748-bf22d836e5c8" Feb 13 20:26:03.499507 containerd[1529]: 2025-02-13 20:26:02.856 [INFO][4221] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" iface="eth0" netns="/var/run/netns/cni-c40c03fc-f02d-8b2d-4748-bf22d836e5c8" Feb 13 20:26:03.499507 containerd[1529]: 2025-02-13 20:26:02.856 [INFO][4221] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Feb 13 20:26:03.499507 containerd[1529]: 2025-02-13 20:26:02.856 [INFO][4221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Feb 13 20:26:03.499507 containerd[1529]: 2025-02-13 20:26:03.476 [INFO][4230] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" HandleID="k8s-pod-network.b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Workload="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:03.499507 containerd[1529]: 2025-02-13 20:26:03.481 [INFO][4230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:03.499507 containerd[1529]: 2025-02-13 20:26:03.481 [INFO][4230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:03.499507 containerd[1529]: 2025-02-13 20:26:03.496 [WARNING][4230] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" HandleID="k8s-pod-network.b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Workload="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:03.499507 containerd[1529]: 2025-02-13 20:26:03.496 [INFO][4230] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" HandleID="k8s-pod-network.b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Workload="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:03.499507 containerd[1529]: 2025-02-13 20:26:03.497 [INFO][4230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:03.499507 containerd[1529]: 2025-02-13 20:26:03.498 [INFO][4221] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Feb 13 20:26:03.501186 systemd[1]: run-netns-cni\x2dc40c03fc\x2df02d\x2d8b2d\x2d4748\x2dbf22d836e5c8.mount: Deactivated successfully. 
Feb 13 20:26:03.505871 containerd[1529]: time="2025-02-13T20:26:03.505631787Z" level=info msg="TearDown network for sandbox \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\" successfully" Feb 13 20:26:03.505871 containerd[1529]: time="2025-02-13T20:26:03.505664006Z" level=info msg="StopPodSandbox for \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\" returns successfully" Feb 13 20:26:03.507291 containerd[1529]: time="2025-02-13T20:26:03.506974369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fdd49f766-8k5nw,Uid:f69e3431-9b6e-466d-b7b7-3df2f65cb067,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:26:03.511180 containerd[1529]: 2025-02-13 20:26:02.854 [INFO][4209] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Feb 13 20:26:03.511180 containerd[1529]: 2025-02-13 20:26:02.855 [INFO][4209] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" iface="eth0" netns="/var/run/netns/cni-61ae39df-5d32-3a1f-6257-d715d3290c4c" Feb 13 20:26:03.511180 containerd[1529]: 2025-02-13 20:26:02.855 [INFO][4209] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" iface="eth0" netns="/var/run/netns/cni-61ae39df-5d32-3a1f-6257-d715d3290c4c" Feb 13 20:26:03.511180 containerd[1529]: 2025-02-13 20:26:02.856 [INFO][4209] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" iface="eth0" netns="/var/run/netns/cni-61ae39df-5d32-3a1f-6257-d715d3290c4c" Feb 13 20:26:03.511180 containerd[1529]: 2025-02-13 20:26:02.856 [INFO][4209] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Feb 13 20:26:03.511180 containerd[1529]: 2025-02-13 20:26:02.856 [INFO][4209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Feb 13 20:26:03.511180 containerd[1529]: 2025-02-13 20:26:03.476 [INFO][4229] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" HandleID="k8s-pod-network.7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Workload="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:03.511180 containerd[1529]: 2025-02-13 20:26:03.481 [INFO][4229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:03.511180 containerd[1529]: 2025-02-13 20:26:03.497 [INFO][4229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:03.511180 containerd[1529]: 2025-02-13 20:26:03.505 [WARNING][4229] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" HandleID="k8s-pod-network.7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Workload="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:03.511180 containerd[1529]: 2025-02-13 20:26:03.505 [INFO][4229] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" HandleID="k8s-pod-network.7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Workload="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:03.511180 containerd[1529]: 2025-02-13 20:26:03.506 [INFO][4229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:03.511180 containerd[1529]: 2025-02-13 20:26:03.509 [INFO][4209] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Feb 13 20:26:03.514149 containerd[1529]: time="2025-02-13T20:26:03.511333907Z" level=info msg="TearDown network for sandbox \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\" successfully" Feb 13 20:26:03.514149 containerd[1529]: time="2025-02-13T20:26:03.511350014Z" level=info msg="StopPodSandbox for \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\" returns successfully" Feb 13 20:26:03.514149 containerd[1529]: time="2025-02-13T20:26:03.511806489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cntw8,Uid:b708f66b-cea3-4952-b290-318b6ed2fb1d,Namespace:calico-system,Attempt:1,}" Feb 13 20:26:03.513710 systemd[1]: run-netns-cni\x2d61ae39df\x2d5d32\x2d3a1f\x2d6257\x2dd715d3290c4c.mount: Deactivated successfully. 
Feb 13 20:26:03.763793 containerd[1529]: time="2025-02-13T20:26:03.763559616Z" level=info msg="StopPodSandbox for \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\"" Feb 13 20:26:03.764257 containerd[1529]: time="2025-02-13T20:26:03.764152720Z" level=info msg="StopPodSandbox for \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\"" Feb 13 20:26:03.791835 systemd-networkd[1451]: cali2528047d1ee: Link UP Feb 13 20:26:03.793578 systemd-networkd[1451]: cali2528047d1ee: Gained carrier Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.594 [INFO][4247] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cntw8-eth0 csi-node-driver- calico-system b708f66b-cea3-4952-b290-318b6ed2fb1d 743 0 2025-02-13 20:25:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cntw8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2528047d1ee [] []}} ContainerID="c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" Namespace="calico-system" Pod="csi-node-driver-cntw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cntw8-" Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.594 [INFO][4247] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" Namespace="calico-system" Pod="csi-node-driver-cntw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.638 [INFO][4266] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" HandleID="k8s-pod-network.c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" Workload="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.662 [INFO][4266] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" HandleID="k8s-pod-network.c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" Workload="localhost-k8s-csi--node--driver--cntw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310b50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cntw8", "timestamp":"2025-02-13 20:26:03.638468463 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.662 [INFO][4266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.662 [INFO][4266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.662 [INFO][4266] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.669 [INFO][4266] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" host="localhost" Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.748 [INFO][4266] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.753 [INFO][4266] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.755 [INFO][4266] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.757 [INFO][4266] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.757 [INFO][4266] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" host="localhost" Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.759 [INFO][4266] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.766 [INFO][4266] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" host="localhost" Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.775 [INFO][4266] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" host="localhost" Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.776 [INFO][4266] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" host="localhost" Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.776 [INFO][4266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:03.829017 containerd[1529]: 2025-02-13 20:26:03.776 [INFO][4266] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" HandleID="k8s-pod-network.c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" Workload="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:03.829990 containerd[1529]: 2025-02-13 20:26:03.783 [INFO][4247] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" Namespace="calico-system" Pod="csi-node-driver-cntw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cntw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cntw8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b708f66b-cea3-4952-b290-318b6ed2fb1d", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cntw8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2528047d1ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:03.829990 containerd[1529]: 2025-02-13 20:26:03.784 [INFO][4247] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" Namespace="calico-system" Pod="csi-node-driver-cntw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:03.829990 containerd[1529]: 2025-02-13 20:26:03.784 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2528047d1ee ContainerID="c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" Namespace="calico-system" Pod="csi-node-driver-cntw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:03.829990 containerd[1529]: 2025-02-13 20:26:03.794 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" Namespace="calico-system" Pod="csi-node-driver-cntw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:03.829990 containerd[1529]: 2025-02-13 20:26:03.795 [INFO][4247] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" Namespace="calico-system" 
Pod="csi-node-driver-cntw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cntw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cntw8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b708f66b-cea3-4952-b290-318b6ed2fb1d", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c", Pod:"csi-node-driver-cntw8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2528047d1ee", MAC:"b6:0e:00:f4:46:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:03.829990 containerd[1529]: 2025-02-13 20:26:03.819 [INFO][4247] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c" Namespace="calico-system" Pod="csi-node-driver-cntw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:03.899218 containerd[1529]: 
time="2025-02-13T20:26:03.898611989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:26:03.899218 containerd[1529]: time="2025-02-13T20:26:03.898668879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:26:03.899218 containerd[1529]: time="2025-02-13T20:26:03.898676629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:03.899218 containerd[1529]: time="2025-02-13T20:26:03.898741107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:03.915408 systemd-networkd[1451]: calif7725c4e6b0: Link UP Feb 13 20:26:03.917704 systemd-networkd[1451]: calif7725c4e6b0: Gained carrier Feb 13 20:26:03.947894 systemd[1]: Started cri-containerd-c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c.scope - libcontainer container c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c. 
Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.617 [INFO][4253] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0 calico-apiserver-5fdd49f766- calico-apiserver f69e3431-9b6e-466d-b7b7-3df2f65cb067 742 0 2025-02-13 20:25:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fdd49f766 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fdd49f766-8k5nw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif7725c4e6b0 [] []}} ContainerID="0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-8k5nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-" Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.618 [INFO][4253] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-8k5nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.661 [INFO][4271] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" HandleID="k8s-pod-network.0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" Workload="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.669 [INFO][4271] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" 
HandleID="k8s-pod-network.0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" Workload="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319430), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5fdd49f766-8k5nw", "timestamp":"2025-02-13 20:26:03.661326044 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.669 [INFO][4271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.777 [INFO][4271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.777 [INFO][4271] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.783 [INFO][4271] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" host="localhost" Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.851 [INFO][4271] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.861 [INFO][4271] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.869 [INFO][4271] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.878 [INFO][4271] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:26:03.953835 containerd[1529]: 
2025-02-13 20:26:03.878 [INFO][4271] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" host="localhost" Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.888 [INFO][4271] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68 Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.896 [INFO][4271] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" host="localhost" Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.907 [INFO][4271] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" host="localhost" Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.907 [INFO][4271] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" host="localhost" Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.907 [INFO][4271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:26:03.953835 containerd[1529]: 2025-02-13 20:26:03.907 [INFO][4271] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" HandleID="k8s-pod-network.0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" Workload="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:03.956186 containerd[1529]: 2025-02-13 20:26:03.910 [INFO][4253] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-8k5nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0", GenerateName:"calico-apiserver-5fdd49f766-", Namespace:"calico-apiserver", SelfLink:"", UID:"f69e3431-9b6e-466d-b7b7-3df2f65cb067", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fdd49f766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fdd49f766-8k5nw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7725c4e6b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:03.956186 containerd[1529]: 2025-02-13 20:26:03.910 [INFO][4253] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-8k5nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:03.956186 containerd[1529]: 2025-02-13 20:26:03.910 [INFO][4253] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7725c4e6b0 ContainerID="0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-8k5nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:03.956186 containerd[1529]: 2025-02-13 20:26:03.918 [INFO][4253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-8k5nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:03.956186 containerd[1529]: 2025-02-13 20:26:03.919 [INFO][4253] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-8k5nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0", GenerateName:"calico-apiserver-5fdd49f766-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"f69e3431-9b6e-466d-b7b7-3df2f65cb067", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fdd49f766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68", Pod:"calico-apiserver-5fdd49f766-8k5nw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7725c4e6b0", MAC:"2e:2e:0b:b4:ae:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:03.956186 containerd[1529]: 2025-02-13 20:26:03.943 [INFO][4253] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-8k5nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:03.983627 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:26:04.005989 containerd[1529]: 2025-02-13 20:26:03.876 [INFO][4306] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Feb 13 20:26:04.005989 
containerd[1529]: 2025-02-13 20:26:03.877 [INFO][4306] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" iface="eth0" netns="/var/run/netns/cni-026114a3-6312-fb5f-6f3d-2eed7fc0eda8" Feb 13 20:26:04.005989 containerd[1529]: 2025-02-13 20:26:03.877 [INFO][4306] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" iface="eth0" netns="/var/run/netns/cni-026114a3-6312-fb5f-6f3d-2eed7fc0eda8" Feb 13 20:26:04.005989 containerd[1529]: 2025-02-13 20:26:03.878 [INFO][4306] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" iface="eth0" netns="/var/run/netns/cni-026114a3-6312-fb5f-6f3d-2eed7fc0eda8" Feb 13 20:26:04.005989 containerd[1529]: 2025-02-13 20:26:03.878 [INFO][4306] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Feb 13 20:26:04.005989 containerd[1529]: 2025-02-13 20:26:03.878 [INFO][4306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Feb 13 20:26:04.005989 containerd[1529]: 2025-02-13 20:26:03.979 [INFO][4340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" HandleID="k8s-pod-network.344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Workload="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" Feb 13 20:26:04.005989 containerd[1529]: 2025-02-13 20:26:03.979 [INFO][4340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:04.005989 containerd[1529]: 2025-02-13 20:26:03.980 [INFO][4340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:26:04.005989 containerd[1529]: 2025-02-13 20:26:03.991 [WARNING][4340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" HandleID="k8s-pod-network.344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Workload="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" Feb 13 20:26:04.005989 containerd[1529]: 2025-02-13 20:26:03.992 [INFO][4340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" HandleID="k8s-pod-network.344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Workload="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" Feb 13 20:26:04.005989 containerd[1529]: 2025-02-13 20:26:03.996 [INFO][4340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:04.005989 containerd[1529]: 2025-02-13 20:26:03.997 [INFO][4306] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Feb 13 20:26:04.012435 containerd[1529]: time="2025-02-13T20:26:04.008123124Z" level=info msg="TearDown network for sandbox \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\" successfully" Feb 13 20:26:04.012435 containerd[1529]: time="2025-02-13T20:26:04.008468791Z" level=info msg="StopPodSandbox for \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\" returns successfully" Feb 13 20:26:04.012435 containerd[1529]: time="2025-02-13T20:26:04.010655603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fdd49f766-hwtb2,Uid:2e92037f-96bf-409f-a057-890289c6e006,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:26:04.026263 containerd[1529]: 2025-02-13 20:26:03.866 [INFO][4305] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Feb 13 20:26:04.026263 containerd[1529]: 2025-02-13 20:26:03.867 [INFO][4305] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" iface="eth0" netns="/var/run/netns/cni-82ceaac4-e439-7750-a3da-765702f262fb" Feb 13 20:26:04.026263 containerd[1529]: 2025-02-13 20:26:03.867 [INFO][4305] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" iface="eth0" netns="/var/run/netns/cni-82ceaac4-e439-7750-a3da-765702f262fb" Feb 13 20:26:04.026263 containerd[1529]: 2025-02-13 20:26:03.868 [INFO][4305] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" iface="eth0" netns="/var/run/netns/cni-82ceaac4-e439-7750-a3da-765702f262fb" Feb 13 20:26:04.026263 containerd[1529]: 2025-02-13 20:26:03.868 [INFO][4305] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Feb 13 20:26:04.026263 containerd[1529]: 2025-02-13 20:26:03.868 [INFO][4305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Feb 13 20:26:04.026263 containerd[1529]: 2025-02-13 20:26:03.990 [INFO][4334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" HandleID="k8s-pod-network.c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Workload="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:04.026263 containerd[1529]: 2025-02-13 20:26:03.992 [INFO][4334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:04.026263 containerd[1529]: 2025-02-13 20:26:03.997 [INFO][4334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:04.026263 containerd[1529]: 2025-02-13 20:26:04.005 [WARNING][4334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" HandleID="k8s-pod-network.c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Workload="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:04.026263 containerd[1529]: 2025-02-13 20:26:04.005 [INFO][4334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" HandleID="k8s-pod-network.c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Workload="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:04.026263 containerd[1529]: 2025-02-13 20:26:04.013 [INFO][4334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:04.026263 containerd[1529]: 2025-02-13 20:26:04.023 [INFO][4305] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Feb 13 20:26:04.027415 containerd[1529]: time="2025-02-13T20:26:04.026983815Z" level=info msg="TearDown network for sandbox \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\" successfully" Feb 13 20:26:04.027415 containerd[1529]: time="2025-02-13T20:26:04.027006208Z" level=info msg="StopPodSandbox for \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\" returns successfully" Feb 13 20:26:04.028158 containerd[1529]: time="2025-02-13T20:26:04.027981556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f9bb95f-nhppb,Uid:348324a7-a185-4686-9c4e-5b8f58c85d3b,Namespace:calico-system,Attempt:1,}" Feb 13 20:26:04.037873 containerd[1529]: time="2025-02-13T20:26:04.037742964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cntw8,Uid:b708f66b-cea3-4952-b290-318b6ed2fb1d,Namespace:calico-system,Attempt:1,} returns sandbox id \"c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c\"" Feb 13 20:26:04.038991 
containerd[1529]: time="2025-02-13T20:26:04.038536669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:26:04.038991 containerd[1529]: time="2025-02-13T20:26:04.038783615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:26:04.038991 containerd[1529]: time="2025-02-13T20:26:04.038795700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:04.038991 containerd[1529]: time="2025-02-13T20:26:04.038976592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:04.040991 containerd[1529]: time="2025-02-13T20:26:04.040522804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:26:04.078974 systemd[1]: Started cri-containerd-0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68.scope - libcontainer container 0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68. 
Feb 13 20:26:04.106602 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:26:04.235576 containerd[1529]: time="2025-02-13T20:26:04.235463050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fdd49f766-8k5nw,Uid:f69e3431-9b6e-466d-b7b7-3df2f65cb067,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68\"" Feb 13 20:26:04.351125 systemd-networkd[1451]: cali0ebde063fc4: Link UP Feb 13 20:26:04.352172 systemd-networkd[1451]: cali0ebde063fc4: Gained carrier Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.159 [INFO][4449] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0 calico-kube-controllers-67f9bb95f- calico-system 348324a7-a185-4686-9c4e-5b8f58c85d3b 752 0 2025-02-13 20:25:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67f9bb95f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67f9bb95f-nhppb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0ebde063fc4 [] []}} ContainerID="bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" Namespace="calico-system" Pod="calico-kube-controllers-67f9bb95f-nhppb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-" Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.160 [INFO][4449] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" Namespace="calico-system" Pod="calico-kube-controllers-67f9bb95f-nhppb" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.216 [INFO][4462] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" HandleID="k8s-pod-network.bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" Workload="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.233 [INFO][4462] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" HandleID="k8s-pod-network.bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" Workload="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ce0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67f9bb95f-nhppb", "timestamp":"2025-02-13 20:26:04.215976738 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.233 [INFO][4462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.233 [INFO][4462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.233 [INFO][4462] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.236 [INFO][4462] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" host="localhost" Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.329 [INFO][4462] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.334 [INFO][4462] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.336 [INFO][4462] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.338 [INFO][4462] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.338 [INFO][4462] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" host="localhost" Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.339 [INFO][4462] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.341 [INFO][4462] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" host="localhost" Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.346 [INFO][4462] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" host="localhost" Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.346 [INFO][4462] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" host="localhost" Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.346 [INFO][4462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:04.365379 containerd[1529]: 2025-02-13 20:26:04.346 [INFO][4462] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" HandleID="k8s-pod-network.bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" Workload="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:04.366586 containerd[1529]: 2025-02-13 20:26:04.348 [INFO][4449] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" Namespace="calico-system" Pod="calico-kube-controllers-67f9bb95f-nhppb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0", GenerateName:"calico-kube-controllers-67f9bb95f-", Namespace:"calico-system", SelfLink:"", UID:"348324a7-a185-4686-9c4e-5b8f58c85d3b", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f9bb95f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67f9bb95f-nhppb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0ebde063fc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:04.366586 containerd[1529]: 2025-02-13 20:26:04.348 [INFO][4449] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" Namespace="calico-system" Pod="calico-kube-controllers-67f9bb95f-nhppb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:04.366586 containerd[1529]: 2025-02-13 20:26:04.348 [INFO][4449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ebde063fc4 ContainerID="bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" Namespace="calico-system" Pod="calico-kube-controllers-67f9bb95f-nhppb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:04.366586 containerd[1529]: 2025-02-13 20:26:04.352 [INFO][4449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" Namespace="calico-system" Pod="calico-kube-controllers-67f9bb95f-nhppb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:04.366586 containerd[1529]: 2025-02-13 20:26:04.353 [INFO][4449] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" Namespace="calico-system" Pod="calico-kube-controllers-67f9bb95f-nhppb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0", GenerateName:"calico-kube-controllers-67f9bb95f-", Namespace:"calico-system", SelfLink:"", UID:"348324a7-a185-4686-9c4e-5b8f58c85d3b", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f9bb95f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a", Pod:"calico-kube-controllers-67f9bb95f-nhppb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0ebde063fc4", MAC:"66:7f:e3:60:1d:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:04.366586 containerd[1529]: 2025-02-13 20:26:04.363 [INFO][4449] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a" Namespace="calico-system" Pod="calico-kube-controllers-67f9bb95f-nhppb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:04.382336 containerd[1529]: time="2025-02-13T20:26:04.382210461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:26:04.382336 containerd[1529]: time="2025-02-13T20:26:04.382268295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:26:04.382336 containerd[1529]: time="2025-02-13T20:26:04.382279762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:04.382965 containerd[1529]: time="2025-02-13T20:26:04.382351215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:04.395825 systemd[1]: Started cri-containerd-bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a.scope - libcontainer container bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a. 
Feb 13 20:26:04.404841 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:26:04.429008 containerd[1529]: time="2025-02-13T20:26:04.428949801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f9bb95f-nhppb,Uid:348324a7-a185-4686-9c4e-5b8f58c85d3b,Namespace:calico-system,Attempt:1,} returns sandbox id \"bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a\"" Feb 13 20:26:04.455074 systemd-networkd[1451]: cali11495d770d3: Link UP Feb 13 20:26:04.456165 systemd-networkd[1451]: cali11495d770d3: Gained carrier Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.184 [INFO][4438] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0 calico-apiserver-5fdd49f766- calico-apiserver 2e92037f-96bf-409f-a057-890289c6e006 753 0 2025-02-13 20:25:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fdd49f766 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fdd49f766-hwtb2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali11495d770d3 [] []}} ContainerID="83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-hwtb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-" Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.184 [INFO][4438] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-hwtb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" 
Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.233 [INFO][4467] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" HandleID="k8s-pod-network.83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" Workload="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.329 [INFO][4467] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" HandleID="k8s-pod-network.83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" Workload="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bb330), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5fdd49f766-hwtb2", "timestamp":"2025-02-13 20:26:04.233266012 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.329 [INFO][4467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.346 [INFO][4467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.346 [INFO][4467] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.348 [INFO][4467] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" host="localhost" Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.430 [INFO][4467] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.434 [INFO][4467] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.436 [INFO][4467] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.438 [INFO][4467] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.438 [INFO][4467] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" host="localhost" Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.440 [INFO][4467] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7 Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.442 [INFO][4467] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" host="localhost" Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.449 [INFO][4467] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" host="localhost" Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.449 [INFO][4467] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" host="localhost" Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.449 [INFO][4467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:04.467338 containerd[1529]: 2025-02-13 20:26:04.449 [INFO][4467] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" HandleID="k8s-pod-network.83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" Workload="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" Feb 13 20:26:04.468032 containerd[1529]: 2025-02-13 20:26:04.451 [INFO][4438] cni-plugin/k8s.go 386: Populated endpoint ContainerID="83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-hwtb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0", GenerateName:"calico-apiserver-5fdd49f766-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e92037f-96bf-409f-a057-890289c6e006", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fdd49f766", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fdd49f766-hwtb2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11495d770d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:04.468032 containerd[1529]: 2025-02-13 20:26:04.451 [INFO][4438] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-hwtb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" Feb 13 20:26:04.468032 containerd[1529]: 2025-02-13 20:26:04.451 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11495d770d3 ContainerID="83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-hwtb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" Feb 13 20:26:04.468032 containerd[1529]: 2025-02-13 20:26:04.454 [INFO][4438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-hwtb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" Feb 13 20:26:04.468032 containerd[1529]: 2025-02-13 20:26:04.454 [INFO][4438] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-hwtb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0", GenerateName:"calico-apiserver-5fdd49f766-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e92037f-96bf-409f-a057-890289c6e006", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fdd49f766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7", Pod:"calico-apiserver-5fdd49f766-hwtb2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11495d770d3", MAC:"36:8e:89:af:40:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:04.468032 containerd[1529]: 2025-02-13 20:26:04.462 [INFO][4438] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7" Namespace="calico-apiserver" Pod="calico-apiserver-5fdd49f766-hwtb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" Feb 13 20:26:04.488886 containerd[1529]: time="2025-02-13T20:26:04.488763614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:26:04.488886 containerd[1529]: time="2025-02-13T20:26:04.488816065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:26:04.488886 containerd[1529]: time="2025-02-13T20:26:04.488825346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:04.489299 containerd[1529]: time="2025-02-13T20:26:04.489098516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:04.505742 systemd[1]: Started cri-containerd-83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7.scope - libcontainer container 83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7. Feb 13 20:26:04.512182 systemd[1]: run-netns-cni\x2d82ceaac4\x2de439\x2d7750\x2da3da\x2d765702f262fb.mount: Deactivated successfully. Feb 13 20:26:04.512340 systemd[1]: run-netns-cni\x2d026114a3\x2d6312\x2dfb5f\x2d6f3d\x2d2eed7fc0eda8.mount: Deactivated successfully. 
Feb 13 20:26:04.519256 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:26:04.541475 containerd[1529]: time="2025-02-13T20:26:04.541450233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fdd49f766-hwtb2,Uid:2e92037f-96bf-409f-a057-890289c6e006,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7\"" Feb 13 20:26:04.860843 systemd-networkd[1451]: cali2528047d1ee: Gained IPv6LL Feb 13 20:26:05.684127 containerd[1529]: time="2025-02-13T20:26:05.684073841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:05.692241 containerd[1529]: time="2025-02-13T20:26:05.692201758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:26:05.693041 containerd[1529]: time="2025-02-13T20:26:05.693015654Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:05.694155 containerd[1529]: time="2025-02-13T20:26:05.694132423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:05.694629 containerd[1529]: time="2025-02-13T20:26:05.694521660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.653966615s" Feb 13 20:26:05.694629 containerd[1529]: 
time="2025-02-13T20:26:05.694540529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:26:05.695343 containerd[1529]: time="2025-02-13T20:26:05.695327238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:26:05.700535 containerd[1529]: time="2025-02-13T20:26:05.700512844Z" level=info msg="CreateContainer within sandbox \"c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:26:05.729334 containerd[1529]: time="2025-02-13T20:26:05.729299219Z" level=info msg="CreateContainer within sandbox \"c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b6b7d34a82b2b890126cf11b63a86b8299603ae14516b4413af930c4006e1b0f\"" Feb 13 20:26:05.730009 containerd[1529]: time="2025-02-13T20:26:05.729726921Z" level=info msg="StartContainer for \"b6b7d34a82b2b890126cf11b63a86b8299603ae14516b4413af930c4006e1b0f\"" Feb 13 20:26:05.758693 systemd[1]: Started cri-containerd-b6b7d34a82b2b890126cf11b63a86b8299603ae14516b4413af930c4006e1b0f.scope - libcontainer container b6b7d34a82b2b890126cf11b63a86b8299603ae14516b4413af930c4006e1b0f. 
Feb 13 20:26:05.766987 containerd[1529]: time="2025-02-13T20:26:05.766963312Z" level=info msg="StopPodSandbox for \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\"" Feb 13 20:26:05.767184 containerd[1529]: time="2025-02-13T20:26:05.767163759Z" level=info msg="StopPodSandbox for \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\"" Feb 13 20:26:05.805316 containerd[1529]: time="2025-02-13T20:26:05.805288442Z" level=info msg="StartContainer for \"b6b7d34a82b2b890126cf11b63a86b8299603ae14516b4413af930c4006e1b0f\" returns successfully" Feb 13 20:26:05.820715 systemd-networkd[1451]: calif7725c4e6b0: Gained IPv6LL Feb 13 20:26:05.894400 containerd[1529]: 2025-02-13 20:26:05.836 [INFO][4644] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Feb 13 20:26:05.894400 containerd[1529]: 2025-02-13 20:26:05.836 [INFO][4644] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" iface="eth0" netns="/var/run/netns/cni-b14f3f68-2ad0-b302-7834-b6141cb6f580" Feb 13 20:26:05.894400 containerd[1529]: 2025-02-13 20:26:05.837 [INFO][4644] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" iface="eth0" netns="/var/run/netns/cni-b14f3f68-2ad0-b302-7834-b6141cb6f580" Feb 13 20:26:05.894400 containerd[1529]: 2025-02-13 20:26:05.837 [INFO][4644] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" iface="eth0" netns="/var/run/netns/cni-b14f3f68-2ad0-b302-7834-b6141cb6f580" Feb 13 20:26:05.894400 containerd[1529]: 2025-02-13 20:26:05.837 [INFO][4644] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Feb 13 20:26:05.894400 containerd[1529]: 2025-02-13 20:26:05.837 [INFO][4644] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Feb 13 20:26:05.894400 containerd[1529]: 2025-02-13 20:26:05.878 [INFO][4668] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" HandleID="k8s-pod-network.5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Workload="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:05.894400 containerd[1529]: 2025-02-13 20:26:05.880 [INFO][4668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:05.894400 containerd[1529]: 2025-02-13 20:26:05.880 [INFO][4668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:05.894400 containerd[1529]: 2025-02-13 20:26:05.890 [WARNING][4668] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" HandleID="k8s-pod-network.5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Workload="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:05.894400 containerd[1529]: 2025-02-13 20:26:05.890 [INFO][4668] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" HandleID="k8s-pod-network.5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Workload="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:05.894400 containerd[1529]: 2025-02-13 20:26:05.891 [INFO][4668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:05.894400 containerd[1529]: 2025-02-13 20:26:05.892 [INFO][4644] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Feb 13 20:26:05.896523 containerd[1529]: time="2025-02-13T20:26:05.896437190Z" level=info msg="TearDown network for sandbox \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\" successfully" Feb 13 20:26:05.896523 containerd[1529]: time="2025-02-13T20:26:05.896462582Z" level=info msg="StopPodSandbox for \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\" returns successfully" Feb 13 20:26:05.896836 systemd[1]: run-netns-cni\x2db14f3f68\x2d2ad0\x2db302\x2d7834\x2db6141cb6f580.mount: Deactivated successfully. 
Feb 13 20:26:05.900789 containerd[1529]: time="2025-02-13T20:26:05.900757471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tjrfw,Uid:e18627c8-7f32-45f6-9afb-f589410799df,Namespace:kube-system,Attempt:1,}" Feb 13 20:26:05.907666 containerd[1529]: 2025-02-13 20:26:05.840 [INFO][4648] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Feb 13 20:26:05.907666 containerd[1529]: 2025-02-13 20:26:05.841 [INFO][4648] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" iface="eth0" netns="/var/run/netns/cni-a309f389-bb90-a100-dee5-607d9b494965" Feb 13 20:26:05.907666 containerd[1529]: 2025-02-13 20:26:05.841 [INFO][4648] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" iface="eth0" netns="/var/run/netns/cni-a309f389-bb90-a100-dee5-607d9b494965" Feb 13 20:26:05.907666 containerd[1529]: 2025-02-13 20:26:05.841 [INFO][4648] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" iface="eth0" netns="/var/run/netns/cni-a309f389-bb90-a100-dee5-607d9b494965" Feb 13 20:26:05.907666 containerd[1529]: 2025-02-13 20:26:05.841 [INFO][4648] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Feb 13 20:26:05.907666 containerd[1529]: 2025-02-13 20:26:05.841 [INFO][4648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Feb 13 20:26:05.907666 containerd[1529]: 2025-02-13 20:26:05.885 [INFO][4672] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" HandleID="k8s-pod-network.a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Workload="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:05.907666 containerd[1529]: 2025-02-13 20:26:05.886 [INFO][4672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:05.907666 containerd[1529]: 2025-02-13 20:26:05.891 [INFO][4672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:05.907666 containerd[1529]: 2025-02-13 20:26:05.900 [WARNING][4672] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" HandleID="k8s-pod-network.a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Workload="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:05.907666 containerd[1529]: 2025-02-13 20:26:05.900 [INFO][4672] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" HandleID="k8s-pod-network.a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Workload="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:05.907666 containerd[1529]: 2025-02-13 20:26:05.901 [INFO][4672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:05.907666 containerd[1529]: 2025-02-13 20:26:05.904 [INFO][4648] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Feb 13 20:26:05.909582 containerd[1529]: time="2025-02-13T20:26:05.907915862Z" level=info msg="TearDown network for sandbox \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\" successfully" Feb 13 20:26:05.909582 containerd[1529]: time="2025-02-13T20:26:05.907932019Z" level=info msg="StopPodSandbox for \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\" returns successfully" Feb 13 20:26:05.909582 containerd[1529]: time="2025-02-13T20:26:05.908359640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j7gxr,Uid:f7d7c602-e34f-4d46-9a2c-8104daaf8055,Namespace:kube-system,Attempt:1,}" Feb 13 20:26:06.012763 systemd-networkd[1451]: cali11495d770d3: Gained IPv6LL Feb 13 20:26:06.082770 systemd-networkd[1451]: cali1ffae4ad6fb: Link UP Feb 13 20:26:06.083170 systemd-networkd[1451]: cali1ffae4ad6fb: Gained carrier Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:05.947 [INFO][4680] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0 coredns-6f6b679f8f- kube-system e18627c8-7f32-45f6-9afb-f589410799df 775 0 2025-02-13 20:25:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-tjrfw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1ffae4ad6fb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" Namespace="kube-system" Pod="coredns-6f6b679f8f-tjrfw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tjrfw-" Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:05.947 [INFO][4680] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" Namespace="kube-system" Pod="coredns-6f6b679f8f-tjrfw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:05.977 [INFO][4703] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" HandleID="k8s-pod-network.a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" Workload="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:05.989 [INFO][4703] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" HandleID="k8s-pod-network.a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" Workload="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290ed0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-tjrfw", 
"timestamp":"2025-02-13 20:26:05.977724147 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:05.990 [INFO][4703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:05.990 [INFO][4703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:05.990 [INFO][4703] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:05.992 [INFO][4703] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" host="localhost" Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:05.995 [INFO][4703] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:06.002 [INFO][4703] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:06.003 [INFO][4703] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:06.004 [INFO][4703] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:06.004 [INFO][4703] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" host="localhost" Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:06.005 [INFO][4703] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1 Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:06.032 [INFO][4703] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" host="localhost" Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:06.078 [INFO][4703] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" host="localhost" Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:06.078 [INFO][4703] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" host="localhost" Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:06.078 [INFO][4703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:26:06.101769 containerd[1529]: 2025-02-13 20:26:06.078 [INFO][4703] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" HandleID="k8s-pod-network.a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" Workload="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:06.102252 containerd[1529]: 2025-02-13 20:26:06.080 [INFO][4680] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" Namespace="kube-system" Pod="coredns-6f6b679f8f-tjrfw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e18627c8-7f32-45f6-9afb-f589410799df", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-tjrfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ffae4ad6fb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:06.102252 containerd[1529]: 2025-02-13 20:26:06.080 [INFO][4680] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" Namespace="kube-system" Pod="coredns-6f6b679f8f-tjrfw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:06.102252 containerd[1529]: 2025-02-13 20:26:06.080 [INFO][4680] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ffae4ad6fb ContainerID="a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" Namespace="kube-system" Pod="coredns-6f6b679f8f-tjrfw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:06.102252 containerd[1529]: 2025-02-13 20:26:06.083 [INFO][4680] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" Namespace="kube-system" Pod="coredns-6f6b679f8f-tjrfw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:06.102252 containerd[1529]: 2025-02-13 20:26:06.083 [INFO][4680] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" Namespace="kube-system" Pod="coredns-6f6b679f8f-tjrfw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e18627c8-7f32-45f6-9afb-f589410799df", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1", Pod:"coredns-6f6b679f8f-tjrfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ffae4ad6fb", MAC:"56:e0:7c:ed:32:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:06.102252 containerd[1529]: 2025-02-13 20:26:06.098 [INFO][4680] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-tjrfw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:06.122948 containerd[1529]: time="2025-02-13T20:26:06.122218733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:26:06.122948 containerd[1529]: time="2025-02-13T20:26:06.122270009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:26:06.122948 containerd[1529]: time="2025-02-13T20:26:06.122285181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:06.122948 containerd[1529]: time="2025-02-13T20:26:06.122701991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:06.138476 systemd-networkd[1451]: cali0e871bb45ee: Link UP Feb 13 20:26:06.140064 systemd[1]: Started cri-containerd-a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1.scope - libcontainer container a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1. 
Feb 13 20:26:06.140363 systemd-networkd[1451]: cali0e871bb45ee: Gained carrier Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:05.960 [INFO][4690] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0 coredns-6f6b679f8f- kube-system f7d7c602-e34f-4d46-9a2c-8104daaf8055 777 0 2025-02-13 20:25:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-j7gxr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0e871bb45ee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" Namespace="kube-system" Pod="coredns-6f6b679f8f-j7gxr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--j7gxr-" Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:05.960 [INFO][4690] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" Namespace="kube-system" Pod="coredns-6f6b679f8f-j7gxr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:05.994 [INFO][4708] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" HandleID="k8s-pod-network.faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" Workload="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.000 [INFO][4708] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" HandleID="k8s-pod-network.faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" 
Workload="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011be00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-j7gxr", "timestamp":"2025-02-13 20:26:05.994989758 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.000 [INFO][4708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.078 [INFO][4708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.079 [INFO][4708] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.092 [INFO][4708] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" host="localhost" Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.102 [INFO][4708] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.107 [INFO][4708] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.109 [INFO][4708] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.112 [INFO][4708] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.112 [INFO][4708] ipam/ipam.go 1180: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" host="localhost" Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.113 [INFO][4708] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77 Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.116 [INFO][4708] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" host="localhost" Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.124 [INFO][4708] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" host="localhost" Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.124 [INFO][4708] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" host="localhost" Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.124 [INFO][4708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:26:06.152648 containerd[1529]: 2025-02-13 20:26:06.124 [INFO][4708] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" HandleID="k8s-pod-network.faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" Workload="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:06.153582 containerd[1529]: 2025-02-13 20:26:06.132 [INFO][4690] cni-plugin/k8s.go 386: Populated endpoint ContainerID="faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" Namespace="kube-system" Pod="coredns-6f6b679f8f-j7gxr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f7d7c602-e34f-4d46-9a2c-8104daaf8055", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-j7gxr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e871bb45ee", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:06.153582 containerd[1529]: 2025-02-13 20:26:06.134 [INFO][4690] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" Namespace="kube-system" Pod="coredns-6f6b679f8f-j7gxr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:06.153582 containerd[1529]: 2025-02-13 20:26:06.134 [INFO][4690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e871bb45ee ContainerID="faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" Namespace="kube-system" Pod="coredns-6f6b679f8f-j7gxr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:06.153582 containerd[1529]: 2025-02-13 20:26:06.141 [INFO][4690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" Namespace="kube-system" Pod="coredns-6f6b679f8f-j7gxr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:06.153582 containerd[1529]: 2025-02-13 20:26:06.141 [INFO][4690] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" Namespace="kube-system" Pod="coredns-6f6b679f8f-j7gxr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f7d7c602-e34f-4d46-9a2c-8104daaf8055", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77", Pod:"coredns-6f6b679f8f-j7gxr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e871bb45ee", MAC:"3a:f5:90:22:da:05", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:06.153582 containerd[1529]: 2025-02-13 20:26:06.150 [INFO][4690] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-j7gxr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:06.160194 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:26:06.179230 containerd[1529]: time="2025-02-13T20:26:06.179148732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:26:06.179230 containerd[1529]: time="2025-02-13T20:26:06.179194800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:26:06.179230 containerd[1529]: time="2025-02-13T20:26:06.179205516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:06.180007 containerd[1529]: time="2025-02-13T20:26:06.179322136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:26:06.193316 containerd[1529]: time="2025-02-13T20:26:06.193228224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tjrfw,Uid:e18627c8-7f32-45f6-9afb-f589410799df,Namespace:kube-system,Attempt:1,} returns sandbox id \"a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1\"" Feb 13 20:26:06.197771 systemd[1]: Started cri-containerd-faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77.scope - libcontainer container faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77. 
Feb 13 20:26:06.203369 containerd[1529]: time="2025-02-13T20:26:06.199572066Z" level=info msg="CreateContainer within sandbox \"a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:26:06.208338 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:26:06.234123 containerd[1529]: time="2025-02-13T20:26:06.234054239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j7gxr,Uid:f7d7c602-e34f-4d46-9a2c-8104daaf8055,Namespace:kube-system,Attempt:1,} returns sandbox id \"faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77\"" Feb 13 20:26:06.236790 containerd[1529]: time="2025-02-13T20:26:06.236761969Z" level=info msg="CreateContainer within sandbox \"faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:26:06.268863 systemd-networkd[1451]: cali0ebde063fc4: Gained IPv6LL Feb 13 20:26:06.275628 containerd[1529]: time="2025-02-13T20:26:06.275452213Z" level=info msg="CreateContainer within sandbox \"a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0f5cdb79ff084e14a9f41c8a396c086c4714bfe90ad1f93099eb87a32a84fa98\"" Feb 13 20:26:06.275895 containerd[1529]: time="2025-02-13T20:26:06.275875938Z" level=info msg="CreateContainer within sandbox \"faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3da04f86d82c1cac35d01c6b482de74b367e19178a906b570baa926e6f215fc5\"" Feb 13 20:26:06.276473 containerd[1529]: time="2025-02-13T20:26:06.276130738Z" level=info msg="StartContainer for \"3da04f86d82c1cac35d01c6b482de74b367e19178a906b570baa926e6f215fc5\"" Feb 13 20:26:06.277005 containerd[1529]: time="2025-02-13T20:26:06.276134262Z" level=info 
msg="StartContainer for \"0f5cdb79ff084e14a9f41c8a396c086c4714bfe90ad1f93099eb87a32a84fa98\"" Feb 13 20:26:06.298752 systemd[1]: Started cri-containerd-3da04f86d82c1cac35d01c6b482de74b367e19178a906b570baa926e6f215fc5.scope - libcontainer container 3da04f86d82c1cac35d01c6b482de74b367e19178a906b570baa926e6f215fc5. Feb 13 20:26:06.301203 systemd[1]: Started cri-containerd-0f5cdb79ff084e14a9f41c8a396c086c4714bfe90ad1f93099eb87a32a84fa98.scope - libcontainer container 0f5cdb79ff084e14a9f41c8a396c086c4714bfe90ad1f93099eb87a32a84fa98. Feb 13 20:26:06.330237 containerd[1529]: time="2025-02-13T20:26:06.330206613Z" level=info msg="StartContainer for \"3da04f86d82c1cac35d01c6b482de74b367e19178a906b570baa926e6f215fc5\" returns successfully" Feb 13 20:26:06.331441 containerd[1529]: time="2025-02-13T20:26:06.330268233Z" level=info msg="StartContainer for \"0f5cdb79ff084e14a9f41c8a396c086c4714bfe90ad1f93099eb87a32a84fa98\" returns successfully" Feb 13 20:26:06.626308 kubelet[2732]: I0213 20:26:06.626256 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-j7gxr" podStartSLOduration=39.626239309 podStartE2EDuration="39.626239309s" podCreationTimestamp="2025-02-13 20:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:26:06.601648263 +0000 UTC m=+43.913771327" watchObservedRunningTime="2025-02-13 20:26:06.626239309 +0000 UTC m=+43.938362376" Feb 13 20:26:06.720381 systemd[1]: run-netns-cni\x2da309f389\x2dbb90\x2da100\x2ddee5\x2d607d9b494965.mount: Deactivated successfully. 
Feb 13 20:26:07.356761 systemd-networkd[1451]: cali0e871bb45ee: Gained IPv6LL Feb 13 20:26:07.612851 systemd-networkd[1451]: cali1ffae4ad6fb: Gained IPv6LL Feb 13 20:26:07.620769 kubelet[2732]: I0213 20:26:07.620718 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-tjrfw" podStartSLOduration=40.62070788 podStartE2EDuration="40.62070788s" podCreationTimestamp="2025-02-13 20:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:26:06.626010981 +0000 UTC m=+43.938134048" watchObservedRunningTime="2025-02-13 20:26:07.62070788 +0000 UTC m=+44.932830948" Feb 13 20:26:08.281485 containerd[1529]: time="2025-02-13T20:26:08.281443343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:08.282391 containerd[1529]: time="2025-02-13T20:26:08.282285663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 20:26:08.282908 containerd[1529]: time="2025-02-13T20:26:08.282886217Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:08.284388 containerd[1529]: time="2025-02-13T20:26:08.284329608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:08.285352 containerd[1529]: time="2025-02-13T20:26:08.284947160Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.589597179s" Feb 13 20:26:08.285352 containerd[1529]: time="2025-02-13T20:26:08.284976273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:26:08.286444 containerd[1529]: time="2025-02-13T20:26:08.286159118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:26:08.288498 containerd[1529]: time="2025-02-13T20:26:08.288474777Z" level=info msg="CreateContainer within sandbox \"0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:26:08.328877 containerd[1529]: time="2025-02-13T20:26:08.328844125Z" level=info msg="CreateContainer within sandbox \"0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5f279407059d3010f7360ed89be125c04998ea566094e4866031a557cb656cb8\"" Feb 13 20:26:08.329871 containerd[1529]: time="2025-02-13T20:26:08.329853977Z" level=info msg="StartContainer for \"5f279407059d3010f7360ed89be125c04998ea566094e4866031a557cb656cb8\"" Feb 13 20:26:08.366977 systemd[1]: run-containerd-runc-k8s.io-5f279407059d3010f7360ed89be125c04998ea566094e4866031a557cb656cb8-runc.ecj2eG.mount: Deactivated successfully. Feb 13 20:26:08.372734 systemd[1]: Started cri-containerd-5f279407059d3010f7360ed89be125c04998ea566094e4866031a557cb656cb8.scope - libcontainer container 5f279407059d3010f7360ed89be125c04998ea566094e4866031a557cb656cb8. 
Feb 13 20:26:08.405089 containerd[1529]: time="2025-02-13T20:26:08.405011720Z" level=info msg="StartContainer for \"5f279407059d3010f7360ed89be125c04998ea566094e4866031a557cb656cb8\" returns successfully" Feb 13 20:26:08.612047 kubelet[2732]: I0213 20:26:08.611952 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fdd49f766-8k5nw" podStartSLOduration=31.563633139 podStartE2EDuration="35.61185792s" podCreationTimestamp="2025-02-13 20:25:33 +0000 UTC" firstStartedPulling="2025-02-13 20:26:04.238052876 +0000 UTC m=+41.550175940" lastFinishedPulling="2025-02-13 20:26:08.286277647 +0000 UTC m=+45.598400721" observedRunningTime="2025-02-13 20:26:08.611605291 +0000 UTC m=+45.923728364" watchObservedRunningTime="2025-02-13 20:26:08.61185792 +0000 UTC m=+45.923980980" Feb 13 20:26:09.603491 kubelet[2732]: I0213 20:26:09.603462 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:26:10.945086 containerd[1529]: time="2025-02-13T20:26:10.944983996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:10.950777 containerd[1529]: time="2025-02-13T20:26:10.950744154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 20:26:10.957728 containerd[1529]: time="2025-02-13T20:26:10.957683642Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:10.966790 containerd[1529]: time="2025-02-13T20:26:10.966541407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:10.967132 containerd[1529]: 
time="2025-02-13T20:26:10.967105407Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.680924779s" Feb 13 20:26:10.967170 containerd[1529]: time="2025-02-13T20:26:10.967136601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 20:26:10.968139 containerd[1529]: time="2025-02-13T20:26:10.968119895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:26:10.979382 containerd[1529]: time="2025-02-13T20:26:10.979323994Z" level=info msg="CreateContainer within sandbox \"bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:26:10.988497 containerd[1529]: time="2025-02-13T20:26:10.987695614Z" level=info msg="CreateContainer within sandbox \"bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"18ff327a8619900fd3bdab0f359fae5bfd09bfa491e16acaa11edb98e7c1d5b6\"" Feb 13 20:26:10.990521 containerd[1529]: time="2025-02-13T20:26:10.990502776Z" level=info msg="StartContainer for \"18ff327a8619900fd3bdab0f359fae5bfd09bfa491e16acaa11edb98e7c1d5b6\"" Feb 13 20:26:11.026762 systemd[1]: Started cri-containerd-18ff327a8619900fd3bdab0f359fae5bfd09bfa491e16acaa11edb98e7c1d5b6.scope - libcontainer container 18ff327a8619900fd3bdab0f359fae5bfd09bfa491e16acaa11edb98e7c1d5b6. 
Feb 13 20:26:11.057043 containerd[1529]: time="2025-02-13T20:26:11.057018711Z" level=info msg="StartContainer for \"18ff327a8619900fd3bdab0f359fae5bfd09bfa491e16acaa11edb98e7c1d5b6\" returns successfully" Feb 13 20:26:11.521720 containerd[1529]: time="2025-02-13T20:26:11.521684763Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:11.526565 containerd[1529]: time="2025-02-13T20:26:11.526407922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:26:11.535904 containerd[1529]: time="2025-02-13T20:26:11.535833188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 567.691006ms" Feb 13 20:26:11.535904 containerd[1529]: time="2025-02-13T20:26:11.535858511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:26:11.538793 containerd[1529]: time="2025-02-13T20:26:11.536648617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:26:11.543308 containerd[1529]: time="2025-02-13T20:26:11.543280617Z" level=info msg="CreateContainer within sandbox \"83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:26:11.678979 containerd[1529]: time="2025-02-13T20:26:11.678899771Z" level=info msg="CreateContainer within sandbox \"83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"14d930f81cc75f138e8df1ddf1ca5771deb5e5133d7a3855a1d2f950f0db4807\"" Feb 13 20:26:11.695362 containerd[1529]: time="2025-02-13T20:26:11.695319160Z" level=info msg="StartContainer for \"14d930f81cc75f138e8df1ddf1ca5771deb5e5133d7a3855a1d2f950f0db4807\"" Feb 13 20:26:11.739008 kubelet[2732]: I0213 20:26:11.737296 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67f9bb95f-nhppb" podStartSLOduration=32.199810783 podStartE2EDuration="38.737283297s" podCreationTimestamp="2025-02-13 20:25:33 +0000 UTC" firstStartedPulling="2025-02-13 20:26:04.430374513 +0000 UTC m=+41.742497577" lastFinishedPulling="2025-02-13 20:26:10.96784702 +0000 UTC m=+48.279970091" observedRunningTime="2025-02-13 20:26:11.653546051 +0000 UTC m=+48.965669128" watchObservedRunningTime="2025-02-13 20:26:11.737283297 +0000 UTC m=+49.049406362" Feb 13 20:26:11.752951 systemd[1]: Started cri-containerd-14d930f81cc75f138e8df1ddf1ca5771deb5e5133d7a3855a1d2f950f0db4807.scope - libcontainer container 14d930f81cc75f138e8df1ddf1ca5771deb5e5133d7a3855a1d2f950f0db4807. 
Feb 13 20:26:11.799338 containerd[1529]: time="2025-02-13T20:26:11.799183422Z" level=info msg="StartContainer for \"14d930f81cc75f138e8df1ddf1ca5771deb5e5133d7a3855a1d2f950f0db4807\" returns successfully" Feb 13 20:26:12.733967 kubelet[2732]: I0213 20:26:12.733326 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fdd49f766-hwtb2" podStartSLOduration=32.739168785 podStartE2EDuration="39.733316167s" podCreationTimestamp="2025-02-13 20:25:33 +0000 UTC" firstStartedPulling="2025-02-13 20:26:04.542294154 +0000 UTC m=+41.854417218" lastFinishedPulling="2025-02-13 20:26:11.536441532 +0000 UTC m=+48.848564600" observedRunningTime="2025-02-13 20:26:12.733166256 +0000 UTC m=+50.045289320" watchObservedRunningTime="2025-02-13 20:26:12.733316167 +0000 UTC m=+50.045439240" Feb 13 20:26:13.733647 containerd[1529]: time="2025-02-13T20:26:13.732833965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:13.733647 containerd[1529]: time="2025-02-13T20:26:13.733246956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:26:13.733647 containerd[1529]: time="2025-02-13T20:26:13.733308409Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:13.739744 containerd[1529]: time="2025-02-13T20:26:13.739710737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:26:13.741670 containerd[1529]: time="2025-02-13T20:26:13.741428979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with 
image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.204759961s" Feb 13 20:26:13.741670 containerd[1529]: time="2025-02-13T20:26:13.741453199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:26:13.745376 containerd[1529]: time="2025-02-13T20:26:13.745047409Z" level=info msg="CreateContainer within sandbox \"c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:26:13.787516 containerd[1529]: time="2025-02-13T20:26:13.787479683Z" level=info msg="CreateContainer within sandbox \"c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cbcfdc1d7169da822c96a6e1b0fd529d842846cf35fa9854df813c0979d3bfaa\"" Feb 13 20:26:13.788824 containerd[1529]: time="2025-02-13T20:26:13.788749060Z" level=info msg="StartContainer for \"cbcfdc1d7169da822c96a6e1b0fd529d842846cf35fa9854df813c0979d3bfaa\"" Feb 13 20:26:13.845713 systemd[1]: Started cri-containerd-cbcfdc1d7169da822c96a6e1b0fd529d842846cf35fa9854df813c0979d3bfaa.scope - libcontainer container cbcfdc1d7169da822c96a6e1b0fd529d842846cf35fa9854df813c0979d3bfaa. 
Feb 13 20:26:13.867682 containerd[1529]: time="2025-02-13T20:26:13.867475670Z" level=info msg="StartContainer for \"cbcfdc1d7169da822c96a6e1b0fd529d842846cf35fa9854df813c0979d3bfaa\" returns successfully" Feb 13 20:26:14.608819 kubelet[2732]: I0213 20:26:14.607199 2732 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:26:14.623703 kubelet[2732]: I0213 20:26:14.623654 2732 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:26:22.857740 containerd[1529]: time="2025-02-13T20:26:22.857715675Z" level=info msg="StopPodSandbox for \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\"" Feb 13 20:26:23.088946 containerd[1529]: 2025-02-13 20:26:23.037 [WARNING][5151] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0", GenerateName:"calico-kube-controllers-67f9bb95f-", Namespace:"calico-system", SelfLink:"", UID:"348324a7-a185-4686-9c4e-5b8f58c85d3b", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f9bb95f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a", Pod:"calico-kube-controllers-67f9bb95f-nhppb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0ebde063fc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:23.088946 containerd[1529]: 2025-02-13 20:26:23.044 [INFO][5151] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Feb 13 20:26:23.088946 containerd[1529]: 2025-02-13 20:26:23.044 [INFO][5151] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" iface="eth0" netns="" Feb 13 20:26:23.088946 containerd[1529]: 2025-02-13 20:26:23.044 [INFO][5151] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Feb 13 20:26:23.088946 containerd[1529]: 2025-02-13 20:26:23.044 [INFO][5151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Feb 13 20:26:23.088946 containerd[1529]: 2025-02-13 20:26:23.079 [INFO][5157] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" HandleID="k8s-pod-network.c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Workload="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:23.088946 containerd[1529]: 2025-02-13 20:26:23.079 [INFO][5157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:23.088946 containerd[1529]: 2025-02-13 20:26:23.079 [INFO][5157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:23.088946 containerd[1529]: 2025-02-13 20:26:23.084 [WARNING][5157] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" HandleID="k8s-pod-network.c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Workload="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:23.088946 containerd[1529]: 2025-02-13 20:26:23.084 [INFO][5157] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" HandleID="k8s-pod-network.c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Workload="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:23.088946 containerd[1529]: 2025-02-13 20:26:23.085 [INFO][5157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:23.088946 containerd[1529]: 2025-02-13 20:26:23.087 [INFO][5151] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Feb 13 20:26:23.091410 containerd[1529]: time="2025-02-13T20:26:23.088978386Z" level=info msg="TearDown network for sandbox \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\" successfully" Feb 13 20:26:23.091410 containerd[1529]: time="2025-02-13T20:26:23.088994635Z" level=info msg="StopPodSandbox for \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\" returns successfully" Feb 13 20:26:23.114668 containerd[1529]: time="2025-02-13T20:26:23.114271984Z" level=info msg="RemovePodSandbox for \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\"" Feb 13 20:26:23.115602 containerd[1529]: time="2025-02-13T20:26:23.115584157Z" level=info msg="Forcibly stopping sandbox \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\"" Feb 13 20:26:23.176130 containerd[1529]: 2025-02-13 20:26:23.155 [WARNING][5176] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0", GenerateName:"calico-kube-controllers-67f9bb95f-", Namespace:"calico-system", SelfLink:"", UID:"348324a7-a185-4686-9c4e-5b8f58c85d3b", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f9bb95f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd034f36257b1b2171307161d901023d4631d0108370cba13adb256dd7d5ab8a", Pod:"calico-kube-controllers-67f9bb95f-nhppb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0ebde063fc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:23.176130 containerd[1529]: 2025-02-13 20:26:23.155 [INFO][5176] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Feb 13 20:26:23.176130 containerd[1529]: 2025-02-13 20:26:23.155 [INFO][5176] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" iface="eth0" netns="" Feb 13 20:26:23.176130 containerd[1529]: 2025-02-13 20:26:23.155 [INFO][5176] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Feb 13 20:26:23.176130 containerd[1529]: 2025-02-13 20:26:23.155 [INFO][5176] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Feb 13 20:26:23.176130 containerd[1529]: 2025-02-13 20:26:23.168 [INFO][5182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" HandleID="k8s-pod-network.c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Workload="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:23.176130 containerd[1529]: 2025-02-13 20:26:23.168 [INFO][5182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:23.176130 containerd[1529]: 2025-02-13 20:26:23.168 [INFO][5182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:23.176130 containerd[1529]: 2025-02-13 20:26:23.172 [WARNING][5182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" HandleID="k8s-pod-network.c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Workload="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:23.176130 containerd[1529]: 2025-02-13 20:26:23.172 [INFO][5182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" HandleID="k8s-pod-network.c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Workload="localhost-k8s-calico--kube--controllers--67f9bb95f--nhppb-eth0" Feb 13 20:26:23.176130 containerd[1529]: 2025-02-13 20:26:23.173 [INFO][5182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:23.176130 containerd[1529]: 2025-02-13 20:26:23.174 [INFO][5176] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5" Feb 13 20:26:23.178638 containerd[1529]: time="2025-02-13T20:26:23.176205554Z" level=info msg="TearDown network for sandbox \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\" successfully" Feb 13 20:26:23.190987 containerd[1529]: time="2025-02-13T20:26:23.190955470Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:26:23.196502 containerd[1529]: time="2025-02-13T20:26:23.196467288Z" level=info msg="RemovePodSandbox \"c5cb268a4b8fe2b4a470047ede498592c187b50b7d473577cd8dbcc7554721e5\" returns successfully" Feb 13 20:26:23.205421 containerd[1529]: time="2025-02-13T20:26:23.205401458Z" level=info msg="StopPodSandbox for \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\"" Feb 13 20:26:23.256264 containerd[1529]: 2025-02-13 20:26:23.230 [WARNING][5201] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cntw8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b708f66b-cea3-4952-b290-318b6ed2fb1d", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c", Pod:"csi-node-driver-cntw8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2528047d1ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:23.256264 containerd[1529]: 2025-02-13 20:26:23.230 [INFO][5201] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Feb 13 20:26:23.256264 containerd[1529]: 2025-02-13 20:26:23.230 [INFO][5201] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" iface="eth0" netns="" Feb 13 20:26:23.256264 containerd[1529]: 2025-02-13 20:26:23.230 [INFO][5201] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Feb 13 20:26:23.256264 containerd[1529]: 2025-02-13 20:26:23.230 [INFO][5201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Feb 13 20:26:23.256264 containerd[1529]: 2025-02-13 20:26:23.247 [INFO][5207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" HandleID="k8s-pod-network.7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Workload="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:23.256264 containerd[1529]: 2025-02-13 20:26:23.247 [INFO][5207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:23.256264 containerd[1529]: 2025-02-13 20:26:23.248 [INFO][5207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:23.256264 containerd[1529]: 2025-02-13 20:26:23.253 [WARNING][5207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" HandleID="k8s-pod-network.7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Workload="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:23.256264 containerd[1529]: 2025-02-13 20:26:23.253 [INFO][5207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" HandleID="k8s-pod-network.7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Workload="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:23.256264 containerd[1529]: 2025-02-13 20:26:23.254 [INFO][5207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:23.256264 containerd[1529]: 2025-02-13 20:26:23.255 [INFO][5201] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Feb 13 20:26:23.257361 containerd[1529]: time="2025-02-13T20:26:23.256298174Z" level=info msg="TearDown network for sandbox \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\" successfully" Feb 13 20:26:23.257361 containerd[1529]: time="2025-02-13T20:26:23.256314314Z" level=info msg="StopPodSandbox for \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\" returns successfully" Feb 13 20:26:23.257361 containerd[1529]: time="2025-02-13T20:26:23.256669220Z" level=info msg="RemovePodSandbox for \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\"" Feb 13 20:26:23.257361 containerd[1529]: time="2025-02-13T20:26:23.256685827Z" level=info msg="Forcibly stopping sandbox \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\"" Feb 13 20:26:23.303673 containerd[1529]: 2025-02-13 20:26:23.280 [WARNING][5225] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cntw8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b708f66b-cea3-4952-b290-318b6ed2fb1d", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5d92cf540fc5c2b01ca15021d01714eeff859d150280366faba6781ab90011c", Pod:"csi-node-driver-cntw8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2528047d1ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:23.303673 containerd[1529]: 2025-02-13 20:26:23.280 [INFO][5225] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Feb 13 20:26:23.303673 containerd[1529]: 2025-02-13 20:26:23.280 [INFO][5225] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" iface="eth0" netns="" Feb 13 20:26:23.303673 containerd[1529]: 2025-02-13 20:26:23.280 [INFO][5225] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Feb 13 20:26:23.303673 containerd[1529]: 2025-02-13 20:26:23.280 [INFO][5225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Feb 13 20:26:23.303673 containerd[1529]: 2025-02-13 20:26:23.297 [INFO][5231] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" HandleID="k8s-pod-network.7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Workload="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:23.303673 containerd[1529]: 2025-02-13 20:26:23.297 [INFO][5231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:23.303673 containerd[1529]: 2025-02-13 20:26:23.297 [INFO][5231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:23.303673 containerd[1529]: 2025-02-13 20:26:23.301 [WARNING][5231] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" HandleID="k8s-pod-network.7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Workload="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:23.303673 containerd[1529]: 2025-02-13 20:26:23.301 [INFO][5231] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" HandleID="k8s-pod-network.7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Workload="localhost-k8s-csi--node--driver--cntw8-eth0" Feb 13 20:26:23.303673 containerd[1529]: 2025-02-13 20:26:23.301 [INFO][5231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:23.303673 containerd[1529]: 2025-02-13 20:26:23.302 [INFO][5225] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86" Feb 13 20:26:23.304876 containerd[1529]: time="2025-02-13T20:26:23.303711220Z" level=info msg="TearDown network for sandbox \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\" successfully" Feb 13 20:26:23.305127 containerd[1529]: time="2025-02-13T20:26:23.305109288Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:26:23.305172 containerd[1529]: time="2025-02-13T20:26:23.305157839Z" level=info msg="RemovePodSandbox \"7af6168af5bd82afaae9658c55421a0aa86df9a9666205455a1c36e1f1637c86\" returns successfully" Feb 13 20:26:23.305596 containerd[1529]: time="2025-02-13T20:26:23.305527451Z" level=info msg="StopPodSandbox for \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\"" Feb 13 20:26:23.356870 containerd[1529]: 2025-02-13 20:26:23.334 [WARNING][5249] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0", GenerateName:"calico-apiserver-5fdd49f766-", Namespace:"calico-apiserver", SelfLink:"", UID:"f69e3431-9b6e-466d-b7b7-3df2f65cb067", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fdd49f766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68", Pod:"calico-apiserver-5fdd49f766-8k5nw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7725c4e6b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:23.356870 containerd[1529]: 2025-02-13 20:26:23.336 [INFO][5249] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Feb 13 20:26:23.356870 containerd[1529]: 2025-02-13 20:26:23.336 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" iface="eth0" netns="" Feb 13 20:26:23.356870 containerd[1529]: 2025-02-13 20:26:23.336 [INFO][5249] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Feb 13 20:26:23.356870 containerd[1529]: 2025-02-13 20:26:23.336 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Feb 13 20:26:23.356870 containerd[1529]: 2025-02-13 20:26:23.350 [INFO][5255] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" HandleID="k8s-pod-network.b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Workload="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:23.356870 containerd[1529]: 2025-02-13 20:26:23.350 [INFO][5255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:23.356870 containerd[1529]: 2025-02-13 20:26:23.350 [INFO][5255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:23.356870 containerd[1529]: 2025-02-13 20:26:23.354 [WARNING][5255] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" HandleID="k8s-pod-network.b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Workload="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:23.356870 containerd[1529]: 2025-02-13 20:26:23.354 [INFO][5255] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" HandleID="k8s-pod-network.b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Workload="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:23.356870 containerd[1529]: 2025-02-13 20:26:23.355 [INFO][5255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:23.356870 containerd[1529]: 2025-02-13 20:26:23.355 [INFO][5249] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Feb 13 20:26:23.358198 containerd[1529]: time="2025-02-13T20:26:23.356893098Z" level=info msg="TearDown network for sandbox \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\" successfully" Feb 13 20:26:23.358198 containerd[1529]: time="2025-02-13T20:26:23.356924470Z" level=info msg="StopPodSandbox for \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\" returns successfully" Feb 13 20:26:23.358198 containerd[1529]: time="2025-02-13T20:26:23.357201411Z" level=info msg="RemovePodSandbox for \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\"" Feb 13 20:26:23.358198 containerd[1529]: time="2025-02-13T20:26:23.357217638Z" level=info msg="Forcibly stopping sandbox \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\"" Feb 13 20:26:23.403519 containerd[1529]: 2025-02-13 20:26:23.377 [WARNING][5273] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0", GenerateName:"calico-apiserver-5fdd49f766-", Namespace:"calico-apiserver", SelfLink:"", UID:"f69e3431-9b6e-466d-b7b7-3df2f65cb067", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fdd49f766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0423d71bb8889940671afb76c48ea62280986ed6cdb7224bc90335e4c3c26a68", Pod:"calico-apiserver-5fdd49f766-8k5nw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7725c4e6b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:23.403519 containerd[1529]: 2025-02-13 20:26:23.377 [INFO][5273] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Feb 13 20:26:23.403519 containerd[1529]: 2025-02-13 20:26:23.377 [INFO][5273] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" iface="eth0" netns="" Feb 13 20:26:23.403519 containerd[1529]: 2025-02-13 20:26:23.377 [INFO][5273] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Feb 13 20:26:23.403519 containerd[1529]: 2025-02-13 20:26:23.377 [INFO][5273] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Feb 13 20:26:23.403519 containerd[1529]: 2025-02-13 20:26:23.396 [INFO][5279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" HandleID="k8s-pod-network.b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Workload="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:23.403519 containerd[1529]: 2025-02-13 20:26:23.396 [INFO][5279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:23.403519 containerd[1529]: 2025-02-13 20:26:23.396 [INFO][5279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:23.403519 containerd[1529]: 2025-02-13 20:26:23.400 [WARNING][5279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" HandleID="k8s-pod-network.b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Workload="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:23.403519 containerd[1529]: 2025-02-13 20:26:23.400 [INFO][5279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" HandleID="k8s-pod-network.b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Workload="localhost-k8s-calico--apiserver--5fdd49f766--8k5nw-eth0" Feb 13 20:26:23.403519 containerd[1529]: 2025-02-13 20:26:23.401 [INFO][5279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:23.403519 containerd[1529]: 2025-02-13 20:26:23.402 [INFO][5273] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5" Feb 13 20:26:23.405350 containerd[1529]: time="2025-02-13T20:26:23.403647472Z" level=info msg="TearDown network for sandbox \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\" successfully" Feb 13 20:26:23.405771 containerd[1529]: time="2025-02-13T20:26:23.405754981Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:26:23.405816 containerd[1529]: time="2025-02-13T20:26:23.405790080Z" level=info msg="RemovePodSandbox \"b1ad3efb057c66fb790757e8bc26fb85979da046f47a9e356574683b7139cfd5\" returns successfully" Feb 13 20:26:23.406113 containerd[1529]: time="2025-02-13T20:26:23.406093279Z" level=info msg="StopPodSandbox for \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\"" Feb 13 20:26:23.449767 containerd[1529]: 2025-02-13 20:26:23.429 [WARNING][5297] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f7d7c602-e34f-4d46-9a2c-8104daaf8055", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77", Pod:"coredns-6f6b679f8f-j7gxr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e871bb45ee", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:23.449767 containerd[1529]: 2025-02-13 20:26:23.429 [INFO][5297] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Feb 13 20:26:23.449767 containerd[1529]: 2025-02-13 20:26:23.429 [INFO][5297] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" iface="eth0" netns="" Feb 13 20:26:23.449767 containerd[1529]: 2025-02-13 20:26:23.429 [INFO][5297] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Feb 13 20:26:23.449767 containerd[1529]: 2025-02-13 20:26:23.429 [INFO][5297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Feb 13 20:26:23.449767 containerd[1529]: 2025-02-13 20:26:23.443 [INFO][5303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" HandleID="k8s-pod-network.a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Workload="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:23.449767 containerd[1529]: 2025-02-13 20:26:23.443 [INFO][5303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 20:26:23.449767 containerd[1529]: 2025-02-13 20:26:23.443 [INFO][5303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:23.449767 containerd[1529]: 2025-02-13 20:26:23.446 [WARNING][5303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" HandleID="k8s-pod-network.a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Workload="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:23.449767 containerd[1529]: 2025-02-13 20:26:23.447 [INFO][5303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" HandleID="k8s-pod-network.a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Workload="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:23.449767 containerd[1529]: 2025-02-13 20:26:23.447 [INFO][5303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:23.449767 containerd[1529]: 2025-02-13 20:26:23.448 [INFO][5297] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Feb 13 20:26:23.449767 containerd[1529]: time="2025-02-13T20:26:23.449715192Z" level=info msg="TearDown network for sandbox \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\" successfully" Feb 13 20:26:23.449767 containerd[1529]: time="2025-02-13T20:26:23.449731276Z" level=info msg="StopPodSandbox for \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\" returns successfully" Feb 13 20:26:23.450521 containerd[1529]: time="2025-02-13T20:26:23.450012069Z" level=info msg="RemovePodSandbox for \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\"" Feb 13 20:26:23.450521 containerd[1529]: time="2025-02-13T20:26:23.450026406Z" level=info msg="Forcibly stopping sandbox \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\"" Feb 13 20:26:23.488773 containerd[1529]: 2025-02-13 20:26:23.470 [WARNING][5321] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f7d7c602-e34f-4d46-9a2c-8104daaf8055", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"faa353deaab4584cdd8b8fc95384503e98bde08c307a747e7c21a279a5820e77", Pod:"coredns-6f6b679f8f-j7gxr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e871bb45ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:23.488773 containerd[1529]: 2025-02-13 20:26:23.470 [INFO][5321] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Feb 13 20:26:23.488773 containerd[1529]: 2025-02-13 20:26:23.470 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" iface="eth0" netns="" Feb 13 20:26:23.488773 containerd[1529]: 2025-02-13 20:26:23.470 [INFO][5321] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Feb 13 20:26:23.488773 containerd[1529]: 2025-02-13 20:26:23.470 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Feb 13 20:26:23.488773 containerd[1529]: 2025-02-13 20:26:23.482 [INFO][5327] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" HandleID="k8s-pod-network.a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Workload="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:23.488773 containerd[1529]: 2025-02-13 20:26:23.482 [INFO][5327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:23.488773 containerd[1529]: 2025-02-13 20:26:23.482 [INFO][5327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:23.488773 containerd[1529]: 2025-02-13 20:26:23.486 [WARNING][5327] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" HandleID="k8s-pod-network.a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Workload="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:23.488773 containerd[1529]: 2025-02-13 20:26:23.486 [INFO][5327] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" HandleID="k8s-pod-network.a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Workload="localhost-k8s-coredns--6f6b679f8f--j7gxr-eth0" Feb 13 20:26:23.488773 containerd[1529]: 2025-02-13 20:26:23.486 [INFO][5327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:23.488773 containerd[1529]: 2025-02-13 20:26:23.487 [INFO][5321] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492" Feb 13 20:26:23.488773 containerd[1529]: time="2025-02-13T20:26:23.488727022Z" level=info msg="TearDown network for sandbox \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\" successfully" Feb 13 20:26:23.493334 containerd[1529]: time="2025-02-13T20:26:23.493318010Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:26:23.493377 containerd[1529]: time="2025-02-13T20:26:23.493350838Z" level=info msg="RemovePodSandbox \"a416bc4a50d38775d1f2c4837660ee84b3c5eac1feae65bae2669f4c83096492\" returns successfully" Feb 13 20:26:23.493772 containerd[1529]: time="2025-02-13T20:26:23.493660687Z" level=info msg="StopPodSandbox for \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\"" Feb 13 20:26:23.538679 containerd[1529]: 2025-02-13 20:26:23.515 [WARNING][5345] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e18627c8-7f32-45f6-9afb-f589410799df", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1", Pod:"coredns-6f6b679f8f-tjrfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ffae4ad6fb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:23.538679 containerd[1529]: 2025-02-13 20:26:23.515 [INFO][5345] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Feb 13 20:26:23.538679 containerd[1529]: 2025-02-13 20:26:23.515 [INFO][5345] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" iface="eth0" netns="" Feb 13 20:26:23.538679 containerd[1529]: 2025-02-13 20:26:23.515 [INFO][5345] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Feb 13 20:26:23.538679 containerd[1529]: 2025-02-13 20:26:23.515 [INFO][5345] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Feb 13 20:26:23.538679 containerd[1529]: 2025-02-13 20:26:23.532 [INFO][5351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" HandleID="k8s-pod-network.5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Workload="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:23.538679 containerd[1529]: 2025-02-13 20:26:23.532 [INFO][5351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 20:26:23.538679 containerd[1529]: 2025-02-13 20:26:23.532 [INFO][5351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:23.538679 containerd[1529]: 2025-02-13 20:26:23.536 [WARNING][5351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" HandleID="k8s-pod-network.5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Workload="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:23.538679 containerd[1529]: 2025-02-13 20:26:23.536 [INFO][5351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" HandleID="k8s-pod-network.5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Workload="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:23.538679 containerd[1529]: 2025-02-13 20:26:23.536 [INFO][5351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:23.538679 containerd[1529]: 2025-02-13 20:26:23.537 [INFO][5345] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Feb 13 20:26:23.539206 containerd[1529]: time="2025-02-13T20:26:23.539075985Z" level=info msg="TearDown network for sandbox \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\" successfully" Feb 13 20:26:23.539206 containerd[1529]: time="2025-02-13T20:26:23.539092000Z" level=info msg="StopPodSandbox for \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\" returns successfully" Feb 13 20:26:23.539429 containerd[1529]: time="2025-02-13T20:26:23.539413020Z" level=info msg="RemovePodSandbox for \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\"" Feb 13 20:26:23.539464 containerd[1529]: time="2025-02-13T20:26:23.539433252Z" level=info msg="Forcibly stopping sandbox \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\"" Feb 13 20:26:23.584845 containerd[1529]: 2025-02-13 20:26:23.560 [WARNING][5369] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e18627c8-7f32-45f6-9afb-f589410799df", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a06703f656148ca3e2c1784bf46eeb278bac71e3addf0291a97ba74dd7c748d1", Pod:"coredns-6f6b679f8f-tjrfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ffae4ad6fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:23.584845 containerd[1529]: 2025-02-13 20:26:23.561 [INFO][5369] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Feb 13 20:26:23.584845 containerd[1529]: 2025-02-13 20:26:23.561 [INFO][5369] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" iface="eth0" netns="" Feb 13 20:26:23.584845 containerd[1529]: 2025-02-13 20:26:23.561 [INFO][5369] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Feb 13 20:26:23.584845 containerd[1529]: 2025-02-13 20:26:23.561 [INFO][5369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Feb 13 20:26:23.584845 containerd[1529]: 2025-02-13 20:26:23.575 [INFO][5375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" HandleID="k8s-pod-network.5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Workload="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:23.584845 containerd[1529]: 2025-02-13 20:26:23.575 [INFO][5375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:23.584845 containerd[1529]: 2025-02-13 20:26:23.575 [INFO][5375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:23.584845 containerd[1529]: 2025-02-13 20:26:23.579 [WARNING][5375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" HandleID="k8s-pod-network.5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Workload="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:23.584845 containerd[1529]: 2025-02-13 20:26:23.579 [INFO][5375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" HandleID="k8s-pod-network.5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Workload="localhost-k8s-coredns--6f6b679f8f--tjrfw-eth0" Feb 13 20:26:23.584845 containerd[1529]: 2025-02-13 20:26:23.581 [INFO][5375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:23.584845 containerd[1529]: 2025-02-13 20:26:23.582 [INFO][5369] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c" Feb 13 20:26:23.584845 containerd[1529]: time="2025-02-13T20:26:23.584009199Z" level=info msg="TearDown network for sandbox \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\" successfully" Feb 13 20:26:23.585683 containerd[1529]: time="2025-02-13T20:26:23.585669557Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:26:23.585754 containerd[1529]: time="2025-02-13T20:26:23.585744558Z" level=info msg="RemovePodSandbox \"5c95aa24f174479787bd04ecfa0daad33a65016c86d5b0d843e70e462aec1e7c\" returns successfully" Feb 13 20:26:23.586055 containerd[1529]: time="2025-02-13T20:26:23.586044063Z" level=info msg="StopPodSandbox for \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\"" Feb 13 20:26:23.646104 containerd[1529]: 2025-02-13 20:26:23.623 [WARNING][5393] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0", GenerateName:"calico-apiserver-5fdd49f766-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e92037f-96bf-409f-a057-890289c6e006", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fdd49f766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7", Pod:"calico-apiserver-5fdd49f766-hwtb2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11495d770d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:23.646104 containerd[1529]: 2025-02-13 20:26:23.624 [INFO][5393] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Feb 13 20:26:23.646104 containerd[1529]: 2025-02-13 20:26:23.624 [INFO][5393] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" iface="eth0" netns="" Feb 13 20:26:23.646104 containerd[1529]: 2025-02-13 20:26:23.624 [INFO][5393] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Feb 13 20:26:23.646104 containerd[1529]: 2025-02-13 20:26:23.624 [INFO][5393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Feb 13 20:26:23.646104 containerd[1529]: 2025-02-13 20:26:23.637 [INFO][5400] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" HandleID="k8s-pod-network.344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Workload="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" Feb 13 20:26:23.646104 containerd[1529]: 2025-02-13 20:26:23.637 [INFO][5400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:23.646104 containerd[1529]: 2025-02-13 20:26:23.638 [INFO][5400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:23.646104 containerd[1529]: 2025-02-13 20:26:23.642 [WARNING][5400] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" HandleID="k8s-pod-network.344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Workload="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" Feb 13 20:26:23.646104 containerd[1529]: 2025-02-13 20:26:23.642 [INFO][5400] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" HandleID="k8s-pod-network.344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Workload="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" Feb 13 20:26:23.646104 containerd[1529]: 2025-02-13 20:26:23.643 [INFO][5400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:23.646104 containerd[1529]: 2025-02-13 20:26:23.644 [INFO][5393] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Feb 13 20:26:23.647147 containerd[1529]: time="2025-02-13T20:26:23.646412922Z" level=info msg="TearDown network for sandbox \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\" successfully" Feb 13 20:26:23.647147 containerd[1529]: time="2025-02-13T20:26:23.646429221Z" level=info msg="StopPodSandbox for \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\" returns successfully" Feb 13 20:26:23.647147 containerd[1529]: time="2025-02-13T20:26:23.646819869Z" level=info msg="RemovePodSandbox for \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\"" Feb 13 20:26:23.647147 containerd[1529]: time="2025-02-13T20:26:23.646835653Z" level=info msg="Forcibly stopping sandbox \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\"" Feb 13 20:26:23.689831 containerd[1529]: 2025-02-13 20:26:23.669 [WARNING][5418] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0", GenerateName:"calico-apiserver-5fdd49f766-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e92037f-96bf-409f-a057-890289c6e006", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fdd49f766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83a2fc48e3398779a6fabc7a2b5d69b9a27c3985315550c893795f0e1ecdbaf7", Pod:"calico-apiserver-5fdd49f766-hwtb2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11495d770d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:26:23.689831 containerd[1529]: 2025-02-13 20:26:23.669 [INFO][5418] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Feb 13 20:26:23.689831 containerd[1529]: 2025-02-13 20:26:23.669 [INFO][5418] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" iface="eth0" netns="" Feb 13 20:26:23.689831 containerd[1529]: 2025-02-13 20:26:23.669 [INFO][5418] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Feb 13 20:26:23.689831 containerd[1529]: 2025-02-13 20:26:23.669 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Feb 13 20:26:23.689831 containerd[1529]: 2025-02-13 20:26:23.683 [INFO][5425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" HandleID="k8s-pod-network.344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Workload="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" Feb 13 20:26:23.689831 containerd[1529]: 2025-02-13 20:26:23.683 [INFO][5425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:26:23.689831 containerd[1529]: 2025-02-13 20:26:23.683 [INFO][5425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:26:23.689831 containerd[1529]: 2025-02-13 20:26:23.687 [WARNING][5425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" HandleID="k8s-pod-network.344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Workload="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" Feb 13 20:26:23.689831 containerd[1529]: 2025-02-13 20:26:23.687 [INFO][5425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" HandleID="k8s-pod-network.344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Workload="localhost-k8s-calico--apiserver--5fdd49f766--hwtb2-eth0" Feb 13 20:26:23.689831 containerd[1529]: 2025-02-13 20:26:23.687 [INFO][5425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:26:23.689831 containerd[1529]: 2025-02-13 20:26:23.688 [INFO][5418] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977" Feb 13 20:26:23.689831 containerd[1529]: time="2025-02-13T20:26:23.689814504Z" level=info msg="TearDown network for sandbox \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\" successfully" Feb 13 20:26:23.693174 containerd[1529]: time="2025-02-13T20:26:23.691647397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:26:23.693174 containerd[1529]: time="2025-02-13T20:26:23.691674856Z" level=info msg="RemovePodSandbox \"344bef8bb4fa24510ea1f21a658369d659d883abb3062fb53f57d21f4c614977\" returns successfully" Feb 13 20:26:45.923781 kubelet[2732]: I0213 20:26:45.923407 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:26:45.926151 kubelet[2732]: I0213 20:26:45.925787 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cntw8" podStartSLOduration=63.198672237 podStartE2EDuration="1m12.901681985s" podCreationTimestamp="2025-02-13 20:25:33 +0000 UTC" firstStartedPulling="2025-02-13 20:26:04.040282984 +0000 UTC m=+41.352406049" lastFinishedPulling="2025-02-13 20:26:13.743292733 +0000 UTC m=+51.055415797" observedRunningTime="2025-02-13 20:26:14.613009157 +0000 UTC m=+51.925132230" watchObservedRunningTime="2025-02-13 20:26:45.901681985 +0000 UTC m=+83.213805053" Feb 13 20:26:48.295739 systemd[1]: Started sshd@7-139.178.70.108:22-139.178.89.65:51332.service - OpenSSH per-connection server daemon (139.178.89.65:51332). Feb 13 20:26:48.393399 sshd[5512]: Accepted publickey for core from 139.178.89.65 port 51332 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:26:48.395853 sshd[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:48.399917 systemd-logind[1510]: New session 10 of user core. Feb 13 20:26:48.403730 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:26:48.911780 sshd[5512]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:48.914733 systemd[1]: sshd@7-139.178.70.108:22-139.178.89.65:51332.service: Deactivated successfully. Feb 13 20:26:48.919290 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:26:48.920022 systemd-logind[1510]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:26:48.920782 systemd-logind[1510]: Removed session 10. 
Feb 13 20:26:53.921939 systemd[1]: Started sshd@8-139.178.70.108:22-139.178.89.65:51334.service - OpenSSH per-connection server daemon (139.178.89.65:51334). Feb 13 20:26:53.985045 sshd[5527]: Accepted publickey for core from 139.178.89.65 port 51334 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:26:53.986371 sshd[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:53.989444 systemd-logind[1510]: New session 11 of user core. Feb 13 20:26:53.996941 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:26:54.105338 sshd[5527]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:54.107546 systemd[1]: sshd@8-139.178.70.108:22-139.178.89.65:51334.service: Deactivated successfully. Feb 13 20:26:54.108961 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:26:54.109454 systemd-logind[1510]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:26:54.110117 systemd-logind[1510]: Removed session 11. Feb 13 20:26:59.115541 systemd[1]: Started sshd@9-139.178.70.108:22-139.178.89.65:49336.service - OpenSSH per-connection server daemon (139.178.89.65:49336). Feb 13 20:26:59.284583 sshd[5542]: Accepted publickey for core from 139.178.89.65 port 49336 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:26:59.287163 sshd[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:59.291484 systemd-logind[1510]: New session 12 of user core. Feb 13 20:26:59.294748 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:26:59.406407 sshd[5542]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:59.424489 systemd[1]: sshd@9-139.178.70.108:22-139.178.89.65:49336.service: Deactivated successfully. Feb 13 20:26:59.425684 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:26:59.426513 systemd-logind[1510]: Session 12 logged out. Waiting for processes to exit. 
Feb 13 20:26:59.427036 systemd-logind[1510]: Removed session 12. Feb 13 20:27:04.416583 systemd[1]: Started sshd@10-139.178.70.108:22-139.178.89.65:49350.service - OpenSSH per-connection server daemon (139.178.89.65:49350). Feb 13 20:27:04.465114 sshd[5576]: Accepted publickey for core from 139.178.89.65 port 49350 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:27:04.466052 sshd[5576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:04.468759 systemd-logind[1510]: New session 13 of user core. Feb 13 20:27:04.475009 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:27:04.581291 sshd[5576]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:04.588932 systemd[1]: sshd@10-139.178.70.108:22-139.178.89.65:49350.service: Deactivated successfully. Feb 13 20:27:04.590574 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:27:04.592251 systemd-logind[1510]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:27:04.598874 systemd[1]: Started sshd@11-139.178.70.108:22-139.178.89.65:38202.service - OpenSSH per-connection server daemon (139.178.89.65:38202). Feb 13 20:27:04.600666 systemd-logind[1510]: Removed session 13. Feb 13 20:27:04.642023 sshd[5590]: Accepted publickey for core from 139.178.89.65 port 38202 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:27:04.643140 sshd[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:04.646322 systemd-logind[1510]: New session 14 of user core. Feb 13 20:27:04.653777 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:27:04.838950 sshd[5590]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:04.851068 systemd[1]: sshd@11-139.178.70.108:22-139.178.89.65:38202.service: Deactivated successfully. Feb 13 20:27:04.855671 systemd[1]: session-14.scope: Deactivated successfully. 
Feb 13 20:27:04.859599 systemd-logind[1510]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:27:04.870882 systemd[1]: Started sshd@12-139.178.70.108:22-139.178.89.65:38212.service - OpenSSH per-connection server daemon (139.178.89.65:38212). Feb 13 20:27:04.872443 systemd-logind[1510]: Removed session 14. Feb 13 20:27:04.933162 sshd[5601]: Accepted publickey for core from 139.178.89.65 port 38212 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:27:04.933221 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:04.941314 systemd-logind[1510]: New session 15 of user core. Feb 13 20:27:04.945804 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:27:05.122363 sshd[5601]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:05.124486 systemd[1]: sshd@12-139.178.70.108:22-139.178.89.65:38212.service: Deactivated successfully. Feb 13 20:27:05.125524 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:27:05.125923 systemd-logind[1510]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:27:05.126502 systemd-logind[1510]: Removed session 15. Feb 13 20:27:10.129333 systemd[1]: Started sshd@13-139.178.70.108:22-139.178.89.65:38224.service - OpenSSH per-connection server daemon (139.178.89.65:38224). Feb 13 20:27:10.311664 sshd[5619]: Accepted publickey for core from 139.178.89.65 port 38224 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:27:10.312660 sshd[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:10.315495 systemd-logind[1510]: New session 16 of user core. Feb 13 20:27:10.318785 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:27:10.417109 sshd[5619]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:10.419424 systemd[1]: sshd@13-139.178.70.108:22-139.178.89.65:38224.service: Deactivated successfully. 
Feb 13 20:27:10.420805 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:27:10.421374 systemd-logind[1510]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:27:10.422149 systemd-logind[1510]: Removed session 16. Feb 13 20:27:15.426973 systemd[1]: Started sshd@14-139.178.70.108:22-139.178.89.65:59752.service - OpenSSH per-connection server daemon (139.178.89.65:59752). Feb 13 20:27:15.520992 sshd[5654]: Accepted publickey for core from 139.178.89.65 port 59752 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:27:15.524707 sshd[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:15.528011 systemd-logind[1510]: New session 17 of user core. Feb 13 20:27:15.532725 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:27:15.645476 sshd[5654]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:15.651927 systemd[1]: sshd@14-139.178.70.108:22-139.178.89.65:59752.service: Deactivated successfully. Feb 13 20:27:15.652040 systemd-logind[1510]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:27:15.653262 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:27:15.655555 systemd-logind[1510]: Removed session 17. Feb 13 20:27:20.653317 systemd[1]: Started sshd@15-139.178.70.108:22-139.178.89.65:59756.service - OpenSSH per-connection server daemon (139.178.89.65:59756). Feb 13 20:27:20.683129 sshd[5679]: Accepted publickey for core from 139.178.89.65 port 59756 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:27:20.684036 sshd[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:20.686480 systemd-logind[1510]: New session 18 of user core. Feb 13 20:27:20.691707 systemd[1]: Started session-18.scope - Session 18 of User core. 
Feb 13 20:27:20.825773 sshd[5679]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:20.828021 systemd[1]: sshd@15-139.178.70.108:22-139.178.89.65:59756.service: Deactivated successfully. Feb 13 20:27:20.829282 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:27:20.829808 systemd-logind[1510]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:27:20.830341 systemd-logind[1510]: Removed session 18. Feb 13 20:27:25.833859 systemd[1]: Started sshd@16-139.178.70.108:22-139.178.89.65:53570.service - OpenSSH per-connection server daemon (139.178.89.65:53570). Feb 13 20:27:25.868175 sshd[5713]: Accepted publickey for core from 139.178.89.65 port 53570 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:27:25.869013 sshd[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:25.871640 systemd-logind[1510]: New session 19 of user core. Feb 13 20:27:25.879719 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:27:25.977347 sshd[5713]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:25.983047 systemd[1]: Started sshd@17-139.178.70.108:22-139.178.89.65:53584.service - OpenSSH per-connection server daemon (139.178.89.65:53584). Feb 13 20:27:25.984965 systemd[1]: sshd@16-139.178.70.108:22-139.178.89.65:53570.service: Deactivated successfully. Feb 13 20:27:25.985938 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:27:25.986688 systemd-logind[1510]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:27:25.987353 systemd-logind[1510]: Removed session 19. Feb 13 20:27:26.028685 sshd[5724]: Accepted publickey for core from 139.178.89.65 port 53584 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:27:26.029495 sshd[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:26.034427 systemd-logind[1510]: New session 20 of user core. 
Feb 13 20:27:26.038708 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:27:26.421022 sshd[5724]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:26.426060 systemd[1]: Started sshd@18-139.178.70.108:22-139.178.89.65:53600.service - OpenSSH per-connection server daemon (139.178.89.65:53600). Feb 13 20:27:26.430204 systemd[1]: sshd@17-139.178.70.108:22-139.178.89.65:53584.service: Deactivated successfully. Feb 13 20:27:26.432507 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:27:26.433865 systemd-logind[1510]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:27:26.435560 systemd-logind[1510]: Removed session 20. Feb 13 20:27:26.481265 sshd[5737]: Accepted publickey for core from 139.178.89.65 port 53600 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:27:26.482093 sshd[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:26.485316 systemd-logind[1510]: New session 21 of user core. Feb 13 20:27:26.492721 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:27:28.102414 systemd[1]: Started sshd@19-139.178.70.108:22-139.178.89.65:53602.service - OpenSSH per-connection server daemon (139.178.89.65:53602). Feb 13 20:27:28.105024 sshd[5737]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:28.113636 systemd[1]: sshd@18-139.178.70.108:22-139.178.89.65:53600.service: Deactivated successfully. Feb 13 20:27:28.115026 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:27:28.116241 systemd-logind[1510]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:27:28.117038 systemd-logind[1510]: Removed session 21. 
Feb 13 20:27:28.222458 sshd[5761]: Accepted publickey for core from 139.178.89.65 port 53602 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:27:28.224914 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:28.231331 systemd-logind[1510]: New session 22 of user core. Feb 13 20:27:28.236720 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:27:28.809075 sshd[5761]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:28.816467 systemd[1]: sshd@19-139.178.70.108:22-139.178.89.65:53602.service: Deactivated successfully. Feb 13 20:27:28.818593 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:27:28.819926 systemd-logind[1510]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:27:28.825886 systemd[1]: Started sshd@20-139.178.70.108:22-139.178.89.65:53616.service - OpenSSH per-connection server daemon (139.178.89.65:53616). Feb 13 20:27:28.828898 systemd-logind[1510]: Removed session 22. Feb 13 20:27:28.861422 sshd[5779]: Accepted publickey for core from 139.178.89.65 port 53616 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:27:28.862339 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:28.865582 systemd-logind[1510]: New session 23 of user core. Feb 13 20:27:28.869701 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:27:28.963277 sshd[5779]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:28.965503 systemd[1]: sshd@20-139.178.70.108:22-139.178.89.65:53616.service: Deactivated successfully. Feb 13 20:27:28.966767 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:27:28.967354 systemd-logind[1510]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:27:28.968303 systemd-logind[1510]: Removed session 23. 
Feb 13 20:27:33.971304 systemd[1]: Started sshd@21-139.178.70.108:22-139.178.89.65:53632.service - OpenSSH per-connection server daemon (139.178.89.65:53632). Feb 13 20:27:34.043650 sshd[5825]: Accepted publickey for core from 139.178.89.65 port 53632 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:27:34.045001 sshd[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:34.051653 systemd-logind[1510]: New session 24 of user core. Feb 13 20:27:34.057764 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:27:34.176964 sshd[5825]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:34.178562 systemd-logind[1510]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:27:34.178776 systemd[1]: sshd@21-139.178.70.108:22-139.178.89.65:53632.service: Deactivated successfully. Feb 13 20:27:34.179782 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:27:34.180737 systemd-logind[1510]: Removed session 24. Feb 13 20:27:39.184587 systemd[1]: Started sshd@22-139.178.70.108:22-139.178.89.65:47110.service - OpenSSH per-connection server daemon (139.178.89.65:47110). Feb 13 20:27:39.291886 sshd[5839]: Accepted publickey for core from 139.178.89.65 port 47110 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:27:39.293248 sshd[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:39.296185 systemd-logind[1510]: New session 25 of user core. Feb 13 20:27:39.303793 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:27:39.487982 sshd[5839]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:39.490710 systemd[1]: sshd@22-139.178.70.108:22-139.178.89.65:47110.service: Deactivated successfully. Feb 13 20:27:39.492109 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:27:39.492854 systemd-logind[1510]: Session 25 logged out. 
Waiting for processes to exit. Feb 13 20:27:39.493595 systemd-logind[1510]: Removed session 25. Feb 13 20:27:44.499932 systemd[1]: Started sshd@23-139.178.70.108:22-139.178.89.65:54760.service - OpenSSH per-connection server daemon (139.178.89.65:54760). Feb 13 20:27:44.784574 sshd[5875]: Accepted publickey for core from 139.178.89.65 port 54760 ssh2: RSA SHA256:9GiDz6PYf9BUbUtw2AYTxTxyzaL5fWlRGQKqretummk Feb 13 20:27:44.785523 sshd[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:27:44.788101 systemd-logind[1510]: New session 26 of user core. Feb 13 20:27:44.802942 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:27:45.297384 sshd[5875]: pam_unix(sshd:session): session closed for user core Feb 13 20:27:45.299627 systemd[1]: sshd@23-139.178.70.108:22-139.178.89.65:54760.service: Deactivated successfully. Feb 13 20:27:45.300900 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:27:45.301502 systemd-logind[1510]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:27:45.302074 systemd-logind[1510]: Removed session 26.