Jul 6 23:59:40.742789 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025 Jul 6 23:59:40.742806 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:59:40.742812 kernel: Disabled fast string operations Jul 6 23:59:40.742816 kernel: BIOS-provided physical RAM map: Jul 6 23:59:40.742820 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jul 6 23:59:40.742824 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jul 6 23:59:40.742830 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jul 6 23:59:40.742834 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jul 6 23:59:40.742838 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jul 6 23:59:40.742842 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jul 6 23:59:40.742847 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jul 6 23:59:40.742851 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jul 6 23:59:40.742855 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jul 6 23:59:40.742859 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 6 23:59:40.742865 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jul 6 23:59:40.742870 kernel: NX (Execute Disable) protection: active Jul 6 23:59:40.742874 kernel: APIC: Static calls initialized Jul 6 23:59:40.742879 kernel: SMBIOS 2.7 present. Jul 6 23:59:40.742884 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jul 6 23:59:40.742888 kernel: vmware: hypercall mode: 0x00 Jul 6 23:59:40.742893 kernel: Hypervisor detected: VMware Jul 6 23:59:40.742897 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jul 6 23:59:40.742903 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jul 6 23:59:40.742907 kernel: vmware: using clock offset of 2579820856 ns Jul 6 23:59:40.742912 kernel: tsc: Detected 3408.000 MHz processor Jul 6 23:59:40.742917 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 6 23:59:40.742922 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 6 23:59:40.742927 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jul 6 23:59:40.742932 kernel: total RAM covered: 3072M Jul 6 23:59:40.742936 kernel: Found optimal setting for mtrr clean up Jul 6 23:59:40.742943 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jul 6 23:59:40.742949 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jul 6 23:59:40.742954 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 6 23:59:40.742958 kernel: Using GB pages for direct mapping Jul 6 23:59:40.742963 kernel: ACPI: Early table checksum verification disabled Jul 6 23:59:40.742968 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jul 6 23:59:40.742972 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jul 6 23:59:40.742977 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jul 6 23:59:40.742982 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jul 6 23:59:40.742987 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 6 23:59:40.742994 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 6 23:59:40.742999 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jul 6 23:59:40.743005 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Jul 6 23:59:40.743010 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jul 6 23:59:40.743015 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jul 6 23:59:40.743021 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jul 6 23:59:40.743026 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jul 6 23:59:40.743031 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jul 6 23:59:40.743036 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jul 6 23:59:40.743041 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 6 23:59:40.743046 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 6 23:59:40.743051 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jul 6 23:59:40.743056 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jul 6 23:59:40.743061 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jul 6 23:59:40.743066 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jul 6 23:59:40.743072 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jul 6 23:59:40.743076 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jul 6 23:59:40.743081 kernel: system APIC only can use physical flat Jul 6 23:59:40.743086 kernel: APIC: Switched APIC routing to: physical flat Jul 6 23:59:40.743091 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 6 23:59:40.743096 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jul 6 23:59:40.743101 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jul 6 23:59:40.743106 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jul 6 23:59:40.743111 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jul 6 23:59:40.743117 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jul 6 23:59:40.743121 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jul 6 23:59:40.743126 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jul 6 23:59:40.743131 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jul 6 23:59:40.743136 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jul 6 23:59:40.743141 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jul 6 23:59:40.743146 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jul 6 23:59:40.743151 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jul 6 23:59:40.743155 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jul 6 23:59:40.743160 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jul 6 23:59:40.743166 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jul 6 23:59:40.743171 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jul 6 23:59:40.743176 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jul 6 23:59:40.743180 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jul 6 23:59:40.743185 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jul 6 23:59:40.743190 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jul 6 23:59:40.743195 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jul 6 23:59:40.743200 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jul 6 23:59:40.743204 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jul 6 23:59:40.743209 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jul 6 23:59:40.743215 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jul 6 23:59:40.743220 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jul 6 23:59:40.743225 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jul 6 23:59:40.743264 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jul 6 23:59:40.743271 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Jul 6 23:59:40.743276 kernel: SRAT: PXM 
0 -> APIC 0x3c -> Node 0 Jul 6 23:59:40.743280 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jul 6 23:59:40.743285 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jul 6 23:59:40.743290 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jul 6 23:59:40.743295 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jul 6 23:59:40.743302 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jul 6 23:59:40.743307 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jul 6 23:59:40.743312 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jul 6 23:59:40.743317 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jul 6 23:59:40.743322 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jul 6 23:59:40.743326 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jul 6 23:59:40.743331 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jul 6 23:59:40.743336 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jul 6 23:59:40.743341 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jul 6 23:59:40.743346 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jul 6 23:59:40.743352 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jul 6 23:59:40.743357 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jul 6 23:59:40.743362 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jul 6 23:59:40.743367 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jul 6 23:59:40.743371 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jul 6 23:59:40.743376 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jul 6 23:59:40.743381 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jul 6 23:59:40.743386 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jul 6 23:59:40.743390 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Jul 6 23:59:40.743395 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jul 6 23:59:40.743400 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jul 6 23:59:40.743406 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jul 6 23:59:40.743411 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jul 6 23:59:40.743416 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jul 6 23:59:40.743425 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jul 6 23:59:40.743430 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jul 6 23:59:40.743436 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jul 6 23:59:40.743441 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jul 6 23:59:40.743446 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jul 6 23:59:40.743452 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jul 6 23:59:40.743457 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jul 6 23:59:40.743462 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jul 6 23:59:40.743468 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jul 6 23:59:40.743473 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jul 6 23:59:40.743478 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jul 6 23:59:40.743483 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jul 6 23:59:40.743488 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jul 6 23:59:40.743493 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jul 6 23:59:40.743498 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jul 6 23:59:40.743505 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jul 6 23:59:40.743510 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jul 6 23:59:40.743515 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jul 6 23:59:40.743520 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jul 6 23:59:40.743526 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jul 6 23:59:40.743531 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jul 6 23:59:40.743536 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jul 6 23:59:40.743541 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jul 6 23:59:40.743546 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jul 6 23:59:40.743551 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Jul 6 23:59:40.743557 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jul 6 
23:59:40.743563 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jul 6 23:59:40.743568 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jul 6 23:59:40.743573 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jul 6 23:59:40.743578 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jul 6 23:59:40.743583 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jul 6 23:59:40.743588 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jul 6 23:59:40.743593 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jul 6 23:59:40.743598 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jul 6 23:59:40.743604 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jul 6 23:59:40.743610 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jul 6 23:59:40.743615 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jul 6 23:59:40.743620 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jul 6 23:59:40.743625 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jul 6 23:59:40.743630 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jul 6 23:59:40.743635 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jul 6 23:59:40.743641 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jul 6 23:59:40.743646 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jul 6 23:59:40.743651 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jul 6 23:59:40.743656 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jul 6 23:59:40.743662 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jul 6 23:59:40.743667 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jul 6 23:59:40.743672 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jul 6 23:59:40.743678 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Jul 6 23:59:40.743683 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jul 6 23:59:40.743688 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jul 6 23:59:40.743693 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jul 6 23:59:40.743698 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jul 6 23:59:40.743704 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jul 6 23:59:40.743709 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jul 6 23:59:40.743714 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jul 6 23:59:40.743720 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jul 6 23:59:40.743725 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jul 6 23:59:40.743730 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jul 6 23:59:40.743735 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jul 6 23:59:40.743741 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jul 6 23:59:40.743746 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jul 6 23:59:40.743751 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jul 6 23:59:40.743756 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jul 6 23:59:40.743761 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jul 6 23:59:40.743766 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jul 6 23:59:40.743773 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jul 6 23:59:40.743778 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jul 6 23:59:40.743783 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jul 6 23:59:40.743788 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 6 23:59:40.743794 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 6 23:59:40.743799 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jul 6 23:59:40.743805 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jul 6 23:59:40.743810 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jul 6 23:59:40.743816 kernel: Zone ranges: Jul 6 23:59:40.743822 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 6 23:59:40.743827 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jul 6 23:59:40.743833 kernel: Normal empty Jul 6 23:59:40.743838 kernel: Movable zone start 
for each node Jul 6 23:59:40.743843 kernel: Early memory node ranges Jul 6 23:59:40.743849 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jul 6 23:59:40.743854 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jul 6 23:59:40.743859 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jul 6 23:59:40.743865 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jul 6 23:59:40.743870 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 6 23:59:40.743876 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jul 6 23:59:40.743881 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jul 6 23:59:40.743887 kernel: ACPI: PM-Timer IO Port: 0x1008 Jul 6 23:59:40.743892 kernel: system APIC only can use physical flat Jul 6 23:59:40.743897 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jul 6 23:59:40.743903 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 6 23:59:40.743908 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 6 23:59:40.743913 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 6 23:59:40.743918 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 6 23:59:40.743924 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 6 23:59:40.743930 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jul 6 23:59:40.743935 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jul 6 23:59:40.743941 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 6 23:59:40.743946 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jul 6 23:59:40.743951 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 6 23:59:40.743956 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 6 23:59:40.743961 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 6 23:59:40.743967 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 6 23:59:40.743972 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 6 23:59:40.743978 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 6 23:59:40.743983 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 6 23:59:40.743988 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jul 6 23:59:40.743994 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jul 6 23:59:40.743999 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jul 6 23:59:40.744004 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jul 6 23:59:40.744009 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jul 6 23:59:40.744015 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jul 6 23:59:40.744020 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jul 6 23:59:40.744026 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jul 6 23:59:40.744031 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jul 6 23:59:40.744037 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jul 6 23:59:40.744042 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jul 6 23:59:40.744047 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jul 6 23:59:40.744052 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jul 6 23:59:40.744057 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jul 6 23:59:40.744062 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jul 6 23:59:40.744068 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jul 6 23:59:40.744073 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] 
high edge lint[0x1]) Jul 6 23:59:40.744079 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jul 6 23:59:40.744084 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jul 6 23:59:40.744090 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jul 6 23:59:40.744095 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jul 6 23:59:40.744100 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jul 6 23:59:40.744106 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jul 6 23:59:40.744111 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jul 6 23:59:40.744116 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jul 6 23:59:40.744121 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jul 6 23:59:40.744127 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jul 6 23:59:40.744132 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jul 6 23:59:40.744138 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jul 6 23:59:40.744143 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jul 6 23:59:40.744148 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jul 6 23:59:40.744153 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jul 6 23:59:40.744159 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jul 6 23:59:40.744164 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jul 6 23:59:40.744169 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jul 6 23:59:40.744174 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jul 6 23:59:40.744181 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jul 6 23:59:40.744186 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jul 6 23:59:40.744191 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jul 6 23:59:40.744196 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jul 6 23:59:40.744202 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jul 6 23:59:40.744207 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jul 6 23:59:40.744212 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jul 6 23:59:40.744217 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jul 6 23:59:40.744222 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jul 6 23:59:40.744235 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jul 6 23:59:40.744246 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jul 6 23:59:40.744252 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jul 6 23:59:40.744257 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jul 6 23:59:40.744262 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jul 6 23:59:40.744267 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jul 6 23:59:40.744273 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jul 6 23:59:40.744278 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jul 6 23:59:40.744283 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jul 6 23:59:40.744288 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jul 6 23:59:40.744295 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jul 6 23:59:40.744300 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jul 6 23:59:40.744305 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jul 6 23:59:40.744311 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jul 6 23:59:40.744316 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jul 6 
23:59:40.744321 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jul 6 23:59:40.744326 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jul 6 23:59:40.744331 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jul 6 23:59:40.744337 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jul 6 23:59:40.744343 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jul 6 23:59:40.744348 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jul 6 23:59:40.744353 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jul 6 23:59:40.744358 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jul 6 23:59:40.744363 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jul 6 23:59:40.744369 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jul 6 23:59:40.744374 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jul 6 23:59:40.744379 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jul 6 23:59:40.744384 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jul 6 23:59:40.744389 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jul 6 23:59:40.744396 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jul 6 23:59:40.744401 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jul 6 23:59:40.744406 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jul 6 23:59:40.744411 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jul 6 23:59:40.744416 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jul 6 23:59:40.744422 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jul 6 23:59:40.744427 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jul 6 23:59:40.744432 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jul 6 23:59:40.744437 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jul 6 23:59:40.744442 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jul 6 23:59:40.744449 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jul 6 23:59:40.744454 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jul 6 23:59:40.744459 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jul 6 23:59:40.744465 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jul 6 23:59:40.744470 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jul 6 23:59:40.744475 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jul 6 23:59:40.744480 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jul 6 23:59:40.744485 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jul 6 23:59:40.744490 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jul 6 23:59:40.744497 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jul 6 23:59:40.744502 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jul 6 23:59:40.744507 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jul 6 23:59:40.744512 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jul 6 23:59:40.744518 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jul 6 23:59:40.744523 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jul 6 23:59:40.744528 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jul 6 23:59:40.744533 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jul 6 23:59:40.744539 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jul 6 23:59:40.744544 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jul 6 23:59:40.744550 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jul 6 23:59:40.744555 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jul 6 23:59:40.744561 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jul 6 23:59:40.744566 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jul 6 23:59:40.744571 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jul 6 23:59:40.744576 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jul 6 23:59:40.744582 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jul 6 23:59:40.744587 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jul 6 23:59:40.744592 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jul 6 23:59:40.744598 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jul 6 23:59:40.744604 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 6 23:59:40.744609 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jul 6 23:59:40.744614 kernel: TSC deadline timer available Jul 6 23:59:40.744620 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jul 6 23:59:40.744625 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jul 6 23:59:40.744631 kernel: Booting paravirtualized kernel on VMware hypervisor Jul 6 23:59:40.744636 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 6 23:59:40.744642 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jul 6 23:59:40.744648 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Jul 6 23:59:40.744653 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Jul 6 23:59:40.744659 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jul 6 23:59:40.744664 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jul 6 23:59:40.744669 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jul 6 23:59:40.744674 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jul 6 23:59:40.744680 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jul 6 23:59:40.744691 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jul 6 23:59:40.744698 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jul 6 23:59:40.744705 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jul 6 23:59:40.744710 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jul 6 23:59:40.744716 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jul 6 23:59:40.744721 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jul 6 23:59:40.744727 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jul 6 23:59:40.744732 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jul 6 23:59:40.744738 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jul 6 23:59:40.744747 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jul 6 23:59:40.744752 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jul 6 23:59:40.744760 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:59:40.744766 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 6 23:59:40.744771 kernel: random: crng init done Jul 6 23:59:40.744777 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jul 6 23:59:40.744783 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jul 6 23:59:40.744788 kernel: printk: log_buf_len min size: 262144 bytes Jul 6 23:59:40.744794 kernel: printk: log_buf_len: 1048576 bytes Jul 6 23:59:40.744799 kernel: printk: early log buf free: 239648(91%) Jul 6 23:59:40.744806 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 6 23:59:40.744812 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 6 23:59:40.744817 kernel: Fallback order for Node 0: 0 Jul 6 23:59:40.744823 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jul 6 23:59:40.744829 kernel: Policy zone: DMA32 Jul 6 23:59:40.744835 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 6 23:59:40.744841 kernel: Memory: 1936372K/2096628K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 159996K reserved, 0K cma-reserved) Jul 6 23:59:40.744848 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jul 6 23:59:40.744854 kernel: ftrace: allocating 37966 entries in 149 pages Jul 6 23:59:40.744859 kernel: ftrace: allocated 149 pages with 4 groups Jul 6 23:59:40.744865 kernel: Dynamic Preempt: voluntary Jul 6 23:59:40.744871 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 6 23:59:40.744877 kernel: rcu: RCU event tracing is enabled. Jul 6 23:59:40.744882 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jul 6 23:59:40.744888 kernel: Trampoline variant of Tasks RCU enabled. Jul 6 23:59:40.744895 kernel: Rude variant of Tasks RCU enabled. Jul 6 23:59:40.744900 kernel: Tracing variant of Tasks RCU enabled. Jul 6 23:59:40.744906 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 6 23:59:40.744912 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jul 6 23:59:40.744917 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jul 6 23:59:40.744923 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jul 6 23:59:40.744929 kernel: Console: colour VGA+ 80x25 Jul 6 23:59:40.744934 kernel: printk: console [tty0] enabled Jul 6 23:59:40.744940 kernel: printk: console [ttyS0] enabled Jul 6 23:59:40.744947 kernel: ACPI: Core revision 20230628 Jul 6 23:59:40.744952 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jul 6 23:59:40.744958 kernel: APIC: Switch to symmetric I/O mode setup Jul 6 23:59:40.744964 kernel: x2apic enabled Jul 6 23:59:40.744970 kernel: APIC: Switched APIC routing to: physical x2apic Jul 6 23:59:40.744975 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 6 23:59:40.744981 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 6 23:59:40.744987 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jul 6 23:59:40.744993 kernel: Disabled fast string operations Jul 6 23:59:40.744999 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 6 23:59:40.745005 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 6 23:59:40.745011 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 6 23:59:40.745017 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 6 23:59:40.745022 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 6 23:59:40.745028 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 6 23:59:40.745034 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 6 23:59:40.745040 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 6 23:59:40.745045 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 6 23:59:40.745052 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 6 23:59:40.745058 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 6 23:59:40.745064 kernel: SRBDS: Unknown: Dependent on hypervisor status Jul 6 23:59:40.745069 kernel: GDS: Unknown: Dependent on hypervisor status Jul 6 23:59:40.745075 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 6 23:59:40.745080 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 6 23:59:40.745086 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 6 23:59:40.745092 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 6 23:59:40.745097 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 6 23:59:40.745104 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 6 23:59:40.745110 kernel: Freeing SMP alternatives memory: 32K Jul 6 23:59:40.745116 kernel: pid_max: default: 131072 minimum: 1024 Jul 6 23:59:40.745121 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 6 23:59:40.745127 kernel: landlock: Up and running. Jul 6 23:59:40.745133 kernel: SELinux: Initializing. Jul 6 23:59:40.745138 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 6 23:59:40.745144 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 6 23:59:40.745150 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 6 23:59:40.745157 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 6 23:59:40.745163 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 6 23:59:40.745168 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 6 23:59:40.745174 kernel: Performance Events: Skylake events, core PMU driver. 
Jul 6 23:59:40.745180 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jul 6 23:59:40.745185 kernel: core: CPUID marked event: 'instructions' unavailable Jul 6 23:59:40.745191 kernel: core: CPUID marked event: 'bus cycles' unavailable Jul 6 23:59:40.745196 kernel: core: CPUID marked event: 'cache references' unavailable Jul 6 23:59:40.745203 kernel: core: CPUID marked event: 'cache misses' unavailable Jul 6 23:59:40.745208 kernel: core: CPUID marked event: 'branch instructions' unavailable Jul 6 23:59:40.745214 kernel: core: CPUID marked event: 'branch misses' unavailable Jul 6 23:59:40.745219 kernel: ... version: 1 Jul 6 23:59:40.745225 kernel: ... bit width: 48 Jul 6 23:59:40.745244 kernel: ... generic registers: 4 Jul 6 23:59:40.745250 kernel: ... value mask: 0000ffffffffffff Jul 6 23:59:40.745256 kernel: ... max period: 000000007fffffff Jul 6 23:59:40.745262 kernel: ... fixed-purpose events: 0 Jul 6 23:59:40.745269 kernel: ... event mask: 000000000000000f Jul 6 23:59:40.745275 kernel: signal: max sigframe size: 1776 Jul 6 23:59:40.745281 kernel: rcu: Hierarchical SRCU implementation. Jul 6 23:59:40.745287 kernel: rcu: Max phase no-delay instances is 400. Jul 6 23:59:40.745292 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 6 23:59:40.745298 kernel: smp: Bringing up secondary CPUs ... Jul 6 23:59:40.745303 kernel: smpboot: x86: Booting SMP configuration: Jul 6 23:59:40.745309 kernel: .... node #0, CPUs: #1 Jul 6 23:59:40.745315 kernel: Disabled fast string operations Jul 6 23:59:40.745320 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jul 6 23:59:40.745327 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 6 23:59:40.745332 kernel: smp: Brought up 1 node, 2 CPUs Jul 6 23:59:40.745338 kernel: smpboot: Max logical packages: 128 Jul 6 23:59:40.745344 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 6 23:59:40.745350 kernel: devtmpfs: initialized Jul 6 23:59:40.745356 kernel: x86/mm: Memory block size: 128MB Jul 6 23:59:40.745361 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 6 23:59:40.745367 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 6 23:59:40.745373 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 6 23:59:40.745380 kernel: pinctrl core: initialized pinctrl subsystem Jul 6 23:59:40.745385 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 6 23:59:40.745391 kernel: audit: initializing netlink subsys (disabled) Jul 6 23:59:40.745397 kernel: audit: type=2000 audit(1751846379.086:1): state=initialized audit_enabled=0 res=1 Jul 6 23:59:40.745402 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 6 23:59:40.745408 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 6 23:59:40.745413 kernel: cpuidle: using governor menu Jul 6 23:59:40.745419 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 6 23:59:40.745425 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 6 23:59:40.745431 kernel: dca service started, version 1.12.1 Jul 6 23:59:40.745437 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jul 6 23:59:40.745443 kernel: PCI: Using configuration type 1 for base access Jul 6 23:59:40.745449 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 6 23:59:40.745455 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 6 23:59:40.745460 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 6 23:59:40.745472 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 6 23:59:40.745477 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 6 23:59:40.745483 kernel: ACPI: Added _OSI(Module Device) Jul 6 23:59:40.745490 kernel: ACPI: Added _OSI(Processor Device) Jul 6 23:59:40.745496 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 6 23:59:40.745501 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 6 23:59:40.745507 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 6 23:59:40.745513 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 6 23:59:40.745518 kernel: ACPI: Interpreter enabled Jul 6 23:59:40.745524 kernel: ACPI: PM: (supports S0 S1 S5) Jul 6 23:59:40.745529 kernel: ACPI: Using IOAPIC for interrupt routing Jul 6 23:59:40.745535 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 6 23:59:40.745542 kernel: PCI: Using E820 reservations for host bridge windows Jul 6 23:59:40.745548 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 6 23:59:40.745553 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 6 23:59:40.745634 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:59:40.745693 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 6 23:59:40.745747 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 6 23:59:40.745776 kernel: PCI host bridge to bus 0000:00 Jul 6 23:59:40.745847 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 6 23:59:40.745895 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jul 6 23:59:40.745939 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 6 23:59:40.745984 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 6 23:59:40.746028 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 6 23:59:40.746072 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 6 23:59:40.746129 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jul 6 23:59:40.746188 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jul 6 23:59:40.746254 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jul 6 23:59:40.746309 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jul 6 23:59:40.746359 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jul 6 23:59:40.746408 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 6 23:59:40.746458 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 6 23:59:40.746511 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 6 23:59:40.746561 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 6 23:59:40.746615 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jul 6 23:59:40.746665 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 6 23:59:40.746718 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 6 23:59:40.746773 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jul 6 23:59:40.746826 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jul 6 23:59:40.746876 kernel: pci 0000:00:07.7: reg 0x14: 
[mem 0xfebfe000-0xfebfffff 64bit] Jul 6 23:59:40.746929 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jul 6 23:59:40.746979 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jul 6 23:59:40.747041 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jul 6 23:59:40.747094 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jul 6 23:59:40.747142 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jul 6 23:59:40.747194 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 6 23:59:40.747263 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jul 6 23:59:40.747319 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.747369 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.747423 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.747474 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.747527 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.747581 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.747635 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.747685 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.747741 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.747828 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.747884 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.747938 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.747991 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.748041 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.748094 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.748144 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.748198 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.748593 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.748657 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.748710 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.748774 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.748827 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.748886 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.748936 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.748990 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749040 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749096 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749181 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749251 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749303 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749356 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749407 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749461 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749511 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749564 kernel: pci 0000:00:17.1: [15ad:07a0] 
type 01 class 0x060400 Jul 6 23:59:40.749618 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749671 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749722 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749776 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749826 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749881 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749935 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749988 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750040 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750096 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750147 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750201 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750270 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750327 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750378 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750432 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750483 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750537 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750591 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750645 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750696 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750751 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750802 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750855 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750906 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750964 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.751015 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.751070 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.751121 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.751173 kernel: pci_bus 0000:01: extended config space not accessible Jul 6 23:59:40.751226 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 6 23:59:40.751306 kernel: pci_bus 0000:02: extended config space not accessible Jul 6 23:59:40.751315 kernel: acpiphp: Slot [32] registered Jul 6 23:59:40.751321 kernel: acpiphp: Slot [33] registered Jul 6 23:59:40.751327 kernel: acpiphp: Slot [34] registered Jul 6 23:59:40.751332 kernel: acpiphp: Slot [35] registered Jul 6 23:59:40.751338 kernel: acpiphp: Slot [36] registered Jul 6 23:59:40.751344 kernel: acpiphp: Slot [37] registered Jul 6 23:59:40.751349 kernel: acpiphp: Slot [38] registered Jul 6 23:59:40.751355 kernel: acpiphp: Slot [39] registered Jul 6 23:59:40.751363 kernel: acpiphp: Slot [40] registered Jul 6 23:59:40.751368 kernel: acpiphp: Slot [41] registered Jul 6 23:59:40.751374 kernel: acpiphp: Slot [42] registered Jul 6 23:59:40.751380 kernel: acpiphp: Slot [43] registered Jul 6 23:59:40.751385 kernel: acpiphp: Slot [44] registered Jul 6 23:59:40.751391 kernel: acpiphp: Slot [45] registered Jul 6 23:59:40.751397 kernel: 
acpiphp: Slot [46] registered Jul 6 23:59:40.751402 kernel: acpiphp: Slot [47] registered Jul 6 23:59:40.751408 kernel: acpiphp: Slot [48] registered Jul 6 23:59:40.751414 kernel: acpiphp: Slot [49] registered Jul 6 23:59:40.751420 kernel: acpiphp: Slot [50] registered Jul 6 23:59:40.751426 kernel: acpiphp: Slot [51] registered Jul 6 23:59:40.751431 kernel: acpiphp: Slot [52] registered Jul 6 23:59:40.751437 kernel: acpiphp: Slot [53] registered Jul 6 23:59:40.751443 kernel: acpiphp: Slot [54] registered Jul 6 23:59:40.751448 kernel: acpiphp: Slot [55] registered Jul 6 23:59:40.751454 kernel: acpiphp: Slot [56] registered Jul 6 23:59:40.751459 kernel: acpiphp: Slot [57] registered Jul 6 23:59:40.751465 kernel: acpiphp: Slot [58] registered Jul 6 23:59:40.751472 kernel: acpiphp: Slot [59] registered Jul 6 23:59:40.751477 kernel: acpiphp: Slot [60] registered Jul 6 23:59:40.751483 kernel: acpiphp: Slot [61] registered Jul 6 23:59:40.751489 kernel: acpiphp: Slot [62] registered Jul 6 23:59:40.751494 kernel: acpiphp: Slot [63] registered Jul 6 23:59:40.751544 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 6 23:59:40.751596 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 6 23:59:40.753242 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 6 23:59:40.753311 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 6 23:59:40.753366 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 6 23:59:40.753418 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 6 23:59:40.753467 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 6 23:59:40.753517 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 6 23:59:40.753567 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 6 23:59:40.753624 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 6 23:59:40.753680 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 6 23:59:40.753732 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 6 23:59:40.753784 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 6 23:59:40.753835 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.753886 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 6 23:59:40.753938 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 6 23:59:40.753989 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 6 23:59:40.754039 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 6 23:59:40.754093 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 6 23:59:40.754144 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 6 23:59:40.754195 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 6 23:59:40.754263 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 6 23:59:40.754317 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 6 23:59:40.754367 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 6 23:59:40.754417 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 6 23:59:40.754471 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 6 23:59:40.754522 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 6 23:59:40.754572 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 6 23:59:40.754622 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 6 23:59:40.754673 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 6 23:59:40.754723 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 6 23:59:40.754778 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 6 23:59:40.754831 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 6 23:59:40.754881 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 6 23:59:40.754931 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 6 23:59:40.754982 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 6 23:59:40.755033 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 6 23:59:40.755084 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 6 23:59:40.755137 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 6 23:59:40.755188 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 6 23:59:40.755251 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 6 23:59:40.755310 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 6 23:59:40.755364 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 6 23:59:40.755415 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 6 23:59:40.755466 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 6 23:59:40.755520 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 6 23:59:40.755572 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 6 23:59:40.755623 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 6 23:59:40.755674 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 6 23:59:40.755725 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 6 23:59:40.755815 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 6 23:59:40.755865 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 6 23:59:40.755915 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 6 23:59:40.755969 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 6 23:59:40.756019 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 6 23:59:40.756068 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 6 23:59:40.756118 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 6 23:59:40.756169 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 6 23:59:40.756220 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 6 23:59:40.756907 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 6 23:59:40.756965 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 6 23:59:40.757023 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 6 23:59:40.757074 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 6 23:59:40.757125 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 6 23:59:40.757177 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 6 23:59:40.757227 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 6 23:59:40.757294 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 6 23:59:40.757345 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 6 23:59:40.757400 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 6 23:59:40.757451 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 6 23:59:40.757502 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 6 23:59:40.757553 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 6 23:59:40.757603 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 6 23:59:40.757654 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 6 23:59:40.757704 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 6 23:59:40.757754 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 6 23:59:40.757808 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 6 23:59:40.757858 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 6 23:59:40.757908 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 6 23:59:40.757957 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 6 23:59:40.758009 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 6 23:59:40.758060 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 6 23:59:40.758110 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 6 23:59:40.758160 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 6 23:59:40.758214 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 6 23:59:40.758280 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 6 23:59:40.758330 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 6 23:59:40.758380 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 6 23:59:40.758431 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 6 23:59:40.758481 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 6 23:59:40.758531 kernel: pci 0000:00:17.3: bridge window [mem 
0xe6e00000-0xe6efffff 64bit pref] Jul 6 23:59:40.758585 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 6 23:59:40.758635 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 6 23:59:40.758685 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 6 23:59:40.758736 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 6 23:59:40.758791 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 6 23:59:40.758841 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 6 23:59:40.758892 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 6 23:59:40.758942 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 6 23:59:40.758992 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 6 23:59:40.759046 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 6 23:59:40.759097 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 6 23:59:40.759147 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 6 23:59:40.759198 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 6 23:59:40.759373 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 6 23:59:40.759426 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 6 23:59:40.759477 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 6 23:59:40.759531 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 6 23:59:40.759581 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 6 23:59:40.759632 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 6 23:59:40.759724 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 6 23:59:40.759783 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 6 23:59:40.759833 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 6 23:59:40.759883 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 6 23:59:40.759934 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 6 23:59:40.759987 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 6 23:59:40.760037 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 6 23:59:40.760120 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 6 23:59:40.760174 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 6 23:59:40.760224 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 6 23:59:40.760285 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 6 23:59:40.760336 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 6 23:59:40.760387 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 6 23:59:40.760441 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 6 23:59:40.760491 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 6 23:59:40.760541 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 6 23:59:40.760593 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 6 23:59:40.760643 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 6 23:59:40.760693 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 6 23:59:40.760702 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 6 23:59:40.760708 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 6 23:59:40.760716 kernel: ACPI: PCI: Interrupt 
link LNKB disabled Jul 6 23:59:40.760722 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 6 23:59:40.760727 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 6 23:59:40.760733 kernel: iommu: Default domain type: Translated Jul 6 23:59:40.760739 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 6 23:59:40.760745 kernel: PCI: Using ACPI for IRQ routing Jul 6 23:59:40.760751 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 6 23:59:40.760757 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 6 23:59:40.760762 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 6 23:59:40.760811 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 6 23:59:40.760864 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 6 23:59:40.760975 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 6 23:59:40.760985 kernel: vgaarb: loaded Jul 6 23:59:40.760992 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 6 23:59:40.760998 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 6 23:59:40.761003 kernel: clocksource: Switched to clocksource tsc-early Jul 6 23:59:40.761009 kernel: VFS: Disk quotas dquot_6.6.0 Jul 6 23:59:40.761015 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 6 23:59:40.761023 kernel: pnp: PnP ACPI init Jul 6 23:59:40.761086 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 6 23:59:40.761136 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 6 23:59:40.761182 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 6 23:59:40.761258 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 6 23:59:40.761315 kernel: pnp 00:06: [dma 2] Jul 6 23:59:40.763319 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 6 23:59:40.765304 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 6 23:59:40.765357 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 6 23:59:40.765366 kernel: pnp: PnP ACPI: found 8 devices Jul 6 23:59:40.765373 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 6 23:59:40.765379 kernel: NET: Registered PF_INET protocol family Jul 6 23:59:40.765385 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 6 23:59:40.765391 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 6 23:59:40.765397 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 6 23:59:40.765405 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 6 23:59:40.765411 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 6 23:59:40.765417 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 6 23:59:40.765423 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 6 23:59:40.765428 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 6 23:59:40.765434 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 6 23:59:40.765440 kernel: NET: Registered PF_XDP protocol family Jul 6 23:59:40.765493 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 6 23:59:40.765550 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 6 23:59:40.765602 kernel: pci 
0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 6 23:59:40.765655 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 6 23:59:40.765706 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 6 23:59:40.765771 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 6 23:59:40.765864 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 6 23:59:40.765918 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 6 23:59:40.765969 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 6 23:59:40.766020 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 6 23:59:40.766070 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 6 23:59:40.766120 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 6 23:59:40.766171 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 6 23:59:40.766224 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 6 23:59:40.766284 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 6 23:59:40.766335 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 6 23:59:40.766387 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 6 23:59:40.766438 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 6 23:59:40.766490 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 6 23:59:40.766544 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 6 23:59:40.766595 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 6 23:59:40.766646 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 6 23:59:40.766697 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 6 23:59:40.766748 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 6 23:59:40.766799 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 6 23:59:40.766853 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.766903 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.766954 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.767004 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.767055 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.767105 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.767155 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.767206 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.769691 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.769749 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.769803 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.769855 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.769906 kernel: pci 
0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.769957 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770007 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770057 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770112 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770162 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770214 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770280 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770333 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770384 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770434 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770485 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770539 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770590 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770641 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770692 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770742 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770793 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770843 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770893 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770946 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770996 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.771046 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.771096 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.771146 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.771196 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.771534 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.771592 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.771649 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.771701 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.771755 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.771807 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.771858 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.771909 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.771960 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.772011 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.772076 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.772154 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.772284 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.772348 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.772424 kernel: pci 0000:00:18.2: BAR 13: no space for 
[io size 0x1000] Jul 6 23:59:40.772482 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.772545 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.772605 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.772659 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.772718 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.772788 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.772854 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.772908 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.772969 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773030 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773090 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773143 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773203 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773291 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773344 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773397 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773448 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773498 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773548 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773599 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773651 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773701 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773752 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773803 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773854 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773907 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773957 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.774009 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.774059 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.774110 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.774160 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.774213 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 6 23:59:40.774314 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 6 23:59:40.774366 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 6 23:59:40.774419 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 6 23:59:40.774469 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 6 23:59:40.774522 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 6 23:59:40.774573 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 6 23:59:40.774626 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 6 23:59:40.774686 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 6 23:59:40.774751 kernel: pci 0000:00:15.0: bridge 
window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 6 23:59:40.774815 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 6 23:59:40.774872 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 6 23:59:40.774933 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 6 23:59:40.774995 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 6 23:59:40.775056 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 6 23:59:40.775110 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 6 23:59:40.775171 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 6 23:59:40.775245 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 6 23:59:40.775308 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 6 23:59:40.775371 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 6 23:59:40.775434 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 6 23:59:40.775485 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 6 23:59:40.775536 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 6 23:59:40.775586 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 6 23:59:40.775640 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 6 23:59:40.775691 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 6 23:59:40.775749 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 6 23:59:40.775804 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 6 23:59:40.775855 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 6 23:59:40.775905 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 6 23:59:40.775955 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 6 23:59:40.776005 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 6 23:59:40.776056 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 6 23:59:40.776110 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 6 23:59:40.776161 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 6 23:59:40.776212 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 6 23:59:40.776312 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 6 23:59:40.776364 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 6 23:59:40.776417 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 6 23:59:40.776468 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 6 23:59:40.776518 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 6 23:59:40.776569 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 6 23:59:40.776619 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 6 23:59:40.776669 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 6 23:59:40.776719 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 6 23:59:40.776772 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 6 23:59:40.776822 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 6 23:59:40.776873 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 6 23:59:40.776925 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 6 23:59:40.776976 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 6 23:59:40.777026 kernel: pci 0000:00:16.4: 
bridge window [mem 0xfc400000-0xfc4fffff] Jul 6 23:59:40.777077 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 6 23:59:40.777127 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 6 23:59:40.777177 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 6 23:59:40.777237 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 6 23:59:40.777301 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 6 23:59:40.777353 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 6 23:59:40.777403 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 6 23:59:40.777453 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 6 23:59:40.777503 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 6 23:59:40.777554 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 6 23:59:40.777605 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 6 23:59:40.777656 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 6 23:59:40.777707 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 6 23:59:40.777760 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 6 23:59:40.777856 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 6 23:59:40.777981 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 6 23:59:40.778034 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 6 23:59:40.778085 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 6 23:59:40.778137 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 6 23:59:40.778187 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 6 23:59:40.780256 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 6 23:59:40.780320 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 6 23:59:40.780379 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 6 23:59:40.780432 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 6 23:59:40.780483 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 6 23:59:40.780533 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 6 23:59:40.780583 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 6 23:59:40.780634 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 6 23:59:40.780685 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 6 23:59:40.780735 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 6 23:59:40.780786 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 6 23:59:40.780837 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 6 23:59:40.780890 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 6 23:59:40.780941 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 6 23:59:40.780993 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 6 23:59:40.781044 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 6 23:59:40.781094 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 6 23:59:40.781146 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 6 23:59:40.781196 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 6 23:59:40.781265 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 6 23:59:40.781318 kernel: pci 
0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 6 23:59:40.781373 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 6 23:59:40.781424 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 6 23:59:40.781476 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 6 23:59:40.781527 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 6 23:59:40.781578 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 6 23:59:40.781711 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 6 23:59:40.781961 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 6 23:59:40.782017 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 6 23:59:40.782092 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 6 23:59:40.782146 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 6 23:59:40.782200 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 6 23:59:40.782259 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 6 23:59:40.782311 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 6 23:59:40.782361 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 6 23:59:40.782413 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 6 23:59:40.782463 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 6 23:59:40.782514 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 6 23:59:40.782566 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 6 23:59:40.782616 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 6 23:59:40.782670 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 6 23:59:40.782721 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 6 23:59:40.782772 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 6 23:59:40.782822 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 6 23:59:40.782869 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 6 23:59:40.782915 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 6 23:59:40.782959 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 6 23:59:40.783003 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 6 23:59:40.783053 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 6 23:59:40.783103 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 6 23:59:40.783150 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 6 23:59:40.783197 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 6 23:59:40.783251 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 6 23:59:40.783299 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 6 23:59:40.783345 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 6 23:59:40.783391 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 6 23:59:40.783446 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 6 23:59:40.783493 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 6 23:59:40.783539 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 6 23:59:40.784000 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 6 23:59:40.784054 kernel: pci_bus 0000:04: resource 1 [mem 
0xfd100000-0xfd1fffff] Jul 6 23:59:40.784102 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 6 23:59:40.784154 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 6 23:59:40.784204 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 6 23:59:40.784391 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 6 23:59:40.784445 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 6 23:59:40.784492 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 6 23:59:40.784544 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 6 23:59:40.784591 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 6 23:59:40.784643 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 6 23:59:40.784734 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 6 23:59:40.784818 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 6 23:59:40.784885 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 6 23:59:40.784937 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 6 23:59:40.784984 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 6 23:59:40.785048 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 6 23:59:40.785095 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 6 23:59:40.785142 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 6 23:59:40.785191 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 6 23:59:40.785332 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 6 23:59:40.785384 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 6 23:59:40.785439 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 6 23:59:40.785487 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 6 23:59:40.785536 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 6 23:59:40.785586 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 6 23:59:40.785632 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 6 23:59:40.785684 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 6 23:59:40.785731 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 6 23:59:40.785789 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 6 23:59:40.785836 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 6 23:59:40.785885 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 6 23:59:40.785932 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 6 23:59:40.785981 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 6 23:59:40.786028 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 6 23:59:40.786080 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 6 23:59:40.786127 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 6 23:59:40.786173 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 6 23:59:40.786223 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 6 23:59:40.786291 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 6 23:59:40.786338 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 6 23:59:40.786388 kernel: pci_bus 
0000:15: resource 0 [io 0xe000-0xefff] Jul 6 23:59:40.786438 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 6 23:59:40.786484 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 6 23:59:40.786534 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 6 23:59:40.786580 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 6 23:59:40.786631 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 6 23:59:40.786678 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 6 23:59:40.786733 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 6 23:59:40.786780 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 6 23:59:40.786831 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 6 23:59:40.786879 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 6 23:59:40.786931 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 6 23:59:40.787070 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 6 23:59:40.787127 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 6 23:59:40.787175 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 6 23:59:40.787225 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 6 23:59:40.787305 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 6 23:59:40.787352 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 6 23:59:40.787399 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 6 23:59:40.787453 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 6 23:59:40.787500 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 6 23:59:40.787550 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 6 23:59:40.787597 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 6 23:59:40.787648 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 6 23:59:40.787696 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 6 23:59:40.787749 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 6 23:59:40.787800 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 6 23:59:40.787850 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 6 23:59:40.787899 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 6 23:59:40.787949 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 6 23:59:40.787997 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 6 23:59:40.788052 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 6 23:59:40.788064 kernel: PCI: CLS 32 bytes, default 64 Jul 6 23:59:40.788071 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 6 23:59:40.788078 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 6 23:59:40.788085 kernel: clocksource: Switched to clocksource tsc Jul 6 23:59:40.788091 kernel: Initialise system trusted keyrings Jul 6 23:59:40.788097 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 6 23:59:40.788103 kernel: Key type asymmetric registered Jul 6 23:59:40.788109 kernel: Asymmetric key parser 'x509' registered Jul 6 23:59:40.788116 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) Jul 6 23:59:40.788123 kernel: io scheduler mq-deadline registered Jul 6 23:59:40.788128 kernel: io scheduler kyber registered Jul 6 23:59:40.788135 kernel: io scheduler bfq registered Jul 6 23:59:40.788188 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 6 23:59:40.788271 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.788325 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 6 23:59:40.788377 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.788428 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 6 23:59:40.788483 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.788534 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 6 23:59:40.788585 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.788637 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 6 23:59:40.788687 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.788738 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 6 23:59:40.788797 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.788849 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 6 23:59:40.788900 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.788952 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 6 23:59:40.789003 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789057 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 6 23:59:40.789108 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789160 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 6 23:59:40.789211 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789275 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 6 23:59:40.789327 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789379 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 6 23:59:40.789435 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789487 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 6 23:59:40.789539 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789590 kernel: pcieport 0000:00:16.5: PME: Signaling 
with IRQ 37 Jul 6 23:59:40.789642 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789696 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 6 23:59:40.789748 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789799 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 6 23:59:40.789851 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789903 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 6 23:59:40.789954 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.790009 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 6 23:59:40.790060 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.790111 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 6 23:59:40.790162 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.790213 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 6 23:59:40.791810 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.791876 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 6 23:59:40.791931 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.791985 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 6 23:59:40.792037 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792090 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 6 23:59:40.792142 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792197 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 6 23:59:40.792300 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792354 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 6 23:59:40.792406 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792457 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 6 23:59:40.792509 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792564 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 6 23:59:40.792615 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792667 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 6 23:59:40.792718 
kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792769 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 6 23:59:40.792824 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792875 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 6 23:59:40.792926 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792977 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 6 23:59:40.793028 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.793079 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 6 23:59:40.793134 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.793143 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 6 23:59:40.793150 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:59:40.793156 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 6 23:59:40.793162 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 6 23:59:40.793168 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 6 23:59:40.793175 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 6 23:59:40.793238 kernel: rtc_cmos 00:01: registered as rtc0 Jul 6 23:59:40.793295 kernel: rtc_cmos 00:01: setting system clock to 2025-07-06T23:59:40 UTC (1751846380) Jul 6 23:59:40.793346 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 6 23:59:40.793355 kernel: intel_pstate: CPU model not supported Jul 6 23:59:40.793362 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 6 23:59:40.793368 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:59:40.793374 kernel: Segment Routing with IPv6 Jul 6 23:59:40.793380 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:59:40.793389 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:59:40.793395 kernel: Key type dns_resolver registered Jul 6 23:59:40.793401 kernel: IPI shorthand broadcast: enabled Jul 6 23:59:40.793408 kernel: sched_clock: Marking stable (895003788, 220701472)->(1175741609, -60036349) Jul 6 23:59:40.793414 kernel: registered taskstats version 1 Jul 6 23:59:40.793420 kernel: Loading compiled-in X.509 certificates Jul 6 23:59:40.793426 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 6 23:59:40.793432 kernel: Key type .fscrypt registered Jul 6 23:59:40.793438 kernel: Key type fscrypt-provisioning registered Jul 6 23:59:40.793446 kernel: ima: No TPM chip found, activating TPM-bypass! 
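The rtc_cmos message above pairs the wall-clock time 2025-07-06T23:59:40 UTC with the Unix timestamp 1751846380. A quick worked check in Python (not part of the boot log, just arithmetic on the two values shown) confirms they agree:

    from datetime import datetime, timezone

    # Timestamp taken from the "rtc_cmos 00:01: setting system clock" message above.
    ts = 1751846380
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
    # -> 2025-07-06T23:59:40+00:00, matching the logged RTC time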
Jul 6 23:59:40.793452 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:59:40.793458 kernel: ima: No architecture policies found Jul 6 23:59:40.793464 kernel: clk: Disabling unused clocks Jul 6 23:59:40.793470 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 6 23:59:40.793476 kernel: Write protecting the kernel read-only data: 36864k Jul 6 23:59:40.793482 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 6 23:59:40.793489 kernel: Run /init as init process Jul 6 23:59:40.793495 kernel: with arguments: Jul 6 23:59:40.793503 kernel: /init Jul 6 23:59:40.793509 kernel: with environment: Jul 6 23:59:40.793515 kernel: HOME=/ Jul 6 23:59:40.793521 kernel: TERM=linux Jul 6 23:59:40.793527 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:59:40.793534 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:59:40.793542 systemd[1]: Detected virtualization vmware. Jul 6 23:59:40.793548 systemd[1]: Detected architecture x86-64. Jul 6 23:59:40.793556 systemd[1]: Running in initrd. Jul 6 23:59:40.793562 systemd[1]: No hostname configured, using default hostname. Jul 6 23:59:40.793568 systemd[1]: Hostname set to . Jul 6 23:59:40.793574 systemd[1]: Initializing machine ID from random generator. Jul 6 23:59:40.793580 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:59:40.793587 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:59:40.793593 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:59:40.793599 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:59:40.793607 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:59:40.793613 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:59:40.793619 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:59:40.793627 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:59:40.793633 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:59:40.793640 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:59:40.793646 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:59:40.793653 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:59:40.793660 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:59:40.793666 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:59:40.793673 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:59:40.793679 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:59:40.793685 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:59:40.793691 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:59:40.793698 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
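The device unit names above (for example dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device) come from systemd's path escaping, in which '/' separators become '-' and other special characters such as '-' are encoded as \xNN. The sketch below is a simplified illustration of that rule for plain ASCII paths, not systemd's actual implementation (which handles further corner cases such as leading dots and empty paths; see systemd-escape --path):

    # Minimal sketch of systemd-style path escaping (assumption: ASCII paths,
    # no leading-dot components).
    def escape_path(path: str) -> str:
        parts = [p for p in path.split("/") if p]
        def esc(component: str) -> str:
            out = []
            for ch in component:
                if ch.isalnum() or ch in ":_.":
                    out.append(ch)
                else:
                    out.append("\\x%02x" % ord(ch))   # '-' becomes \x2d
            return "".join(out)
        return "-".join(esc(p) for p in parts)

    print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device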
Jul 6 23:59:40.793706 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:59:40.793712 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:59:40.793718 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:59:40.793725 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:59:40.793731 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:59:40.793737 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:59:40.793745 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:59:40.793751 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:59:40.793757 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:59:40.793765 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:59:40.793771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:59:40.793789 systemd-journald[215]: Collecting audit messages is disabled. Jul 6 23:59:40.793805 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:59:40.793814 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:59:40.793820 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:59:40.793827 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:59:40.793833 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:59:40.793841 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:59:40.793848 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:59:40.793854 kernel: Bridge firewalling registered Jul 6 23:59:40.793860 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:59:40.793867 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:59:40.793873 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:59:40.793879 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:59:40.793886 systemd-journald[215]: Journal started Jul 6 23:59:40.793901 systemd-journald[215]: Runtime Journal (/run/log/journal/81d115fa061e48cc8370e396b0420ea6) is 4.8M, max 38.6M, 33.8M free. Jul 6 23:59:40.761066 systemd-modules-load[216]: Inserted module 'overlay' Jul 6 23:59:40.787015 systemd-modules-load[216]: Inserted module 'br_netfilter' Jul 6 23:59:40.796782 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:59:40.799495 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:59:40.799802 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:59:40.802376 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:59:40.805309 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 6 23:59:40.806037 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:59:40.807620 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
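The bridge messages above show systemd-modules-load inserting br_netfilter explicitly, after the kernel's note that bridged traffic is no longer passed to iptables by default. As a small illustrative helper (an assumption on my part, not something the initrd runs), whether that module is loaded on a running Linux host can be checked by scanning /proc/modules:

    # Minimal sketch: report whether a kernel module is currently loaded.
    # The first whitespace-separated field of each /proc/modules line is the module name.
    def module_loaded(name: str) -> bool:
        with open("/proc/modules") as f:
            return any(line.split()[0] == name for line in f)

    print("br_netfilter loaded:", module_loaded("br_netfilter"))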
Jul 6 23:59:40.809217 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:59:40.815116 dracut-cmdline[245]: dracut-dracut-053 Jul 6 23:59:40.817708 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:59:40.830057 systemd-resolved[248]: Positive Trust Anchors: Jul 6 23:59:40.830064 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:59:40.830086 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:59:40.832781 systemd-resolved[248]: Defaulting to hostname 'linux'. Jul 6 23:59:40.833336 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:59:40.833646 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:59:40.861245 kernel: SCSI subsystem initialized Jul 6 23:59:40.868253 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:59:40.875249 kernel: iscsi: registered transport (tcp) Jul 6 23:59:40.890244 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:59:40.890284 kernel: QLogic iSCSI HBA Driver Jul 6 23:59:40.910384 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:59:40.914341 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:59:40.929300 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 6 23:59:40.929348 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:59:40.930902 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:59:40.962250 kernel: raid6: avx2x4 gen() 51546 MB/s Jul 6 23:59:40.979256 kernel: raid6: avx2x2 gen() 52532 MB/s Jul 6 23:59:40.996450 kernel: raid6: avx2x1 gen() 44548 MB/s Jul 6 23:59:40.996501 kernel: raid6: using algorithm avx2x2 gen() 52532 MB/s Jul 6 23:59:41.014565 kernel: raid6: .... xor() 30738 MB/s, rmw enabled Jul 6 23:59:41.014613 kernel: raid6: using avx2x2 recovery algorithm Jul 6 23:59:41.028247 kernel: xor: automatically using best checksumming function avx Jul 6 23:59:41.127250 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:59:41.132238 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:59:41.136323 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:59:41.143706 systemd-udevd[432]: Using default interface naming scheme 'v255'. Jul 6 23:59:41.146141 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
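The raid6 lines above are the kernel benchmarking its SIMD gen() implementations and keeping the fastest one: here avx2x2 (52532 MB/s) edges out avx2x4 (51546 MB/s), hence "using algorithm avx2x2". The selection amounts to a max over the measured throughputs, roughly as in this sketch (numbers copied from the log; the real code lives in lib/raid6 in the kernel tree):

    # Throughputs (MB/s) as reported by the raid6 benchmark lines above.
    gen_results = {"avx2x4": 51546, "avx2x2": 52532, "avx2x1": 44548}

    best = max(gen_results, key=gen_results.get)
    print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
    # -> raid6: using algorithm avx2x2 gen() 52532 MB/s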
Jul 6 23:59:41.153379 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:59:41.160246 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation Jul 6 23:59:41.175398 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:59:41.180323 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:59:41.252155 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:59:41.255390 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:59:41.262150 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:59:41.262794 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:59:41.263486 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:59:41.263683 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:59:41.267341 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:59:41.276604 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:59:41.318242 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 6 23:59:41.321020 kernel: vmw_pvscsi: using 64bit dma Jul 6 23:59:41.321041 kernel: vmw_pvscsi: max_id: 16 Jul 6 23:59:41.321050 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 6 23:59:41.324308 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 6 23:59:41.324326 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 6 23:59:41.324334 kernel: vmw_pvscsi: using MSI-X Jul 6 23:59:41.326761 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 6 23:59:41.326862 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 6 23:59:41.328284 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 6 23:59:41.340245 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jul 6 23:59:41.348259 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 6 23:59:41.352257 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 6 23:59:41.360772 kernel: libata version 3.00 loaded. Jul 6 23:59:41.360817 kernel: cryptd: max_cpu_qlen set to 1000 Jul 6 23:59:41.361016 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:59:41.366510 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 6 23:59:41.366613 kernel: scsi host1: ata_piix Jul 6 23:59:41.366697 kernel: scsi host2: ata_piix Jul 6 23:59:41.366760 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 6 23:59:41.366769 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 6 23:59:41.366776 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 6 23:59:41.361092 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:59:41.361325 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:59:41.361421 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:59:41.361488 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:59:41.361589 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:59:41.378717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:59:41.389943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
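The rename of eth0 to ens192 above follows udev's predictable-interface-name policy: the vmxnet3 NIC at 0000:0b:00.0 sits behind root port 0000:00:16.0, which the pciehp lines earlier report as Slot #192, and the slot-based scheme composes en + s<slot>. A trivial sketch of that composition (illustrative only; the real logic is udev's net_id builtin, which weighs several naming sources):

    # Compose a slot-based predictable interface name (illustrative sketch).
    prefix = "en"   # Ethernet
    slot = 192      # PCIe hotplug slot of the NIC's root port, per the pciehp lines above
    print(f"{prefix}s{slot}")   # -> ens192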
Jul 6 23:59:41.394311 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:59:41.404679 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:59:41.532254 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 6 23:59:41.537253 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 6 23:59:41.548714 kernel: AVX2 version of gcm_enc/dec engaged. Jul 6 23:59:41.548788 kernel: AES CTR mode by8 optimization enabled Jul 6 23:59:41.552614 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 6 23:59:41.552793 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 6 23:59:41.552860 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 6 23:59:41.552923 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 6 23:59:41.553259 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 6 23:59:41.558248 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:59:41.559264 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 6 23:59:41.562252 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 6 23:59:41.562362 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 6 23:59:41.583249 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 6 23:59:41.610246 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (481) Jul 6 23:59:41.612158 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jul 6 23:59:41.617273 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (493) Jul 6 23:59:41.617678 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jul 6 23:59:41.620537 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 6 23:59:41.624754 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jul 6 23:59:41.625054 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jul 6 23:59:41.632317 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:59:41.657382 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:59:41.661771 kernel: GPT:disk_guids don't match. Jul 6 23:59:41.661802 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:59:41.661817 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:59:42.667300 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:59:42.668024 disk-uuid[589]: The operation has completed successfully. Jul 6 23:59:42.702611 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:59:42.702683 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:59:42.706333 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:59:42.708470 sh[609]: Success Jul 6 23:59:42.717252 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 6 23:59:42.766533 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:59:42.767399 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:59:42.767706 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 6 23:59:42.784297 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f Jul 6 23:59:42.784332 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:59:42.784342 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 6 23:59:42.786797 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 6 23:59:42.786809 kernel: BTRFS info (device dm-0): using free space tree Jul 6 23:59:42.792247 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 6 23:59:42.794430 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:59:42.802352 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jul 6 23:59:42.803968 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:59:42.820565 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:42.820601 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:59:42.820610 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:59:42.828250 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:59:42.841893 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 6 23:59:42.842288 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:42.845162 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:59:42.848435 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:59:42.868753 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 6 23:59:42.875363 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:59:42.941245 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:59:42.948394 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:59:42.961256 systemd-networkd[803]: lo: Link UP Jul 6 23:59:42.961261 systemd-networkd[803]: lo: Gained carrier Jul 6 23:59:42.962322 systemd-networkd[803]: Enumeration completed Jul 6 23:59:42.962889 systemd-networkd[803]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 6 23:59:42.963320 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:59:42.963477 systemd[1]: Reached target network.target - Network. 
Jul 6 23:59:42.965669 ignition[670]: Ignition 2.19.0 Jul 6 23:59:42.967076 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 6 23:59:42.967202 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 6 23:59:42.965902 ignition[670]: Stage: fetch-offline Jul 6 23:59:42.965935 ignition[670]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:42.965941 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:59:42.967437 systemd-networkd[803]: ens192: Link UP Jul 6 23:59:42.966001 ignition[670]: parsed url from cmdline: "" Jul 6 23:59:42.967440 systemd-networkd[803]: ens192: Gained carrier Jul 6 23:59:42.966003 ignition[670]: no config URL provided Jul 6 23:59:42.966006 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:59:42.966011 ignition[670]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:59:42.966495 ignition[670]: config successfully fetched Jul 6 23:59:42.966513 ignition[670]: parsing config with SHA512: 4fcd0ca6487191b22d4b14da8cc2493cd529e1f8678e3fc27a3835cf950b1ec126d2d2b0caea4103c15fa7fe3f0388e9c4a994327ae60ee5bf512c2ef227c5a6 Jul 6 23:59:42.970215 unknown[670]: fetched base config from "system" Jul 6 23:59:42.970224 unknown[670]: fetched user config from "vmware" Jul 6 23:59:42.970660 ignition[670]: fetch-offline: fetch-offline passed Jul 6 23:59:42.970703 ignition[670]: Ignition finished successfully Jul 6 23:59:42.971634 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:59:42.971861 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 6 23:59:42.976367 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:59:42.984416 ignition[807]: Ignition 2.19.0 Jul 6 23:59:42.984423 ignition[807]: Stage: kargs Jul 6 23:59:42.984523 ignition[807]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:42.984529 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:59:42.985055 ignition[807]: kargs: kargs passed Jul 6 23:59:42.985080 ignition[807]: Ignition finished successfully Jul 6 23:59:42.986477 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:59:42.990346 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:59:42.998680 ignition[814]: Ignition 2.19.0 Jul 6 23:59:42.998689 ignition[814]: Stage: disks Jul 6 23:59:42.998808 ignition[814]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:42.998815 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:59:42.999447 ignition[814]: disks: disks passed Jul 6 23:59:42.999491 ignition[814]: Ignition finished successfully Jul 6 23:59:43.000238 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:59:43.000453 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:59:43.000569 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:59:43.000763 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:59:43.000952 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:59:43.001127 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:59:43.007341 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 6 23:59:43.153155 systemd-fsck[822]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 6 23:59:43.157766 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:59:43.162346 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:59:43.315387 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none. Jul 6 23:59:43.315871 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:59:43.316365 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:59:43.326380 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:59:43.332759 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:59:43.333264 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 6 23:59:43.333296 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:59:43.333314 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:59:43.338221 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:59:43.338954 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:59:43.381256 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (830) Jul 6 23:59:43.394184 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:43.394222 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:59:43.394245 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:59:43.438315 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:59:43.445354 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:59:43.606769 initrd-setup-root[854]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:59:43.610963 initrd-setup-root[861]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:59:43.622122 initrd-setup-root[868]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:59:43.629025 initrd-setup-root[875]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:59:43.701303 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:59:43.706320 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:59:43.707785 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:59:43.713262 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:43.726138 ignition[942]: INFO : Ignition 2.19.0 Jul 6 23:59:43.726138 ignition[942]: INFO : Stage: mount Jul 6 23:59:43.726138 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:43.726138 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:59:43.726138 ignition[942]: INFO : mount: mount passed Jul 6 23:59:43.726138 ignition[942]: INFO : Ignition finished successfully Jul 6 23:59:43.727323 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:59:43.727551 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:59:43.732451 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:59:43.782444 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jul 6 23:59:43.787337 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:59:43.794262 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (956) Jul 6 23:59:43.796612 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:43.796629 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:59:43.796638 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:59:43.800243 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:59:43.800943 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:59:43.814023 ignition[973]: INFO : Ignition 2.19.0 Jul 6 23:59:43.814023 ignition[973]: INFO : Stage: files Jul 6 23:59:43.814513 ignition[973]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:43.814513 ignition[973]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:59:43.814738 ignition[973]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:59:43.815136 ignition[973]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:59:43.815136 ignition[973]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:59:43.817004 ignition[973]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:59:43.817133 ignition[973]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:59:43.817292 unknown[973]: wrote ssh authorized keys file for user: core Jul 6 23:59:43.817477 ignition[973]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:59:43.823044 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 6 23:59:43.823404 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 6 23:59:44.605615 systemd-networkd[803]: ens192: Gained IPv6LL Jul 6 23:59:48.857873 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:59:48.999177 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 6 23:59:48.999177 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:59:49.001475 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:59:49.001475 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:59:49.001475 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:59:49.001475 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:59:49.001475 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 6 23:59:49.987644 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 6 23:59:50.223615 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:59:50.223615 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 6 23:59:50.223615 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 6 23:59:50.261822 ignition[973]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:59:50.264585 ignition[973]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:59:50.264765 ignition[973]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 6 23:59:50.264765 ignition[973]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 6 
23:59:50.264765 ignition[973]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:59:50.265214 ignition[973]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:59:50.265214 ignition[973]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:59:50.265214 ignition[973]: INFO : files: files passed Jul 6 23:59:50.265214 ignition[973]: INFO : Ignition finished successfully Jul 6 23:59:50.265642 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:59:50.270349 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:59:50.271317 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:59:50.272488 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:59:50.273257 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:59:50.280288 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:59:50.280288 initrd-setup-root-after-ignition[1003]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:59:50.280740 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:59:50.281575 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:59:50.281944 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:59:50.285322 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:59:50.298699 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:59:50.298769 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:59:50.299066 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:59:50.299202 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:59:50.299411 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:59:50.299888 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:59:50.309386 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:59:50.314361 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:59:50.320974 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:59:50.321332 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:59:50.321713 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:59:50.322060 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:59:50.322284 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:59:50.322767 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:59:50.323122 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:59:50.323407 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:59:50.323726 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:59:50.324068 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jul 6 23:59:50.324564 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:59:50.324878 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:59:50.325220 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:59:50.325521 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:59:50.325846 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:59:50.326074 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:59:50.326145 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:59:50.326707 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:59:50.326878 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:59:50.327315 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:59:50.327367 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:59:50.327657 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:59:50.327727 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:59:50.328258 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:59:50.328331 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:59:50.328780 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:59:50.329032 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:59:50.331252 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:59:50.331416 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:59:50.331682 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:59:50.331858 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:59:50.331909 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:59:50.332066 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:59:50.332113 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:59:50.332289 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:59:50.332348 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:59:50.332597 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:59:50.332663 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:59:50.344343 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:59:50.345885 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:59:50.347431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:59:50.347659 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:59:50.347986 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:59:50.348190 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:59:50.352740 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jul 6 23:59:50.352891 ignition[1027]: INFO : Ignition 2.19.0 Jul 6 23:59:50.352891 ignition[1027]: INFO : Stage: umount Jul 6 23:59:50.353259 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:50.353259 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:59:50.353400 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:59:50.355723 ignition[1027]: INFO : umount: umount passed Jul 6 23:59:50.355723 ignition[1027]: INFO : Ignition finished successfully Jul 6 23:59:50.356718 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:59:50.356910 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:59:50.357339 systemd[1]: Stopped target network.target - Network. Jul 6 23:59:50.357547 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:59:50.357672 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:59:50.357889 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:59:50.357912 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:59:50.358272 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:59:50.358296 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:59:50.358507 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:59:50.358529 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:59:50.358810 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:59:50.359195 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:59:50.361963 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:59:50.362023 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:59:50.362361 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:59:50.362384 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:59:50.366331 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:59:50.366430 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:59:50.366458 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:59:50.366584 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jul 6 23:59:50.366606 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 6 23:59:50.366768 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:59:50.371602 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:59:50.371816 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:59:50.372497 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:59:50.372984 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:59:50.373457 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:59:50.373496 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:59:50.373860 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:59:50.373883 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:59:50.374171 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jul 6 23:59:50.374196 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:59:50.376656 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:59:50.376739 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:59:50.377013 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:59:50.377040 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:59:50.377291 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:59:50.377379 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:59:50.377548 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:59:50.377570 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:59:50.377836 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:59:50.377858 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:59:50.378137 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:59:50.378159 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:59:50.382382 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:59:50.382492 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:59:50.382520 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:59:50.382811 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:59:50.382835 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:59:50.382953 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:59:50.382975 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:59:50.383098 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:59:50.383128 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:59:50.383799 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:59:50.385605 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:59:50.386409 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:59:50.634561 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:59:50.634863 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:59:50.635306 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:59:50.635607 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:59:50.635806 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:59:50.640422 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:59:50.651999 systemd[1]: Switching root. 
Jul 6 23:59:50.686221 systemd-journald[215]: Journal stopped
6816.00 BogoMIPS (lpj=3408000) Jul 6 23:59:40.744993 kernel: Disabled fast string operations Jul 6 23:59:40.744999 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 6 23:59:40.745005 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 6 23:59:40.745011 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 6 23:59:40.745017 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 6 23:59:40.745022 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 6 23:59:40.745028 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 6 23:59:40.745034 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 6 23:59:40.745040 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 6 23:59:40.745045 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 6 23:59:40.745052 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 6 23:59:40.745058 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 6 23:59:40.745064 kernel: SRBDS: Unknown: Dependent on hypervisor status Jul 6 23:59:40.745069 kernel: GDS: Unknown: Dependent on hypervisor status Jul 6 23:59:40.745075 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 6 23:59:40.745080 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 6 23:59:40.745086 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 6 23:59:40.745092 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 6 23:59:40.745097 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 6 23:59:40.745104 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 6 23:59:40.745110 kernel: Freeing SMP alternatives memory: 32K Jul 6 23:59:40.745116 kernel: pid_max: default: 131072 minimum: 1024 Jul 6 23:59:40.745121 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 6 23:59:40.745127 kernel: landlock: Up and running. Jul 6 23:59:40.745133 kernel: SELinux: Initializing. Jul 6 23:59:40.745138 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 6 23:59:40.745144 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 6 23:59:40.745150 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 6 23:59:40.745157 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 6 23:59:40.745163 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 6 23:59:40.745168 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 6 23:59:40.745174 kernel: Performance Events: Skylake events, core PMU driver. 
Jul 6 23:59:40.745180 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jul 6 23:59:40.745185 kernel: core: CPUID marked event: 'instructions' unavailable Jul 6 23:59:40.745191 kernel: core: CPUID marked event: 'bus cycles' unavailable Jul 6 23:59:40.745196 kernel: core: CPUID marked event: 'cache references' unavailable Jul 6 23:59:40.745203 kernel: core: CPUID marked event: 'cache misses' unavailable Jul 6 23:59:40.745208 kernel: core: CPUID marked event: 'branch instructions' unavailable Jul 6 23:59:40.745214 kernel: core: CPUID marked event: 'branch misses' unavailable Jul 6 23:59:40.745219 kernel: ... version: 1 Jul 6 23:59:40.745225 kernel: ... bit width: 48 Jul 6 23:59:40.745244 kernel: ... generic registers: 4 Jul 6 23:59:40.745250 kernel: ... value mask: 0000ffffffffffff Jul 6 23:59:40.745256 kernel: ... max period: 000000007fffffff Jul 6 23:59:40.745262 kernel: ... fixed-purpose events: 0 Jul 6 23:59:40.745269 kernel: ... event mask: 000000000000000f Jul 6 23:59:40.745275 kernel: signal: max sigframe size: 1776 Jul 6 23:59:40.745281 kernel: rcu: Hierarchical SRCU implementation. Jul 6 23:59:40.745287 kernel: rcu: Max phase no-delay instances is 400. Jul 6 23:59:40.745292 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 6 23:59:40.745298 kernel: smp: Bringing up secondary CPUs ... Jul 6 23:59:40.745303 kernel: smpboot: x86: Booting SMP configuration: Jul 6 23:59:40.745309 kernel: .... node #0, CPUs: #1 Jul 6 23:59:40.745315 kernel: Disabled fast string operations Jul 6 23:59:40.745320 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jul 6 23:59:40.745327 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 6 23:59:40.745332 kernel: smp: Brought up 1 node, 2 CPUs Jul 6 23:59:40.745338 kernel: smpboot: Max logical packages: 128 Jul 6 23:59:40.745344 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 6 23:59:40.745350 kernel: devtmpfs: initialized Jul 6 23:59:40.745356 kernel: x86/mm: Memory block size: 128MB Jul 6 23:59:40.745361 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 6 23:59:40.745367 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 6 23:59:40.745373 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 6 23:59:40.745380 kernel: pinctrl core: initialized pinctrl subsystem Jul 6 23:59:40.745385 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 6 23:59:40.745391 kernel: audit: initializing netlink subsys (disabled) Jul 6 23:59:40.745397 kernel: audit: type=2000 audit(1751846379.086:1): state=initialized audit_enabled=0 res=1 Jul 6 23:59:40.745402 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 6 23:59:40.745408 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 6 23:59:40.745413 kernel: cpuidle: using governor menu Jul 6 23:59:40.745419 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 6 23:59:40.745425 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 6 23:59:40.745431 kernel: dca service started, version 1.12.1 Jul 6 23:59:40.745437 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jul 6 23:59:40.745443 kernel: PCI: Using configuration type 1 for base access Jul 6 23:59:40.745449 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 6 23:59:40.745455 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 6 23:59:40.745460 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 6 23:59:40.745472 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 6 23:59:40.745477 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 6 23:59:40.745483 kernel: ACPI: Added _OSI(Module Device) Jul 6 23:59:40.745490 kernel: ACPI: Added _OSI(Processor Device) Jul 6 23:59:40.745496 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 6 23:59:40.745501 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 6 23:59:40.745507 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 6 23:59:40.745513 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 6 23:59:40.745518 kernel: ACPI: Interpreter enabled Jul 6 23:59:40.745524 kernel: ACPI: PM: (supports S0 S1 S5) Jul 6 23:59:40.745529 kernel: ACPI: Using IOAPIC for interrupt routing Jul 6 23:59:40.745535 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 6 23:59:40.745542 kernel: PCI: Using E820 reservations for host bridge windows Jul 6 23:59:40.745548 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 6 23:59:40.745553 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 6 23:59:40.745634 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:59:40.745693 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 6 23:59:40.745747 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 6 23:59:40.745776 kernel: PCI host bridge to bus 0000:00 Jul 6 23:59:40.745847 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 6 23:59:40.745895 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jul 6 23:59:40.745939 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 6 23:59:40.745984 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 6 23:59:40.746028 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 6 23:59:40.746072 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 6 23:59:40.746129 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jul 6 23:59:40.746188 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jul 6 23:59:40.746254 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jul 6 23:59:40.746309 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jul 6 23:59:40.746359 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jul 6 23:59:40.746408 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 6 23:59:40.746458 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 6 23:59:40.746511 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 6 23:59:40.746561 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 6 23:59:40.746615 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jul 6 23:59:40.746665 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 6 23:59:40.746718 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 6 23:59:40.746773 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jul 6 23:59:40.746826 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jul 6 23:59:40.746876 kernel: pci 0000:00:07.7: reg 0x14: 
[mem 0xfebfe000-0xfebfffff 64bit] Jul 6 23:59:40.746929 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jul 6 23:59:40.746979 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jul 6 23:59:40.747041 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jul 6 23:59:40.747094 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jul 6 23:59:40.747142 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jul 6 23:59:40.747194 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 6 23:59:40.747263 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jul 6 23:59:40.747319 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.747369 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.747423 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.747474 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.747527 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.747581 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.747635 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.747685 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.747741 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.747828 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.747884 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.747938 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.747991 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.748041 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.748094 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.748144 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.748198 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.748593 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.748657 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.748710 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.748774 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.748827 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.748886 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.748936 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.748990 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749040 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749096 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749181 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749251 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749303 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749356 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749407 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749461 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749511 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749564 kernel: pci 0000:00:17.1: [15ad:07a0] 
type 01 class 0x060400 Jul 6 23:59:40.749618 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749671 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749722 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749776 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749826 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749881 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.749935 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.749988 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750040 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750096 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750147 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750201 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750270 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750327 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750378 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750432 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750483 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750537 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750591 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750645 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750696 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750751 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750802 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750855 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.750906 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.750964 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.751015 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.751070 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jul 6 23:59:40.751121 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.751173 kernel: pci_bus 0000:01: extended config space not accessible Jul 6 23:59:40.751226 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 6 23:59:40.751306 kernel: pci_bus 0000:02: extended config space not accessible Jul 6 23:59:40.751315 kernel: acpiphp: Slot [32] registered Jul 6 23:59:40.751321 kernel: acpiphp: Slot [33] registered Jul 6 23:59:40.751327 kernel: acpiphp: Slot [34] registered Jul 6 23:59:40.751332 kernel: acpiphp: Slot [35] registered Jul 6 23:59:40.751338 kernel: acpiphp: Slot [36] registered Jul 6 23:59:40.751344 kernel: acpiphp: Slot [37] registered Jul 6 23:59:40.751349 kernel: acpiphp: Slot [38] registered Jul 6 23:59:40.751355 kernel: acpiphp: Slot [39] registered Jul 6 23:59:40.751363 kernel: acpiphp: Slot [40] registered Jul 6 23:59:40.751368 kernel: acpiphp: Slot [41] registered Jul 6 23:59:40.751374 kernel: acpiphp: Slot [42] registered Jul 6 23:59:40.751380 kernel: acpiphp: Slot [43] registered Jul 6 23:59:40.751385 kernel: acpiphp: Slot [44] registered Jul 6 23:59:40.751391 kernel: acpiphp: Slot [45] registered Jul 6 23:59:40.751397 kernel: 
acpiphp: Slot [46] registered Jul 6 23:59:40.751402 kernel: acpiphp: Slot [47] registered Jul 6 23:59:40.751408 kernel: acpiphp: Slot [48] registered Jul 6 23:59:40.751414 kernel: acpiphp: Slot [49] registered Jul 6 23:59:40.751420 kernel: acpiphp: Slot [50] registered Jul 6 23:59:40.751426 kernel: acpiphp: Slot [51] registered Jul 6 23:59:40.751431 kernel: acpiphp: Slot [52] registered Jul 6 23:59:40.751437 kernel: acpiphp: Slot [53] registered Jul 6 23:59:40.751443 kernel: acpiphp: Slot [54] registered Jul 6 23:59:40.751448 kernel: acpiphp: Slot [55] registered Jul 6 23:59:40.751454 kernel: acpiphp: Slot [56] registered Jul 6 23:59:40.751459 kernel: acpiphp: Slot [57] registered Jul 6 23:59:40.751465 kernel: acpiphp: Slot [58] registered Jul 6 23:59:40.751472 kernel: acpiphp: Slot [59] registered Jul 6 23:59:40.751477 kernel: acpiphp: Slot [60] registered Jul 6 23:59:40.751483 kernel: acpiphp: Slot [61] registered Jul 6 23:59:40.751489 kernel: acpiphp: Slot [62] registered Jul 6 23:59:40.751494 kernel: acpiphp: Slot [63] registered Jul 6 23:59:40.751544 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 6 23:59:40.751596 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 6 23:59:40.753242 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 6 23:59:40.753311 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 6 23:59:40.753366 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 6 23:59:40.753418 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 6 23:59:40.753467 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 6 23:59:40.753517 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 6 23:59:40.753567 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 6 23:59:40.753624 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 6 23:59:40.753680 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 6 23:59:40.753732 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 6 23:59:40.753784 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 6 23:59:40.753835 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 6 23:59:40.753886 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 6 23:59:40.753938 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 6 23:59:40.753989 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 6 23:59:40.754039 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 6 23:59:40.754093 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 6 23:59:40.754144 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 6 23:59:40.754195 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 6 23:59:40.754263 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 6 23:59:40.754317 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 6 23:59:40.754367 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 6 23:59:40.754417 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 6 23:59:40.754471 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 6 23:59:40.754522 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 6 23:59:40.754572 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 6 23:59:40.754622 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 6 23:59:40.754673 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 6 23:59:40.754723 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 6 23:59:40.754778 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 6 23:59:40.754831 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 6 23:59:40.754881 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 6 23:59:40.754931 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 6 23:59:40.754982 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 6 23:59:40.755033 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 6 23:59:40.755084 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 6 23:59:40.755137 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 6 23:59:40.755188 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 6 23:59:40.755251 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 6 23:59:40.755310 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 6 23:59:40.755364 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 6 23:59:40.755415 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 6 23:59:40.755466 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 6 23:59:40.755520 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 6 23:59:40.755572 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 6 23:59:40.755623 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 6 23:59:40.755674 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 6 23:59:40.755725 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 6 23:59:40.755815 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 6 23:59:40.755865 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 6 23:59:40.755915 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 6 23:59:40.755969 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 6 23:59:40.756019 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 6 23:59:40.756068 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 6 23:59:40.756118 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 6 23:59:40.756169 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 6 23:59:40.756220 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 6 23:59:40.756907 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 6 23:59:40.756965 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 6 23:59:40.757023 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 6 23:59:40.757074 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 6 23:59:40.757125 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 6 23:59:40.757177 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 6 23:59:40.757227 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 6 23:59:40.757294 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 6 23:59:40.757345 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 6 23:59:40.757400 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 6 23:59:40.757451 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 6 23:59:40.757502 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 6 23:59:40.757553 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 6 23:59:40.757603 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 6 23:59:40.757654 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 6 23:59:40.757704 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 6 23:59:40.757754 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 6 23:59:40.757808 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 6 23:59:40.757858 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 6 23:59:40.757908 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 6 23:59:40.757957 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 6 23:59:40.758009 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 6 23:59:40.758060 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 6 23:59:40.758110 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 6 23:59:40.758160 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 6 23:59:40.758214 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 6 23:59:40.758280 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 6 23:59:40.758330 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 6 23:59:40.758380 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 6 23:59:40.758431 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 6 23:59:40.758481 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 6 23:59:40.758531 kernel: pci 0000:00:17.3: bridge window [mem 
0xe6e00000-0xe6efffff 64bit pref] Jul 6 23:59:40.758585 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 6 23:59:40.758635 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 6 23:59:40.758685 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 6 23:59:40.758736 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 6 23:59:40.758791 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 6 23:59:40.758841 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 6 23:59:40.758892 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 6 23:59:40.758942 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 6 23:59:40.758992 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 6 23:59:40.759046 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 6 23:59:40.759097 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 6 23:59:40.759147 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 6 23:59:40.759198 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 6 23:59:40.759373 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 6 23:59:40.759426 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 6 23:59:40.759477 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 6 23:59:40.759531 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 6 23:59:40.759581 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 6 23:59:40.759632 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 6 23:59:40.759724 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 6 23:59:40.759783 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 6 23:59:40.759833 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 6 23:59:40.759883 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 6 23:59:40.759934 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 6 23:59:40.759987 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 6 23:59:40.760037 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 6 23:59:40.760120 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 6 23:59:40.760174 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 6 23:59:40.760224 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 6 23:59:40.760285 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 6 23:59:40.760336 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 6 23:59:40.760387 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 6 23:59:40.760441 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 6 23:59:40.760491 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 6 23:59:40.760541 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 6 23:59:40.760593 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 6 23:59:40.760643 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 6 23:59:40.760693 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 6 23:59:40.760702 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 6 23:59:40.760708 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 6 23:59:40.760716 kernel: ACPI: PCI: Interrupt 
link LNKB disabled Jul 6 23:59:40.760722 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 6 23:59:40.760727 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 6 23:59:40.760733 kernel: iommu: Default domain type: Translated Jul 6 23:59:40.760739 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 6 23:59:40.760745 kernel: PCI: Using ACPI for IRQ routing Jul 6 23:59:40.760751 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 6 23:59:40.760757 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 6 23:59:40.760762 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 6 23:59:40.760811 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 6 23:59:40.760864 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 6 23:59:40.760975 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 6 23:59:40.760985 kernel: vgaarb: loaded Jul 6 23:59:40.760992 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 6 23:59:40.760998 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 6 23:59:40.761003 kernel: clocksource: Switched to clocksource tsc-early Jul 6 23:59:40.761009 kernel: VFS: Disk quotas dquot_6.6.0 Jul 6 23:59:40.761015 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 6 23:59:40.761023 kernel: pnp: PnP ACPI init Jul 6 23:59:40.761086 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 6 23:59:40.761136 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 6 23:59:40.761182 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 6 23:59:40.761258 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 6 23:59:40.761315 kernel: pnp 00:06: [dma 2] Jul 6 23:59:40.763319 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 6 23:59:40.765304 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 6 23:59:40.765357 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 6 23:59:40.765366 kernel: pnp: PnP ACPI: found 8 devices Jul 6 23:59:40.765373 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 6 23:59:40.765379 kernel: NET: Registered PF_INET protocol family Jul 6 23:59:40.765385 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 6 23:59:40.765391 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 6 23:59:40.765397 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 6 23:59:40.765405 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 6 23:59:40.765411 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 6 23:59:40.765417 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 6 23:59:40.765423 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 6 23:59:40.765428 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 6 23:59:40.765434 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 6 23:59:40.765440 kernel: NET: Registered PF_XDP protocol family Jul 6 23:59:40.765493 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 6 23:59:40.765550 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 6 23:59:40.765602 kernel: pci 
0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 6 23:59:40.765655 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 6 23:59:40.765706 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 6 23:59:40.765771 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 6 23:59:40.765864 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 6 23:59:40.765918 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 6 23:59:40.765969 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 6 23:59:40.766020 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 6 23:59:40.766070 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 6 23:59:40.766120 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 6 23:59:40.766171 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 6 23:59:40.766224 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 6 23:59:40.766284 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 6 23:59:40.766335 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 6 23:59:40.766387 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 6 23:59:40.766438 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 6 23:59:40.766490 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 6 23:59:40.766544 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 6 23:59:40.766595 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 6 23:59:40.766646 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 6 23:59:40.766697 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 6 23:59:40.766748 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 6 23:59:40.766799 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 6 23:59:40.766853 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.766903 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.766954 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.767004 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.767055 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.767105 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.767155 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.767206 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.769691 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.769749 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.769803 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.769855 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.769906 kernel: pci 
0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.769957 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770007 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770057 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770112 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770162 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770214 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770280 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770333 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770384 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770434 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770485 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770539 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770590 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770641 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770692 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770742 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770793 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770843 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770893 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.770946 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.770996 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.771046 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.771096 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.771146 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.771196 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.771534 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.771592 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.771649 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.771701 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.771755 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.771807 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.771858 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.771909 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.771960 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.772011 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.772076 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.772154 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.772284 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.772348 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.772424 kernel: pci 0000:00:18.2: BAR 13: no space for 
[io size 0x1000] Jul 6 23:59:40.772482 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.772545 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.772605 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.772659 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.772718 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.772788 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.772854 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.772908 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.772969 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773030 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773090 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773143 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773203 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773291 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773344 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773397 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773448 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773498 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773548 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773599 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773651 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773701 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773752 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773803 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773854 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.773907 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.773957 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.774009 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.774059 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.774110 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 6 23:59:40.774160 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:59:40.774213 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 6 23:59:40.774314 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 6 23:59:40.774366 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 6 23:59:40.774419 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 6 23:59:40.774469 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 6 23:59:40.774522 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 6 23:59:40.774573 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 6 23:59:40.774626 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 6 23:59:40.774686 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 6 23:59:40.774751 kernel: pci 0000:00:15.0: bridge 
window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 6 23:59:40.774815 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 6 23:59:40.774872 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 6 23:59:40.774933 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 6 23:59:40.774995 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 6 23:59:40.775056 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 6 23:59:40.775110 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 6 23:59:40.775171 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 6 23:59:40.775245 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 6 23:59:40.775308 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 6 23:59:40.775371 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 6 23:59:40.775434 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 6 23:59:40.775485 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 6 23:59:40.775536 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 6 23:59:40.775586 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 6 23:59:40.775640 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 6 23:59:40.775691 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 6 23:59:40.775749 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 6 23:59:40.775804 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 6 23:59:40.775855 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 6 23:59:40.775905 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 6 23:59:40.775955 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 6 23:59:40.776005 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 6 23:59:40.776056 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 6 23:59:40.776110 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 6 23:59:40.776161 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 6 23:59:40.776212 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 6 23:59:40.776312 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 6 23:59:40.776364 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 6 23:59:40.776417 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 6 23:59:40.776468 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 6 23:59:40.776518 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 6 23:59:40.776569 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 6 23:59:40.776619 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 6 23:59:40.776669 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 6 23:59:40.776719 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 6 23:59:40.776772 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 6 23:59:40.776822 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 6 23:59:40.776873 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 6 23:59:40.776925 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 6 23:59:40.776976 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 6 23:59:40.777026 kernel: pci 0000:00:16.4: 
bridge window [mem 0xfc400000-0xfc4fffff] Jul 6 23:59:40.777077 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 6 23:59:40.777127 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 6 23:59:40.777177 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 6 23:59:40.777237 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 6 23:59:40.777301 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 6 23:59:40.777353 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 6 23:59:40.777403 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 6 23:59:40.777453 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 6 23:59:40.777503 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 6 23:59:40.777554 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 6 23:59:40.777605 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 6 23:59:40.777656 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 6 23:59:40.777707 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 6 23:59:40.777760 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 6 23:59:40.777856 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 6 23:59:40.777981 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 6 23:59:40.778034 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 6 23:59:40.778085 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 6 23:59:40.778137 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 6 23:59:40.778187 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 6 23:59:40.780256 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 6 23:59:40.780320 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 6 23:59:40.780379 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 6 23:59:40.780432 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 6 23:59:40.780483 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 6 23:59:40.780533 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 6 23:59:40.780583 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 6 23:59:40.780634 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 6 23:59:40.780685 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 6 23:59:40.780735 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 6 23:59:40.780786 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 6 23:59:40.780837 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 6 23:59:40.780890 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 6 23:59:40.780941 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 6 23:59:40.780993 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 6 23:59:40.781044 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 6 23:59:40.781094 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 6 23:59:40.781146 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 6 23:59:40.781196 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 6 23:59:40.781265 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 6 23:59:40.781318 kernel: pci 
0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 6 23:59:40.781373 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 6 23:59:40.781424 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 6 23:59:40.781476 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 6 23:59:40.781527 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 6 23:59:40.781578 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 6 23:59:40.781711 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 6 23:59:40.781961 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 6 23:59:40.782017 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 6 23:59:40.782092 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 6 23:59:40.782146 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 6 23:59:40.782200 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 6 23:59:40.782259 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 6 23:59:40.782311 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 6 23:59:40.782361 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 6 23:59:40.782413 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 6 23:59:40.782463 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 6 23:59:40.782514 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 6 23:59:40.782566 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 6 23:59:40.782616 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 6 23:59:40.782670 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 6 23:59:40.782721 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 6 23:59:40.782772 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 6 23:59:40.782822 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 6 23:59:40.782869 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 6 23:59:40.782915 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 6 23:59:40.782959 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 6 23:59:40.783003 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 6 23:59:40.783053 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 6 23:59:40.783103 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 6 23:59:40.783150 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 6 23:59:40.783197 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 6 23:59:40.783251 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 6 23:59:40.783299 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 6 23:59:40.783345 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 6 23:59:40.783391 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 6 23:59:40.783446 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 6 23:59:40.783493 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 6 23:59:40.783539 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 6 23:59:40.784000 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 6 23:59:40.784054 kernel: pci_bus 0000:04: resource 1 [mem 
0xfd100000-0xfd1fffff] Jul 6 23:59:40.784102 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 6 23:59:40.784154 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 6 23:59:40.784204 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 6 23:59:40.784391 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 6 23:59:40.784445 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 6 23:59:40.784492 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 6 23:59:40.784544 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 6 23:59:40.784591 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 6 23:59:40.784643 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 6 23:59:40.784734 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 6 23:59:40.784818 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 6 23:59:40.784885 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 6 23:59:40.784937 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 6 23:59:40.784984 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 6 23:59:40.785048 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 6 23:59:40.785095 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 6 23:59:40.785142 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 6 23:59:40.785191 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 6 23:59:40.785332 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 6 23:59:40.785384 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 6 23:59:40.785439 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 6 23:59:40.785487 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 6 23:59:40.785536 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 6 23:59:40.785586 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 6 23:59:40.785632 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 6 23:59:40.785684 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 6 23:59:40.785731 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 6 23:59:40.785789 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 6 23:59:40.785836 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 6 23:59:40.785885 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 6 23:59:40.785932 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 6 23:59:40.785981 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 6 23:59:40.786028 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 6 23:59:40.786080 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 6 23:59:40.786127 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 6 23:59:40.786173 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 6 23:59:40.786223 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 6 23:59:40.786291 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 6 23:59:40.786338 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 6 23:59:40.786388 kernel: pci_bus 
0000:15: resource 0 [io 0xe000-0xefff] Jul 6 23:59:40.786438 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 6 23:59:40.786484 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 6 23:59:40.786534 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 6 23:59:40.786580 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 6 23:59:40.786631 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 6 23:59:40.786678 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 6 23:59:40.786733 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 6 23:59:40.786780 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 6 23:59:40.786831 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 6 23:59:40.786879 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 6 23:59:40.786931 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 6 23:59:40.787070 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 6 23:59:40.787127 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 6 23:59:40.787175 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 6 23:59:40.787225 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 6 23:59:40.787305 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 6 23:59:40.787352 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 6 23:59:40.787399 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 6 23:59:40.787453 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 6 23:59:40.787500 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 6 23:59:40.787550 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 6 23:59:40.787597 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 6 23:59:40.787648 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 6 23:59:40.787696 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 6 23:59:40.787749 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 6 23:59:40.787800 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 6 23:59:40.787850 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 6 23:59:40.787899 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 6 23:59:40.787949 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 6 23:59:40.787997 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 6 23:59:40.788052 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 6 23:59:40.788064 kernel: PCI: CLS 32 bytes, default 64 Jul 6 23:59:40.788071 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 6 23:59:40.788078 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 6 23:59:40.788085 kernel: clocksource: Switched to clocksource tsc Jul 6 23:59:40.788091 kernel: Initialise system trusted keyrings Jul 6 23:59:40.788097 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 6 23:59:40.788103 kernel: Key type asymmetric registered Jul 6 23:59:40.788109 kernel: Asymmetric key parser 'x509' registered Jul 6 23:59:40.788116 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) Jul 6 23:59:40.788123 kernel: io scheduler mq-deadline registered Jul 6 23:59:40.788128 kernel: io scheduler kyber registered Jul 6 23:59:40.788135 kernel: io scheduler bfq registered Jul 6 23:59:40.788188 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 6 23:59:40.788271 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.788325 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 6 23:59:40.788377 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.788428 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 6 23:59:40.788483 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.788534 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 6 23:59:40.788585 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.788637 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 6 23:59:40.788687 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.788738 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 6 23:59:40.788797 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.788849 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 6 23:59:40.788900 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.788952 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 6 23:59:40.789003 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789057 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 6 23:59:40.789108 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789160 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 6 23:59:40.789211 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789275 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 6 23:59:40.789327 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789379 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 6 23:59:40.789435 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789487 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 6 23:59:40.789539 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789590 kernel: pcieport 0000:00:16.5: PME: Signaling 
with IRQ 37 Jul 6 23:59:40.789642 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789696 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 6 23:59:40.789748 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789799 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 6 23:59:40.789851 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.789903 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 6 23:59:40.789954 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.790009 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 6 23:59:40.790060 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.790111 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 6 23:59:40.790162 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.790213 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 6 23:59:40.791810 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.791876 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 6 23:59:40.791931 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.791985 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 6 23:59:40.792037 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792090 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 6 23:59:40.792142 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792197 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 6 23:59:40.792300 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792354 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 6 23:59:40.792406 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792457 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 6 23:59:40.792509 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792564 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 6 23:59:40.792615 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792667 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 6 23:59:40.792718 
kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792769 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 6 23:59:40.792824 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792875 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 6 23:59:40.792926 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.792977 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 6 23:59:40.793028 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.793079 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 6 23:59:40.793134 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:59:40.793143 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 6 23:59:40.793150 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:59:40.793156 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 6 23:59:40.793162 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 6 23:59:40.793168 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 6 23:59:40.793175 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 6 23:59:40.793238 kernel: rtc_cmos 00:01: registered as rtc0 Jul 6 23:59:40.793295 kernel: rtc_cmos 00:01: setting system clock to 2025-07-06T23:59:40 UTC (1751846380) Jul 6 23:59:40.793346 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 6 23:59:40.793355 kernel: intel_pstate: CPU model not supported Jul 6 23:59:40.793362 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 6 23:59:40.793368 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:59:40.793374 kernel: Segment Routing with IPv6 Jul 6 23:59:40.793380 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:59:40.793389 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:59:40.793395 kernel: Key type dns_resolver registered Jul 6 23:59:40.793401 kernel: IPI shorthand broadcast: enabled Jul 6 23:59:40.793408 kernel: sched_clock: Marking stable (895003788, 220701472)->(1175741609, -60036349) Jul 6 23:59:40.793414 kernel: registered taskstats version 1 Jul 6 23:59:40.793420 kernel: Loading compiled-in X.509 certificates Jul 6 23:59:40.793426 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 6 23:59:40.793432 kernel: Key type .fscrypt registered Jul 6 23:59:40.793438 kernel: Key type fscrypt-provisioning registered Jul 6 23:59:40.793446 kernel: ima: No TPM chip found, activating TPM-bypass! 
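The rtc_cmos entry above records the hardware clock both as a UTC timestamp and as a Unix epoch value (1751846380). As a quick sanity check that the two agree, a small Python snippet (illustrative only; the kernel does this conversion itself):

```python
# Sketch: confirm that the epoch value logged by rtc_cmos (1751846380)
# corresponds to the UTC timestamp printed beside it in the log.
from datetime import datetime, timezone

epoch = 1751846380
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# expected: 2025-07-06T23:59:40+00:00
```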
Jul 6 23:59:40.793452 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:59:40.793458 kernel: ima: No architecture policies found Jul 6 23:59:40.793464 kernel: clk: Disabling unused clocks Jul 6 23:59:40.793470 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 6 23:59:40.793476 kernel: Write protecting the kernel read-only data: 36864k Jul 6 23:59:40.793482 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 6 23:59:40.793489 kernel: Run /init as init process Jul 6 23:59:40.793495 kernel: with arguments: Jul 6 23:59:40.793503 kernel: /init Jul 6 23:59:40.793509 kernel: with environment: Jul 6 23:59:40.793515 kernel: HOME=/ Jul 6 23:59:40.793521 kernel: TERM=linux Jul 6 23:59:40.793527 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:59:40.793534 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:59:40.793542 systemd[1]: Detected virtualization vmware. Jul 6 23:59:40.793548 systemd[1]: Detected architecture x86-64. Jul 6 23:59:40.793556 systemd[1]: Running in initrd. Jul 6 23:59:40.793562 systemd[1]: No hostname configured, using default hostname. Jul 6 23:59:40.793568 systemd[1]: Hostname set to . Jul 6 23:59:40.793574 systemd[1]: Initializing machine ID from random generator. Jul 6 23:59:40.793580 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:59:40.793587 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:59:40.793593 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:59:40.793599 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:59:40.793607 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:59:40.793613 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:59:40.793619 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:59:40.793627 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:59:40.793633 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:59:40.793640 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:59:40.793646 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:59:40.793653 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:59:40.793660 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:59:40.793666 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:59:40.793673 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:59:40.793679 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:59:40.793685 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:59:40.793691 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:59:40.793698 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
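The "Expecting device ..." entries above show systemd's escaped unit names for the block devices named on the kernel command line, e.g. dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device for /dev/disk/by-label/EFI-SYSTEM. A simplified Python sketch of that path-to-unit-name escaping, covering only the characters that occur in this log (the authoritative rules are implemented by systemd-escape):

```python
# Simplified sketch (assumption: only the characters seen in this log) of how a
# block-device path becomes a systemd .device unit name, e.g.
# /dev/disk/by-label/EFI-SYSTEM -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
def device_unit_name(path: str) -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")                  # path separators become dashes
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)                   # kept verbatim
        else:
            out.append("\\x%02x" % ord(ch))  # everything else is hex-escaped
    return "".join(out) + ".device"

print(device_unit_name("/dev/disk/by-label/EFI-SYSTEM"))
print(device_unit_name("/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"))
```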
Jul 6 23:59:40.793706 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:59:40.793712 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:59:40.793718 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:59:40.793725 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:59:40.793731 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:59:40.793737 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:59:40.793745 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:59:40.793751 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:59:40.793757 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:59:40.793765 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:59:40.793771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:59:40.793789 systemd-journald[215]: Collecting audit messages is disabled. Jul 6 23:59:40.793805 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:59:40.793814 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:59:40.793820 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:59:40.793827 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:59:40.793833 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:59:40.793841 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:59:40.793848 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:59:40.793854 kernel: Bridge firewalling registered Jul 6 23:59:40.793860 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:59:40.793867 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:59:40.793873 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:59:40.793879 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:59:40.793886 systemd-journald[215]: Journal started Jul 6 23:59:40.793901 systemd-journald[215]: Runtime Journal (/run/log/journal/81d115fa061e48cc8370e396b0420ea6) is 4.8M, max 38.6M, 33.8M free. Jul 6 23:59:40.761066 systemd-modules-load[216]: Inserted module 'overlay' Jul 6 23:59:40.787015 systemd-modules-load[216]: Inserted module 'br_netfilter' Jul 6 23:59:40.796782 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:59:40.799495 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:59:40.799802 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:59:40.802376 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:59:40.805309 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 6 23:59:40.806037 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:59:40.807620 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
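Every entry quoted in this dump has the same shape: timestamp, source (optionally with a PID in brackets), then the message, as in "systemd-journald[215]: Journal started" above. A small regex sketch for splitting one such entry, written against this rendered format rather than journald's native export formats:

```python
# Sketch: split one rendered journal entry into timestamp, source, optional PID
# and message.  Targets only the "Jul 6 23:59:40.793886 source[pid]: msg"
# layout used in this dump.
import re

ENTRY = re.compile(
    r"^(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<src>[\w./-]+?)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<msg>.*)$"
)

line = "Jul 6 23:59:40.793886 systemd-journald[215]: Journal started"
m = ENTRY.match(line)
print(m.group("ts"), m.group("src"), m.group("pid"), m.group("msg"), sep=" | ")
```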
Jul 6 23:59:40.809217 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:59:40.815116 dracut-cmdline[245]: dracut-dracut-053 Jul 6 23:59:40.817708 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:59:40.830057 systemd-resolved[248]: Positive Trust Anchors: Jul 6 23:59:40.830064 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:59:40.830086 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:59:40.832781 systemd-resolved[248]: Defaulting to hostname 'linux'. Jul 6 23:59:40.833336 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:59:40.833646 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:59:40.861245 kernel: SCSI subsystem initialized Jul 6 23:59:40.868253 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:59:40.875249 kernel: iscsi: registered transport (tcp) Jul 6 23:59:40.890244 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:59:40.890284 kernel: QLogic iSCSI HBA Driver Jul 6 23:59:40.910384 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:59:40.914341 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:59:40.929300 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 6 23:59:40.929348 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:59:40.930902 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:59:40.962250 kernel: raid6: avx2x4 gen() 51546 MB/s Jul 6 23:59:40.979256 kernel: raid6: avx2x2 gen() 52532 MB/s Jul 6 23:59:40.996450 kernel: raid6: avx2x1 gen() 44548 MB/s Jul 6 23:59:40.996501 kernel: raid6: using algorithm avx2x2 gen() 52532 MB/s Jul 6 23:59:41.014565 kernel: raid6: .... xor() 30738 MB/s, rmw enabled Jul 6 23:59:41.014613 kernel: raid6: using avx2x2 recovery algorithm Jul 6 23:59:41.028247 kernel: xor: automatically using best checksumming function avx Jul 6 23:59:41.127250 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:59:41.132238 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:59:41.136323 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:59:41.143706 systemd-udevd[432]: Using default interface naming scheme 'v255'. Jul 6 23:59:41.146141 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
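dracut-cmdline echoes the full kernel command line it is working from. A minimal sketch of splitting such a command line into bare switches and key=value options, e.g. to pull out the repeated console= entries or verity.usrhash; it does no quote handling, which this particular command line does not need:

```python
# Sketch: split a kernel command line like the one echoed by dracut-cmdline
# above into switches and key=value options (no quoting support).
def parse_cmdline(cmdline: str):
    opts, flags = {}, []
    for tok in cmdline.split():
        if "=" in tok:
            key, value = tok.split("=", 1)       # root=LABEL=ROOT keeps "LABEL=ROOT"
            opts.setdefault(key, []).append(value)  # keys like console= repeat
        else:
            flags.append(tok)
    return opts, flags

opts, flags = parse_cmdline(
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr rootflags=rw "
    "root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.autologin"
)
print(opts["console"])   # ['ttyS0,115200n8', 'tty0']
print(flags)             # ['flatcar.autologin']
```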
Jul 6 23:59:41.153379 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:59:41.160246 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation Jul 6 23:59:41.175398 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:59:41.180323 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:59:41.252155 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:59:41.255390 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:59:41.262150 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:59:41.262794 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:59:41.263486 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:59:41.263683 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:59:41.267341 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:59:41.276604 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:59:41.318242 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 6 23:59:41.321020 kernel: vmw_pvscsi: using 64bit dma Jul 6 23:59:41.321041 kernel: vmw_pvscsi: max_id: 16 Jul 6 23:59:41.321050 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 6 23:59:41.324308 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 6 23:59:41.324326 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 6 23:59:41.324334 kernel: vmw_pvscsi: using MSI-X Jul 6 23:59:41.326761 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 6 23:59:41.326862 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 6 23:59:41.328284 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 6 23:59:41.340245 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jul 6 23:59:41.348259 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 6 23:59:41.352257 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 6 23:59:41.360772 kernel: libata version 3.00 loaded. Jul 6 23:59:41.360817 kernel: cryptd: max_cpu_qlen set to 1000 Jul 6 23:59:41.361016 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:59:41.366510 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 6 23:59:41.366613 kernel: scsi host1: ata_piix Jul 6 23:59:41.366697 kernel: scsi host2: ata_piix Jul 6 23:59:41.366760 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 6 23:59:41.366769 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 6 23:59:41.366776 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 6 23:59:41.361092 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:59:41.361325 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:59:41.361421 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:59:41.361488 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:59:41.361589 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:59:41.378717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:59:41.389943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
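The storage and network adapters above are addressed by PCI domain:bus:device.function strings (0000:03:00.0 for the PVSCSI HBA, 0000:0b:00.0 for the vmxnet3 NIC). A tiny parser sketch for that notation:

```python
# Sketch: decode a PCI address of the form domain:bus:device.function, as used
# in the vmw_pvscsi (0000:03:00.0) and vmxnet3 (0000:0b:00.0) lines above.
def parse_bdf(addr: str):
    domain, bus, devfn = addr.split(":")
    device, function = devfn.split(".")
    return {
        "domain": int(domain, 16),
        "bus": int(bus, 16),
        "device": int(device, 16),
        "function": int(function, 16),
    }

print(parse_bdf("0000:0b:00.0"))  # bus 0x0b is where the vmxnet3 NIC sits
```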
Jul 6 23:59:41.394311 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:59:41.404679 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:59:41.532254 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 6 23:59:41.537253 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 6 23:59:41.548714 kernel: AVX2 version of gcm_enc/dec engaged. Jul 6 23:59:41.548788 kernel: AES CTR mode by8 optimization enabled Jul 6 23:59:41.552614 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 6 23:59:41.552793 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 6 23:59:41.552860 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 6 23:59:41.552923 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 6 23:59:41.553259 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 6 23:59:41.558248 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:59:41.559264 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 6 23:59:41.562252 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 6 23:59:41.562362 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 6 23:59:41.583249 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 6 23:59:41.610246 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (481) Jul 6 23:59:41.612158 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jul 6 23:59:41.617273 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (493) Jul 6 23:59:41.617678 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jul 6 23:59:41.620537 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 6 23:59:41.624754 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jul 6 23:59:41.625054 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jul 6 23:59:41.632317 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:59:41.657382 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:59:41.661771 kernel: GPT:disk_guids don't match. Jul 6 23:59:41.661802 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:59:41.661817 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:59:42.667300 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:59:42.668024 disk-uuid[589]: The operation has completed successfully. Jul 6 23:59:42.702611 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:59:42.702683 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:59:42.706333 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:59:42.708470 sh[609]: Success Jul 6 23:59:42.717252 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 6 23:59:42.766533 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:59:42.767399 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:59:42.767706 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
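The sd driver reports sda in both decimal and binary units above (17805312 512-byte logical blocks as 9.12 GB / 8.49 GiB). Reproducing that arithmetic:

```python
# Sketch: reproduce the two sizes the sd driver prints for sda
# (17805312 512-byte logical blocks reported as 9.12 GB / 8.49 GiB).
blocks, block_size = 17805312, 512
size_bytes = blocks * block_size
print(f"{size_bytes / 10**9:.2f} GB")   # decimal units -> 9.12 GB
print(f"{size_bytes / 2**30:.2f} GiB")  # binary units  -> 8.49 GiB
```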
Jul 6 23:59:42.784297 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f Jul 6 23:59:42.784332 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:59:42.784342 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 6 23:59:42.786797 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 6 23:59:42.786809 kernel: BTRFS info (device dm-0): using free space tree Jul 6 23:59:42.792247 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 6 23:59:42.794430 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:59:42.802352 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jul 6 23:59:42.803968 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:59:42.820565 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:42.820601 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:59:42.820610 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:59:42.828250 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:59:42.841893 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 6 23:59:42.842288 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:42.845162 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:59:42.848435 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:59:42.868753 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 6 23:59:42.875363 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:59:42.941245 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:59:42.948394 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:59:42.961256 systemd-networkd[803]: lo: Link UP Jul 6 23:59:42.961261 systemd-networkd[803]: lo: Gained carrier Jul 6 23:59:42.962322 systemd-networkd[803]: Enumeration completed Jul 6 23:59:42.962889 systemd-networkd[803]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 6 23:59:42.963320 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:59:42.963477 systemd[1]: Reached target network.target - Network. 
Jul 6 23:59:42.965669 ignition[670]: Ignition 2.19.0 Jul 6 23:59:42.967076 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 6 23:59:42.967202 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 6 23:59:42.965902 ignition[670]: Stage: fetch-offline Jul 6 23:59:42.965935 ignition[670]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:42.965941 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:59:42.967437 systemd-networkd[803]: ens192: Link UP Jul 6 23:59:42.966001 ignition[670]: parsed url from cmdline: "" Jul 6 23:59:42.967440 systemd-networkd[803]: ens192: Gained carrier Jul 6 23:59:42.966003 ignition[670]: no config URL provided Jul 6 23:59:42.966006 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:59:42.966011 ignition[670]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:59:42.966495 ignition[670]: config successfully fetched Jul 6 23:59:42.966513 ignition[670]: parsing config with SHA512: 4fcd0ca6487191b22d4b14da8cc2493cd529e1f8678e3fc27a3835cf950b1ec126d2d2b0caea4103c15fa7fe3f0388e9c4a994327ae60ee5bf512c2ef227c5a6 Jul 6 23:59:42.970215 unknown[670]: fetched base config from "system" Jul 6 23:59:42.970224 unknown[670]: fetched user config from "vmware" Jul 6 23:59:42.970660 ignition[670]: fetch-offline: fetch-offline passed Jul 6 23:59:42.970703 ignition[670]: Ignition finished successfully Jul 6 23:59:42.971634 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:59:42.971861 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 6 23:59:42.976367 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:59:42.984416 ignition[807]: Ignition 2.19.0 Jul 6 23:59:42.984423 ignition[807]: Stage: kargs Jul 6 23:59:42.984523 ignition[807]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:42.984529 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:59:42.985055 ignition[807]: kargs: kargs passed Jul 6 23:59:42.985080 ignition[807]: Ignition finished successfully Jul 6 23:59:42.986477 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:59:42.990346 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:59:42.998680 ignition[814]: Ignition 2.19.0 Jul 6 23:59:42.998689 ignition[814]: Stage: disks Jul 6 23:59:42.998808 ignition[814]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:42.998815 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:59:42.999447 ignition[814]: disks: disks passed Jul 6 23:59:42.999491 ignition[814]: Ignition finished successfully Jul 6 23:59:43.000238 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:59:43.000453 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:59:43.000569 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:59:43.000763 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:59:43.000952 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:59:43.001127 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:59:43.007341 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
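Ignition logs the SHA512 of the config it parsed ("parsing config with SHA512: ..."). A sketch of computing the same kind of digest for a config saved locally; the file path is hypothetical, since the user config here was fetched from the "vmware" platform provider rather than from a file:

```python
# Sketch: compute the kind of SHA512 digest Ignition logs for a parsed config.
# "user.ign" is a hypothetical local copy; in this boot the user config came
# from the "vmware" provider, as the fetch-offline stage above reports.
import hashlib
import pathlib

config_path = pathlib.Path("user.ign")
digest = hashlib.sha512(config_path.read_bytes()).hexdigest()
print(digest)
```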
Jul 6 23:59:43.153155 systemd-fsck[822]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 6 23:59:43.157766 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:59:43.162346 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:59:43.315387 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none. Jul 6 23:59:43.315871 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:59:43.316365 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:59:43.326380 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:59:43.332759 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:59:43.333264 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 6 23:59:43.333296 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:59:43.333314 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:59:43.338221 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:59:43.338954 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:59:43.381256 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (830) Jul 6 23:59:43.394184 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:43.394222 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:59:43.394245 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:59:43.438315 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:59:43.445354 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:59:43.606769 initrd-setup-root[854]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:59:43.610963 initrd-setup-root[861]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:59:43.622122 initrd-setup-root[868]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:59:43.629025 initrd-setup-root[875]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:59:43.701303 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:59:43.706320 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:59:43.707785 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:59:43.713262 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:43.726138 ignition[942]: INFO : Ignition 2.19.0 Jul 6 23:59:43.726138 ignition[942]: INFO : Stage: mount Jul 6 23:59:43.726138 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:43.726138 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:59:43.726138 ignition[942]: INFO : mount: mount passed Jul 6 23:59:43.726138 ignition[942]: INFO : Ignition finished successfully Jul 6 23:59:43.727323 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:59:43.727551 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:59:43.732451 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:59:43.782444 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
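systemd-fsck reports ROOT as clean with 14/1628000 files and 120691/1617920 blocks in use. Expressed as percentages, purely for illustration:

```python
# Sketch: express the systemd-fsck summary for ROOT
# ("clean, 14/1628000 files, 120691/1617920 blocks") as usage percentages.
files_used, files_total = 14, 1628000
blocks_used, blocks_total = 120691, 1617920
print(f"inodes: {100 * files_used / files_total:.3f}% used")
print(f"blocks: {100 * blocks_used / blocks_total:.2f}% used")
```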
Jul 6 23:59:43.787337 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:59:43.794262 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (956) Jul 6 23:59:43.796612 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:43.796629 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:59:43.796638 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:59:43.800243 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:59:43.800943 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:59:43.814023 ignition[973]: INFO : Ignition 2.19.0 Jul 6 23:59:43.814023 ignition[973]: INFO : Stage: files Jul 6 23:59:43.814513 ignition[973]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:43.814513 ignition[973]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:59:43.814738 ignition[973]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:59:43.815136 ignition[973]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:59:43.815136 ignition[973]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:59:43.817004 ignition[973]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:59:43.817133 ignition[973]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:59:43.817292 unknown[973]: wrote ssh authorized keys file for user: core Jul 6 23:59:43.817477 ignition[973]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:59:43.823044 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 6 23:59:43.823404 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 6 23:59:44.605615 systemd-networkd[803]: ens192: Gained IPv6LL Jul 6 23:59:48.857873 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:59:48.999177 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 6 23:59:48.999177 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:59:48.999751 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:59:49.001475 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:59:49.001475 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:59:49.001475 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:59:49.001475 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:59:49.001475 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 6 23:59:49.987644 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 6 23:59:50.223615 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:59:50.223615 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 6 23:59:50.223615 ignition[973]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 6 23:59:50.223615 ignition[973]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 6 23:59:50.261822 ignition[973]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:59:50.264585 ignition[973]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:59:50.264765 ignition[973]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 6 23:59:50.264765 ignition[973]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 6 
23:59:50.264765 ignition[973]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:59:50.265214 ignition[973]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:59:50.265214 ignition[973]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:59:50.265214 ignition[973]: INFO : files: files passed Jul 6 23:59:50.265214 ignition[973]: INFO : Ignition finished successfully Jul 6 23:59:50.265642 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:59:50.270349 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:59:50.271317 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:59:50.272488 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:59:50.273257 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:59:50.280288 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:59:50.280288 initrd-setup-root-after-ignition[1003]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:59:50.280740 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:59:50.281575 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:59:50.281944 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:59:50.285322 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:59:50.298699 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:59:50.298769 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:59:50.299066 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:59:50.299202 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:59:50.299411 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:59:50.299888 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:59:50.309386 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:59:50.314361 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:59:50.320974 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:59:50.321332 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:59:50.321713 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:59:50.322060 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:59:50.322284 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:59:50.322767 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:59:50.323122 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:59:50.323407 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:59:50.323726 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:59:50.324068 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jul 6 23:59:50.324564 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:59:50.324878 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:59:50.325220 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:59:50.325521 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:59:50.325846 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:59:50.326074 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:59:50.326145 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:59:50.326707 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:59:50.326878 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:59:50.327315 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:59:50.327367 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:59:50.327657 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:59:50.327727 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:59:50.328258 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:59:50.328331 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:59:50.328780 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:59:50.329032 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:59:50.331252 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:59:50.331416 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:59:50.331682 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:59:50.331858 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:59:50.331909 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:59:50.332066 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:59:50.332113 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:59:50.332289 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:59:50.332348 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:59:50.332597 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:59:50.332663 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:59:50.344343 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:59:50.345885 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:59:50.347431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:59:50.347659 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:59:50.347986 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:59:50.348190 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:59:50.352740 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jul 6 23:59:50.352891 ignition[1027]: INFO : Ignition 2.19.0 Jul 6 23:59:50.352891 ignition[1027]: INFO : Stage: umount Jul 6 23:59:50.353259 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:50.353259 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:59:50.353400 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:59:50.355723 ignition[1027]: INFO : umount: umount passed Jul 6 23:59:50.355723 ignition[1027]: INFO : Ignition finished successfully Jul 6 23:59:50.356718 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:59:50.356910 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:59:50.357339 systemd[1]: Stopped target network.target - Network. Jul 6 23:59:50.357547 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:59:50.357672 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:59:50.357889 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:59:50.357912 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:59:50.358272 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:59:50.358296 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:59:50.358507 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:59:50.358529 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:59:50.358810 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:59:50.359195 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:59:50.361963 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:59:50.362023 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:59:50.362361 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:59:50.362384 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:59:50.366331 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:59:50.366430 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:59:50.366458 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:59:50.366584 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jul 6 23:59:50.366606 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 6 23:59:50.366768 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:59:50.371602 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:59:50.371816 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:59:50.372497 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:59:50.372984 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:59:50.373457 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:59:50.373496 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:59:50.373860 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:59:50.373883 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:59:50.374171 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jul 6 23:59:50.374196 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:59:50.376656 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:59:50.376739 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:59:50.377013 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:59:50.377040 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:59:50.377291 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:59:50.377379 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:59:50.377548 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:59:50.377570 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:59:50.377836 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:59:50.377858 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:59:50.378137 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:59:50.378159 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:59:50.382382 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:59:50.382492 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:59:50.382520 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:59:50.382811 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:59:50.382835 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:59:50.382953 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:59:50.382975 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:59:50.383098 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:59:50.383128 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:59:50.383799 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:59:50.385605 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:59:50.386409 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:59:50.634561 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:59:50.634863 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:59:50.635306 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:59:50.635607 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:59:50.635806 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:59:50.640422 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:59:50.651999 systemd[1]: Switching root. Jul 6 23:59:50.686221 systemd-journald[215]: Journal stopped Jul 6 23:59:52.111776 systemd-journald[215]: Received SIGTERM from PID 1 (systemd). 
Jul 6 23:59:52.111802 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:59:52.111810 kernel: SELinux: policy capability open_perms=1 Jul 6 23:59:52.111816 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:59:52.111821 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:59:52.111826 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:59:52.111835 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:59:52.111841 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:59:52.111847 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:59:52.111857 kernel: audit: type=1403 audit(1751846391.378:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:59:52.111869 systemd[1]: Successfully loaded SELinux policy in 33.835ms. Jul 6 23:59:52.111881 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.686ms. Jul 6 23:59:52.111894 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:59:52.111909 systemd[1]: Detected virtualization vmware. Jul 6 23:59:52.111917 systemd[1]: Detected architecture x86-64. Jul 6 23:59:52.111923 systemd[1]: Detected first boot. Jul 6 23:59:52.111930 systemd[1]: Initializing machine ID from random generator. Jul 6 23:59:52.111938 zram_generator::config[1070]: No configuration found. Jul 6 23:59:52.111945 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:59:52.111952 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 6 23:59:52.111959 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Jul 6 23:59:52.111966 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:59:52.111972 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:59:52.111979 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:59:52.111986 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:59:52.111993 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:59:52.112000 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:59:52.112007 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:59:52.112013 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:59:52.112020 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:59:52.112027 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:59:52.112034 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:59:52.112041 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:59:52.112048 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:59:52.112055 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
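The escape-sequence warning above comes from the coreos-metadata drop-in, which shells out to ip and grep to derive COREOS_CUSTOM_PRIVATE_IPV4 and COREOS_CUSTOM_PUBLIC_IPV4 for ens192. A rough Python equivalent of that classification is sketched below; it assumes the same interface name and the same "addresses under 10.0.0.0/8 are private" heuristic that the unit's grep encodes, and it only prints the values instead of writing them into the unit's ${OUTPUT} file.

# Sketch of the address split the coreos-metadata drop-in performs with ip/grep:
# collect IPv4 addresses on ens192 and treat 10.x addresses as private.
import ipaddress
import subprocess

def ipv4_addresses(ifname: str = "ens192"):
    # `ip -o -4 addr show ens192` prints one line per address, e.g.
    # "2: ens192    inet 10.0.0.5/24 brd 10.0.0.255 scope global ens192 ..."
    out = subprocess.run(
        ["ip", "-o", "-4", "addr", "show", ifname],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split()[3].split("/")[0] for line in out.splitlines()]

def classify(addrs):
    ten_net = ipaddress.ip_network("10.0.0.0/8")
    private = [a for a in addrs if ipaddress.ip_address(a) in ten_net]
    public = [a for a in addrs if a not in private]
    return private, public

if __name__ == "__main__":
    priv, pub = classify(ipv4_addresses())
    print(f"COREOS_CUSTOM_PRIVATE_IPV4={priv[0] if priv else ''}")
    print(f"COREOS_CUSTOM_PUBLIC_IPV4={pub[0] if pub else ''}")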
Jul 6 23:59:52.112061 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:59:52.112068 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:59:52.112074 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:59:52.112081 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:59:52.112089 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:59:52.112097 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:59:52.112105 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:59:52.112112 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:59:52.112119 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:59:52.112126 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:59:52.112133 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:59:52.112139 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:59:52.112147 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:59:52.112154 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:59:52.112161 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:59:52.112168 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:59:52.112175 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:59:52.112183 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:59:52.112190 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:59:52.112197 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:59:52.112204 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:59:52.112211 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:59:52.112218 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:59:52.112224 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:59:52.112242 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:59:52.112254 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:59:52.112261 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:59:52.112269 systemd[1]: Reached target machines.target - Containers. Jul 6 23:59:52.112276 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:59:52.112283 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Jul 6 23:59:52.112290 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:59:52.112297 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:59:52.112304 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:59:52.112312 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 6 23:59:52.112319 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:59:52.112327 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:59:52.112334 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:59:52.112341 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:59:52.112348 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:59:52.112355 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:59:52.112361 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:59:52.112368 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:59:52.112376 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:59:52.112387 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:59:52.112399 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:59:52.112409 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:59:52.112427 systemd-journald[1164]: Collecting audit messages is disabled. Jul 6 23:59:52.112447 systemd-journald[1164]: Journal started Jul 6 23:59:52.112468 systemd-journald[1164]: Runtime Journal (/run/log/journal/580624aade2348119c71ff38e042719e) is 4.8M, max 38.6M, 33.8M free. Jul 6 23:59:51.909250 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:59:51.955554 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 6 23:59:51.955835 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:59:52.113031 jq[1137]: true Jul 6 23:59:52.117976 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:59:52.117997 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:59:52.118008 systemd[1]: Stopped verity-setup.service. Jul 6 23:59:52.121440 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:59:52.121462 kernel: loop: module loaded Jul 6 23:59:52.121472 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:59:52.122006 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:59:52.122196 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:59:52.122386 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:59:52.122552 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:59:52.122736 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:59:52.122909 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:59:52.123162 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:59:52.126247 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:59:52.126503 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:59:52.126581 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:59:52.126800 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:59:52.126871 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jul 6 23:59:52.127133 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:59:52.127206 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:59:52.127628 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:59:52.133610 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:59:52.133919 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:59:52.134292 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:59:52.137125 jq[1179]: true Jul 6 23:59:52.141885 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:59:52.145462 kernel: fuse: init (API version 7.39) Jul 6 23:59:52.147311 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:59:52.147446 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:59:52.147465 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:59:52.148319 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 6 23:59:52.152911 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:59:52.154369 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:59:52.154618 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:59:52.165605 kernel: ACPI: bus type drm_connector registered Jul 6 23:59:52.164870 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:59:52.169177 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:59:52.169318 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:59:52.171151 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:59:52.171624 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:59:52.180360 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:59:52.181374 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:59:52.185645 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:59:52.189309 systemd-journald[1164]: Time spent on flushing to /var/log/journal/580624aade2348119c71ff38e042719e is 63.180ms for 1827 entries. Jul 6 23:59:52.189309 systemd-journald[1164]: System Journal (/var/log/journal/580624aade2348119c71ff38e042719e) is 8.0M, max 584.8M, 576.8M free. Jul 6 23:59:52.262424 systemd-journald[1164]: Received client request to flush runtime journal. Jul 6 23:59:52.262449 kernel: loop0: detected capacity change from 0 to 229808 Jul 6 23:59:52.186561 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:59:52.186839 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:59:52.186922 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:59:52.187166 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
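For context on the journal sizes reported above (a 4.8M runtime journal under /run/log/journal and an 8.0M system journal under /var/log/journal once the flush finishes), the sketch below approximates those figures by summing journal files on disk. The directory layout is the standard journald one; the total will differ somewhat from journald's own accounting.

# Sketch: approximate the "Runtime Journal"/"System Journal" usage numbers that
# journald logs by adding up the *.journal files under the standard locations.
from pathlib import Path

JOURNAL_DIRS = [
    Path("/run/log/journal"),  # runtime journal (tmpfs)
    Path("/var/log/journal"),  # persistent journal, written after the flush
]

def journal_usage_bytes(root: Path) -> int:
    if not root.is_dir():
        return 0
    return sum(f.stat().st_size for f in root.rglob("*.journal*") if f.is_file())

if __name__ == "__main__":
    for root in JOURNAL_DIRS:
        print(f"{root}: {journal_usage_bytes(root) / (1024 * 1024):.1f} MiB")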
Jul 6 23:59:52.187383 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:59:52.187609 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:59:52.206382 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:59:52.210852 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:59:52.211612 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:59:52.236357 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:59:52.236622 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:59:52.244567 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 6 23:59:52.263726 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:59:52.270095 ignition[1191]: Ignition 2.19.0 Jul 6 23:59:52.270479 ignition[1191]: deleting config from guestinfo properties Jul 6 23:59:52.310254 ignition[1191]: Successfully deleted config Jul 6 23:59:52.311859 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Jul 6 23:59:52.318946 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:59:52.321020 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:59:52.321485 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 6 23:59:52.330326 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jul 6 23:59:52.330346 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jul 6 23:59:52.338218 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:59:52.336692 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:59:52.345457 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:59:52.363066 kernel: loop1: detected capacity change from 0 to 2976 Jul 6 23:59:52.377753 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:59:52.382311 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:59:52.388968 udevadm[1234]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 6 23:59:52.472688 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:59:52.478453 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:59:52.479341 kernel: loop2: detected capacity change from 0 to 142488 Jul 6 23:59:52.487781 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Jul 6 23:59:52.487971 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Jul 6 23:59:52.491065 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:59:52.526251 kernel: loop3: detected capacity change from 0 to 140768 Jul 6 23:59:52.669262 kernel: loop4: detected capacity change from 0 to 229808 Jul 6 23:59:52.724452 kernel: loop5: detected capacity change from 0 to 2976 Jul 6 23:59:52.739255 kernel: loop6: detected capacity change from 0 to 142488 Jul 6 23:59:52.759246 kernel: loop7: detected capacity change from 0 to 140768 Jul 6 23:59:52.796176 (sd-merge)[1242]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. 
Jul 6 23:59:52.796458 (sd-merge)[1242]: Merged extensions into '/usr'. Jul 6 23:59:52.800544 systemd[1]: Reloading requested from client PID 1204 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:59:52.800557 systemd[1]: Reloading... Jul 6 23:59:52.855255 zram_generator::config[1265]: No configuration found. Jul 6 23:59:52.950203 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 6 23:59:52.967774 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:59:52.995986 systemd[1]: Reloading finished in 195 ms. Jul 6 23:59:53.019394 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:59:53.019706 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:59:53.026451 systemd[1]: Starting ensure-sysext.service... Jul 6 23:59:53.027583 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:59:53.029318 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:59:53.045076 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:59:53.045479 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:59:53.046036 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:59:53.046067 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Jul 6 23:59:53.046414 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Jul 6 23:59:53.046494 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Jul 6 23:59:53.059654 systemd[1]: Reloading requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:59:53.059665 systemd[1]: Reloading... Jul 6 23:59:53.065705 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:59:53.065710 systemd-tmpfiles[1326]: Skipping /boot Jul 6 23:59:53.070940 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:59:53.070987 systemd-tmpfiles[1326]: Skipping /boot Jul 6 23:59:53.090677 ldconfig[1200]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:59:53.121249 zram_generator::config[1369]: No configuration found. Jul 6 23:59:53.178267 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 6 23:59:53.186269 kernel: ACPI: button: Power Button [PWRF] Jul 6 23:59:53.229261 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1367) Jul 6 23:59:53.248977 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 6 23:59:53.267627 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! 
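The loop devices registered earlier back the extension images that systemd-sysext merges into /usr here ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'). The sketch below shows how such images are typically discovered; the search directories and their precedence follow the usual sysext hierarchy but should be treated as an assumption and checked against systemd-sysext(8).

# Sketch: enumerate candidate system-extension images the way systemd-sysext
# discovers them before merging. On this host the kubernetes image is reached
# through the /etc/extensions/kubernetes.raw symlink written by Ignition.
from pathlib import Path

SEARCH_DIRS = [
    "/etc/extensions",
    "/run/extensions",
    "/var/lib/extensions",
    "/usr/lib/extensions",
]

def list_extension_images():
    images = {}
    for directory in map(Path, SEARCH_DIRS):
        if not directory.is_dir():
            continue
        for entry in sorted(directory.iterdir()):
            # Raw disk images (*.raw) and plain directory trees are both accepted;
            # the first directory in the list that provides a name wins (simplified).
            if entry.suffix == ".raw" or entry.is_dir():
                images.setdefault(entry.name.removesuffix(".raw"), entry)
    return images

if __name__ == "__main__":
    for name, path in sorted(list_extension_images().items()):
        print(f"{name}: {path}")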
Jul 6 23:59:53.277367 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:59:53.297259 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jul 6 23:59:53.323274 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jul 6 23:59:53.329624 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 6 23:59:53.329763 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 6 23:59:53.330236 systemd[1]: Reloading finished in 270 ms. Jul 6 23:59:53.332125 (udev-worker)[1367]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 6 23:59:53.333274 kernel: Guest personality initialized and is active Jul 6 23:59:53.335340 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 6 23:59:53.335366 kernel: Initialized host personality Jul 6 23:59:53.339742 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:59:53.346112 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:59:53.346802 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:59:53.347488 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:59:53.365554 systemd[1]: Finished ensure-sysext.service. Jul 6 23:59:53.367937 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:59:53.377769 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:59:53.387328 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:59:53.390136 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:59:53.391220 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:59:53.393083 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:59:53.394583 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:59:53.395764 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:59:53.406385 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:59:53.406612 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:59:53.407886 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:59:53.409616 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:59:53.411524 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:59:53.414000 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:59:53.417202 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:59:53.426566 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:59:53.430106 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:59:53.438355 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 6 23:59:53.438509 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:59:53.439029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:59:53.439132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:59:53.439441 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:59:53.439523 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:59:53.439749 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:59:53.439866 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:59:53.440123 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:59:53.440199 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:59:53.443868 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:59:53.443910 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:59:53.446172 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:59:53.447739 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:59:53.447965 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:59:53.455801 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:59:53.457266 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:59:53.462022 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:59:53.465520 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:59:53.481962 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:59:53.487578 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:59:53.503825 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:59:53.504592 augenrules[1494]: No rules Jul 6 23:59:53.511399 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:59:53.511791 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:59:53.521406 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:59:53.521904 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:59:53.524732 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:59:53.552280 systemd-networkd[1462]: lo: Link UP Jul 6 23:59:53.552286 systemd-networkd[1462]: lo: Gained carrier Jul 6 23:59:53.553328 systemd-networkd[1462]: Enumeration completed Jul 6 23:59:53.553575 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:59:53.553909 systemd-networkd[1462]: ens192: Configuring with /etc/systemd/network/00-vmware.network. 
Jul 6 23:59:53.555350 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 6 23:59:53.555471 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 6 23:59:53.556220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:59:53.557554 systemd-networkd[1462]: ens192: Link UP Jul 6 23:59:53.557655 systemd-networkd[1462]: ens192: Gained carrier Jul 6 23:59:53.561441 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:59:53.566897 systemd-resolved[1463]: Positive Trust Anchors: Jul 6 23:59:53.566906 systemd-resolved[1463]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:59:53.566927 systemd-resolved[1463]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:59:53.569079 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:59:53.569305 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:59:53.570365 systemd-resolved[1463]: Defaulting to hostname 'linux'. Jul 6 23:59:53.571340 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:59:53.571505 systemd[1]: Reached target network.target - Network. Jul 6 23:59:53.571631 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:59:53.571778 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:59:53.571924 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:59:53.572118 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:59:53.572317 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:59:53.572519 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:59:53.572666 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:59:53.572773 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:59:53.572787 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:59:53.572956 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:59:53.573513 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:59:53.574576 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:59:53.581458 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:59:53.581950 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:59:53.582112 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:59:53.582303 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:59:53.582414 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
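The ens192 interface above is matched by the 00-vmware.network file Ignition wrote earlier, and the vmxnet3 driver reports the link up at 10000 Mbps just before networkd declares carrier. The sketch below reads the same link facts directly from sysfs; the attribute names are standard kernel net-class attributes, and the interface name is simply the one from this log.

# Sketch: read the link facts systemd-networkd logs for ens192 (operational
# state, carrier, speed, MAC) from /sys/class/net.
from pathlib import Path

def link_info(ifname: str = "ens192") -> dict:
    base = Path("/sys/class/net") / ifname

    def read(attr: str) -> str:
        try:
            return (base / attr).read_text().strip()
        except OSError:
            # "carrier" and "speed" raise EINVAL while the link is down.
            return "unknown"

    return {
        "operstate": read("operstate"),  # e.g. "up"
        "carrier": read("carrier"),      # "1" once carrier is gained
        "speed_mbps": read("speed"),     # e.g. "10000" for the vmxnet3 NIC above
        "address": read("address"),      # MAC address
    }

if __name__ == "__main__":
    for key, value in link_info().items():
        print(f"{key}: {value}")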
Jul 6 23:59:53.582427 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:59:53.583208 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:59:53.585361 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:59:53.588775 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:59:53.590829 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:59:53.590948 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:59:53.593750 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:59:53.594059 jq[1514]: false Jul 6 23:59:53.601308 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:59:53.602961 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:59:53.604463 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:59:53.607151 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:59:53.608170 extend-filesystems[1515]: Found loop4 Jul 6 23:59:53.608170 extend-filesystems[1515]: Found loop5 Jul 6 23:59:53.608170 extend-filesystems[1515]: Found loop6 Jul 6 23:59:53.608170 extend-filesystems[1515]: Found loop7 Jul 6 23:59:53.608170 extend-filesystems[1515]: Found sda Jul 6 23:59:53.608170 extend-filesystems[1515]: Found sda1 Jul 6 23:59:53.608170 extend-filesystems[1515]: Found sda2 Jul 6 23:59:53.608170 extend-filesystems[1515]: Found sda3 Jul 6 23:59:53.608170 extend-filesystems[1515]: Found usr Jul 6 23:59:53.608170 extend-filesystems[1515]: Found sda4 Jul 6 23:59:53.608170 extend-filesystems[1515]: Found sda6 Jul 6 23:59:53.608170 extend-filesystems[1515]: Found sda7 Jul 6 23:59:53.607469 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:59:53.622437 dbus-daemon[1513]: [system] SELinux support is enabled Jul 6 23:59:53.624452 extend-filesystems[1515]: Found sda9 Jul 6 23:59:53.624452 extend-filesystems[1515]: Checking size of /dev/sda9 Jul 6 23:59:53.607858 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:59:53.610295 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:59:53.611082 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:59:53.627466 jq[1526]: true Jul 6 23:59:53.619314 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jul 6 23:59:53.625815 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:59:53.630419 extend-filesystems[1515]: Old size kept for /dev/sda9 Jul 6 23:59:53.630543 extend-filesystems[1515]: Found sr0 Jul 6 23:59:53.632721 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:59:53.632821 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:59:53.632995 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:59:53.633083 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:59:53.633397 systemd[1]: motdgen.service: Deactivated successfully. 
Jul 6 23:59:53.633486 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:59:53.635587 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:59:53.635685 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:59:53.639312 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Jul 6 23:59:53.646156 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:59:53.646184 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:59:53.646417 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:59:53.646877 update_engine[1523]: I20250706 23:59:53.646615 1523 main.cc:92] Flatcar Update Engine starting Jul 6 23:59:53.646427 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:59:53.651355 update_engine[1523]: I20250706 23:59:53.651306 1523 update_check_scheduler.cc:74] Next update check in 3m7s Jul 6 23:59:53.653373 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Jul 6 23:59:53.656013 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:59:53.657247 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1366) Jul 6 23:59:53.658371 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:59:53.666250 (ntainerd)[1551]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:01:19.834387 systemd-timesyncd[1464]: Contacted time server 23.150.41.122:123 (0.flatcar.pool.ntp.org). Jul 7 00:01:19.835685 systemd-timesyncd[1464]: Initial clock synchronization to Mon 2025-07-07 00:01:19.834191 UTC. Jul 7 00:01:19.836371 systemd-resolved[1463]: Clock change detected. Flushing caches. Jul 7 00:01:19.838268 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Jul 7 00:01:19.841203 systemd[1]: Starting mdadm.service - Initiates a check run of an MD array's redundancy information.... Jul 7 00:01:19.842944 jq[1538]: true Jul 7 00:01:19.860251 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Jul 7 00:01:19.862298 tar[1537]: linux-amd64/LICENSE Jul 7 00:01:19.862298 tar[1537]: linux-amd64/helm Jul 7 00:01:19.872267 systemd[1]: logrotate.service: Deactivated successfully. Jul 7 00:01:19.893430 unknown[1539]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jul 7 00:01:19.896043 systemd[1]: mdadm.service: Deactivated successfully. Jul 7 00:01:19.896164 systemd[1]: Finished mdadm.service - Initiates a check run of an MD array's redundancy information.. Jul 7 00:01:19.903671 unknown[1539]: Core dump limit set to -1 Jul 7 00:01:19.932498 systemd-logind[1521]: Watching system buttons on /dev/input/event1 (Power Button) Jul 7 00:01:19.938942 kernel: NET: Registered PF_VSOCK protocol family Jul 7 00:01:19.939036 systemd-logind[1521]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 00:01:19.940586 systemd-logind[1521]: New seat seat0. 
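The jump from Jul 6 23:59 to Jul 7 00:01 in the timestamps above is systemd-timesyncd stepping the clock after reaching 0.flatcar.pool.ntp.org, which is also why systemd-resolved flushes its caches. The sketch below shows what that exchange reduces to as a minimal SNTP query; the pool hostname is taken from the log, the packet layout is the standard 48-byte NTP header, and real clients add validation and retries that this sketch omits.

# Sketch: minimal SNTP query against the NTP pool named in the log. Only the
# server's transmit timestamp (bytes 40-47 of the reply) is decoded.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

def sntp_time(server: str = "0.flatcar.pool.ntp.org", timeout: float = 5.0) -> float:
    request = b"\x23" + 47 * b"\x00"  # LI=0, VN=4, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, 123))
        reply, _ = sock.recvfrom(48)
    seconds, fraction = struct.unpack("!II", reply[40:48])
    return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

if __name__ == "__main__":
    remote = sntp_time()
    print(f"server time: {remote:.3f}  local offset: {remote - time.time():+.3f}s")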
Jul 7 00:01:19.945556 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:01:19.963481 bash[1583]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:01:19.962732 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:01:19.963316 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 00:01:20.028626 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:01:20.050392 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:01:20.062701 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 00:01:20.072567 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:01:20.081590 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:01:20.081717 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:01:20.089647 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:01:20.105884 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:01:20.111362 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:01:20.113452 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 00:01:20.114236 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 00:01:20.133977 containerd[1551]: time="2025-07-07T00:01:20.133836267Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 00:01:20.160134 containerd[1551]: time="2025-07-07T00:01:20.160045816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 00:01:20.161134 containerd[1551]: time="2025-07-07T00:01:20.161012381Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:01:20.161134 containerd[1551]: time="2025-07-07T00:01:20.161037162Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 00:01:20.161134 containerd[1551]: time="2025-07-07T00:01:20.161049130Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 00:01:20.161577 containerd[1551]: time="2025-07-07T00:01:20.161559686Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 00:01:20.161605 containerd[1551]: time="2025-07-07T00:01:20.161579437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 00:01:20.161643 containerd[1551]: time="2025-07-07T00:01:20.161626232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:01:20.161643 containerd[1551]: time="2025-07-07T00:01:20.161639738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 00:01:20.161776 containerd[1551]: time="2025-07-07T00:01:20.161757994Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:01:20.161798 containerd[1551]: time="2025-07-07T00:01:20.161774757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 00:01:20.161798 containerd[1551]: time="2025-07-07T00:01:20.161786583Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:01:20.161798 containerd[1551]: time="2025-07-07T00:01:20.161795180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 00:01:20.161852 containerd[1551]: time="2025-07-07T00:01:20.161842239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 00:01:20.161994 containerd[1551]: time="2025-07-07T00:01:20.161979082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 00:01:20.162073 containerd[1551]: time="2025-07-07T00:01:20.162055304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:01:20.162095 containerd[1551]: time="2025-07-07T00:01:20.162072667Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 00:01:20.163296 containerd[1551]: time="2025-07-07T00:01:20.163166231Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 00:01:20.163296 containerd[1551]: time="2025-07-07T00:01:20.163210369Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:01:20.164596 containerd[1551]: time="2025-07-07T00:01:20.164581628Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 00:01:20.164659 containerd[1551]: time="2025-07-07T00:01:20.164647446Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 00:01:20.164721 containerd[1551]: time="2025-07-07T00:01:20.164712718Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 00:01:20.164767 containerd[1551]: time="2025-07-07T00:01:20.164756897Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 00:01:20.164818 containerd[1551]: time="2025-07-07T00:01:20.164807455Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 00:01:20.164943 containerd[1551]: time="2025-07-07T00:01:20.164912495Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 00:01:20.165529 containerd[1551]: time="2025-07-07T00:01:20.165111714Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 00:01:20.165529 containerd[1551]: time="2025-07-07T00:01:20.165220934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jul 7 00:01:20.165529 containerd[1551]: time="2025-07-07T00:01:20.165236931Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 00:01:20.165529 containerd[1551]: time="2025-07-07T00:01:20.165249155Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 00:01:20.165529 containerd[1551]: time="2025-07-07T00:01:20.165262748Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 00:01:20.165529 containerd[1551]: time="2025-07-07T00:01:20.165273545Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 00:01:20.165529 containerd[1551]: time="2025-07-07T00:01:20.165283520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 00:01:20.165529 containerd[1551]: time="2025-07-07T00:01:20.165293316Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 00:01:20.165529 containerd[1551]: time="2025-07-07T00:01:20.165301229Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 00:01:20.165529 containerd[1551]: time="2025-07-07T00:01:20.165310955Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 00:01:20.165529 containerd[1551]: time="2025-07-07T00:01:20.165318584Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 00:01:20.165529 containerd[1551]: time="2025-07-07T00:01:20.165325473Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 00:01:20.165529 containerd[1551]: time="2025-07-07T00:01:20.165341733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 00:01:20.165529 containerd[1551]: time="2025-07-07T00:01:20.165352691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 00:01:20.165764 containerd[1551]: time="2025-07-07T00:01:20.165360489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 00:01:20.165764 containerd[1551]: time="2025-07-07T00:01:20.165372341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 00:01:20.165764 containerd[1551]: time="2025-07-07T00:01:20.165379835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 00:01:20.165764 containerd[1551]: time="2025-07-07T00:01:20.165387515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 00:01:20.165764 containerd[1551]: time="2025-07-07T00:01:20.165397777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 00:01:20.165764 containerd[1551]: time="2025-07-07T00:01:20.165409851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 00:01:20.165764 containerd[1551]: time="2025-07-07T00:01:20.165428056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jul 7 00:01:20.165764 containerd[1551]: time="2025-07-07T00:01:20.165437639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 00:01:20.165764 containerd[1551]: time="2025-07-07T00:01:20.165444245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 00:01:20.165764 containerd[1551]: time="2025-07-07T00:01:20.165451137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 00:01:20.165764 containerd[1551]: time="2025-07-07T00:01:20.165460571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 00:01:20.165764 containerd[1551]: time="2025-07-07T00:01:20.165469224Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 00:01:20.165764 containerd[1551]: time="2025-07-07T00:01:20.165485558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 00:01:20.165764 containerd[1551]: time="2025-07-07T00:01:20.165494067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 00:01:20.165764 containerd[1551]: time="2025-07-07T00:01:20.165500034Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 00:01:20.166037 containerd[1551]: time="2025-07-07T00:01:20.166026489Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 00:01:20.166124 containerd[1551]: time="2025-07-07T00:01:20.166106179Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 00:01:20.166690 containerd[1551]: time="2025-07-07T00:01:20.166158944Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 00:01:20.166690 containerd[1551]: time="2025-07-07T00:01:20.166170632Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 00:01:20.166690 containerd[1551]: time="2025-07-07T00:01:20.166178686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 00:01:20.166690 containerd[1551]: time="2025-07-07T00:01:20.166187370Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 00:01:20.166690 containerd[1551]: time="2025-07-07T00:01:20.166196293Z" level=info msg="NRI interface is disabled by configuration." Jul 7 00:01:20.166690 containerd[1551]: time="2025-07-07T00:01:20.166206626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 7 00:01:20.166790 containerd[1551]: time="2025-07-07T00:01:20.166373782Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 00:01:20.166790 containerd[1551]: time="2025-07-07T00:01:20.166409935Z" level=info msg="Connect containerd service" Jul 7 00:01:20.166790 containerd[1551]: time="2025-07-07T00:01:20.166429679Z" level=info msg="using legacy CRI server" Jul 7 00:01:20.166790 containerd[1551]: time="2025-07-07T00:01:20.166435129Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:01:20.166790 containerd[1551]: time="2025-07-07T00:01:20.166491422Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 00:01:20.167178 containerd[1551]: time="2025-07-07T00:01:20.167161458Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:01:20.167323 
containerd[1551]: time="2025-07-07T00:01:20.167301956Z" level=info msg="Start subscribing containerd event" Jul 7 00:01:20.167373 containerd[1551]: time="2025-07-07T00:01:20.167361281Z" level=info msg="Start recovering state" Jul 7 00:01:20.167627 containerd[1551]: time="2025-07-07T00:01:20.167615814Z" level=info msg="Start event monitor" Jul 7 00:01:20.167678 containerd[1551]: time="2025-07-07T00:01:20.167669109Z" level=info msg="Start snapshots syncer" Jul 7 00:01:20.167711 containerd[1551]: time="2025-07-07T00:01:20.167704931Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:01:20.167747 containerd[1551]: time="2025-07-07T00:01:20.167740853Z" level=info msg="Start streaming server" Jul 7 00:01:20.167826 containerd[1551]: time="2025-07-07T00:01:20.167571448Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:01:20.167897 containerd[1551]: time="2025-07-07T00:01:20.167888625Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 00:01:20.168008 containerd[1551]: time="2025-07-07T00:01:20.168000109Z" level=info msg="containerd successfully booted in 0.034758s" Jul 7 00:01:20.168055 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:01:20.341230 tar[1537]: linux-amd64/README.md Jul 7 00:01:20.349681 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:01:21.204213 systemd-networkd[1462]: ens192: Gained IPv6LL Jul 7 00:01:21.205642 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:01:21.206083 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:01:21.210293 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jul 7 00:01:21.215559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:21.218814 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:01:21.235382 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 00:01:21.243237 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 00:01:21.243375 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Jul 7 00:01:21.243759 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 00:01:22.720983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:22.721493 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 00:01:22.722049 systemd[1]: Startup finished in 975ms (kernel) + 10.751s (initrd) + 5.209s (userspace) = 16.936s. Jul 7 00:01:22.725979 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:01:22.820665 login[1630]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 00:01:22.822368 login[1633]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 00:01:22.829770 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:01:22.835313 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:01:22.838320 systemd-logind[1521]: New session 1 of user core. Jul 7 00:01:22.841585 systemd-logind[1521]: New session 2 of user core. Jul 7 00:01:22.845346 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jul 7 00:01:22.851319 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:01:22.856134 (systemd)[1705]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:01:22.937852 systemd[1705]: Queued start job for default target default.target. Jul 7 00:01:22.943241 systemd[1705]: Created slice app.slice - User Application Slice. Jul 7 00:01:22.943266 systemd[1705]: Reached target paths.target - Paths. Jul 7 00:01:22.943279 systemd[1705]: Reached target timers.target - Timers. Jul 7 00:01:22.944187 systemd[1705]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:01:22.952623 systemd[1705]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:01:22.952660 systemd[1705]: Reached target sockets.target - Sockets. Jul 7 00:01:22.952670 systemd[1705]: Reached target basic.target - Basic System. Jul 7 00:01:22.952693 systemd[1705]: Reached target default.target - Main User Target. Jul 7 00:01:22.952710 systemd[1705]: Startup finished in 93ms. Jul 7 00:01:22.952805 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 00:01:22.953858 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 00:01:22.955224 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:01:23.749593 kubelet[1698]: E0707 00:01:23.749546 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:01:23.751353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:01:23.751470 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:01:34.001716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 00:01:34.009272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:34.322162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:34.324691 (kubelet)[1749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:01:34.355436 kubelet[1749]: E0707 00:01:34.355401 1749 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:01:34.357611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:01:34.357707 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:01:44.607992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 00:01:44.616247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:44.953889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
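Each kubelet start above exits with the same error: /var/lib/kubelet/config.yaml cannot be read because it does not exist yet; on a kubeadm-managed node that file is normally written by kubeadm init or kubeadm join, which has not run at this point in the log. A minimal diagnostic sketch, assuming only the path quoted in the error message:

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")   # path quoted in the kubelet error above

    if KUBELET_CONFIG.is_file():
        print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes); kubelet can load its config")
    else:
        # Matches the failures in this log: kubelet exits with status 1 and systemd keeps restarting it.
        print(f"{KUBELET_CONFIG} missing; expect kubelet.service to restart and fail again")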
Jul 7 00:01:44.956465 (kubelet)[1764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:01:44.999173 kubelet[1764]: E0707 00:01:44.999109 1764 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:01:45.000544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:01:45.000633 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:01:50.061456 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:01:50.062686 systemd[1]: Started sshd@0-139.178.70.105:22-139.178.68.195:60640.service - OpenSSH per-connection server daemon (139.178.68.195:60640). Jul 7 00:01:50.094181 sshd[1772]: Accepted publickey for core from 139.178.68.195 port 60640 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k Jul 7 00:01:50.095038 sshd[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:50.098769 systemd-logind[1521]: New session 3 of user core. Jul 7 00:01:50.106350 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 00:01:50.169179 systemd[1]: Started sshd@1-139.178.70.105:22-139.178.68.195:60654.service - OpenSSH per-connection server daemon (139.178.68.195:60654). Jul 7 00:01:50.194716 sshd[1777]: Accepted publickey for core from 139.178.68.195 port 60654 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k Jul 7 00:01:50.195636 sshd[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:50.198482 systemd-logind[1521]: New session 4 of user core. Jul 7 00:01:50.207225 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 00:01:50.258040 sshd[1777]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:50.272270 systemd[1]: sshd@1-139.178.70.105:22-139.178.68.195:60654.service: Deactivated successfully. Jul 7 00:01:50.273560 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 00:01:50.274754 systemd-logind[1521]: Session 4 logged out. Waiting for processes to exit. Jul 7 00:01:50.278298 systemd[1]: Started sshd@2-139.178.70.105:22-139.178.68.195:60668.service - OpenSSH per-connection server daemon (139.178.68.195:60668). Jul 7 00:01:50.279152 systemd-logind[1521]: Removed session 4. Jul 7 00:01:50.303376 sshd[1784]: Accepted publickey for core from 139.178.68.195 port 60668 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k Jul 7 00:01:50.304390 sshd[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:50.307009 systemd-logind[1521]: New session 5 of user core. Jul 7 00:01:50.316216 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 00:01:50.363012 sshd[1784]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:50.371902 systemd[1]: sshd@2-139.178.70.105:22-139.178.68.195:60668.service: Deactivated successfully. Jul 7 00:01:50.372842 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 00:01:50.373706 systemd-logind[1521]: Session 5 logged out. Waiting for processes to exit. Jul 7 00:01:50.378363 systemd[1]: Started sshd@3-139.178.70.105:22-139.178.68.195:60670.service - OpenSSH per-connection server daemon (139.178.68.195:60670). 
Jul 7 00:01:50.381257 systemd-logind[1521]: Removed session 5. Jul 7 00:01:50.401193 sshd[1791]: Accepted publickey for core from 139.178.68.195 port 60670 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k Jul 7 00:01:50.401892 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:50.404547 systemd-logind[1521]: New session 6 of user core. Jul 7 00:01:50.406226 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 00:01:50.455986 sshd[1791]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:50.466903 systemd[1]: sshd@3-139.178.70.105:22-139.178.68.195:60670.service: Deactivated successfully. Jul 7 00:01:50.467776 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:01:50.468750 systemd-logind[1521]: Session 6 logged out. Waiting for processes to exit. Jul 7 00:01:50.474477 systemd[1]: Started sshd@4-139.178.70.105:22-139.178.68.195:60674.service - OpenSSH per-connection server daemon (139.178.68.195:60674). Jul 7 00:01:50.477301 systemd-logind[1521]: Removed session 6. Jul 7 00:01:50.500772 sshd[1798]: Accepted publickey for core from 139.178.68.195 port 60674 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k Jul 7 00:01:50.501835 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:50.505160 systemd-logind[1521]: New session 7 of user core. Jul 7 00:01:50.512212 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 00:01:50.568983 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:01:50.569213 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:01:50.578469 sudo[1801]: pam_unix(sudo:session): session closed for user root Jul 7 00:01:50.579886 sshd[1798]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:50.587832 systemd[1]: sshd@4-139.178.70.105:22-139.178.68.195:60674.service: Deactivated successfully. Jul 7 00:01:50.588815 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:01:50.589746 systemd-logind[1521]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:01:50.594275 systemd[1]: Started sshd@5-139.178.70.105:22-139.178.68.195:60690.service - OpenSSH per-connection server daemon (139.178.68.195:60690). Jul 7 00:01:50.594652 systemd-logind[1521]: Removed session 7. Jul 7 00:01:50.617587 sshd[1806]: Accepted publickey for core from 139.178.68.195 port 60690 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k Jul 7 00:01:50.618447 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:50.620904 systemd-logind[1521]: New session 8 of user core. Jul 7 00:01:50.629201 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:01:50.678603 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:01:50.679026 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:01:50.681322 sudo[1810]: pam_unix(sudo:session): session closed for user root Jul 7 00:01:50.684965 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 00:01:50.685176 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:01:50.694447 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jul 7 00:01:50.695983 auditctl[1813]: No rules Jul 7 00:01:50.696260 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:01:50.696410 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 00:01:50.698169 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 00:01:50.717134 augenrules[1831]: No rules Jul 7 00:01:50.718060 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 00:01:50.718920 sudo[1809]: pam_unix(sudo:session): session closed for user root Jul 7 00:01:50.719858 sshd[1806]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:50.723568 systemd[1]: sshd@5-139.178.70.105:22-139.178.68.195:60690.service: Deactivated successfully. Jul 7 00:01:50.723587 systemd-logind[1521]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:01:50.724508 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:01:50.725870 systemd[1]: Started sshd@6-139.178.70.105:22-139.178.68.195:60706.service - OpenSSH per-connection server daemon (139.178.68.195:60706). Jul 7 00:01:50.726222 systemd-logind[1521]: Removed session 8. Jul 7 00:01:50.751641 sshd[1839]: Accepted publickey for core from 139.178.68.195 port 60706 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k Jul 7 00:01:50.752465 sshd[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:50.755042 systemd-logind[1521]: New session 9 of user core. Jul 7 00:01:50.763218 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:01:50.812021 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:01:50.812454 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:01:51.167250 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 00:01:51.167325 (dockerd)[1858]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:01:51.524227 dockerd[1858]: time="2025-07-07T00:01:51.523963576Z" level=info msg="Starting up" Jul 7 00:01:51.669714 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4156823271-merged.mount: Deactivated successfully. Jul 7 00:01:51.688874 dockerd[1858]: time="2025-07-07T00:01:51.688847503Z" level=info msg="Loading containers: start." Jul 7 00:01:51.799249 kernel: Initializing XFRM netlink socket Jul 7 00:01:51.871982 systemd-networkd[1462]: docker0: Link UP Jul 7 00:01:51.891912 dockerd[1858]: time="2025-07-07T00:01:51.891896624Z" level=info msg="Loading containers: done." Jul 7 00:01:51.905888 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2307874466-merged.mount: Deactivated successfully. 
Jul 7 00:01:51.921737 dockerd[1858]: time="2025-07-07T00:01:51.921711122Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:01:51.921984 dockerd[1858]: time="2025-07-07T00:01:51.921869700Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 00:01:51.922037 dockerd[1858]: time="2025-07-07T00:01:51.921973890Z" level=info msg="Daemon has completed initialization" Jul 7 00:01:51.956985 dockerd[1858]: time="2025-07-07T00:01:51.956868625Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:01:51.957337 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 00:01:52.677628 containerd[1551]: time="2025-07-07T00:01:52.677595410Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 7 00:01:53.224979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1128543262.mount: Deactivated successfully. Jul 7 00:01:54.154548 containerd[1551]: time="2025-07-07T00:01:54.154520234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:54.155319 containerd[1551]: time="2025-07-07T00:01:54.155298768Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 7 00:01:54.155767 containerd[1551]: time="2025-07-07T00:01:54.155751038Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:54.157267 containerd[1551]: time="2025-07-07T00:01:54.157093790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:54.158042 containerd[1551]: time="2025-07-07T00:01:54.157697554Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.480075194s" Jul 7 00:01:54.158042 containerd[1551]: time="2025-07-07T00:01:54.157720621Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 7 00:01:54.158113 containerd[1551]: time="2025-07-07T00:01:54.158090945Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 7 00:01:55.100460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 7 00:01:55.106226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:55.171526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
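dockerd reports "API listen on /run/docker.sock" above, so the daemon can be probed with a plain HTTP request over that unix socket; GET /_ping is the Docker Engine API health endpoint. A minimal standard-library sketch, assuming the socket path from the log and read access to it:

    import socket

    DOCKER_SOCK = "/run/docker.sock"   # "API listen on /run/docker.sock" above

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(DOCKER_SOCK)
        # HTTP/1.0 so the daemon closes the connection after responding.
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        reply = b""
        while chunk := s.recv(4096):
            reply += chunk

    print(reply.decode(errors="replace").splitlines()[0])   # expected: an HTTP 200 status line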
Jul 7 00:01:55.173921 (kubelet)[2066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:01:55.195797 kubelet[2066]: E0707 00:01:55.195774 2066 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:01:55.197414 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:01:55.197504 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:01:55.515455 containerd[1551]: time="2025-07-07T00:01:55.515374873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:55.525047 containerd[1551]: time="2025-07-07T00:01:55.525000051Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 7 00:01:55.535125 containerd[1551]: time="2025-07-07T00:01:55.535084964Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:55.545484 containerd[1551]: time="2025-07-07T00:01:55.545452244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:55.546323 containerd[1551]: time="2025-07-07T00:01:55.546240521Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.388121823s" Jul 7 00:01:55.546323 containerd[1551]: time="2025-07-07T00:01:55.546264181Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 7 00:01:55.546706 containerd[1551]: time="2025-07-07T00:01:55.546673772Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 7 00:01:57.195310 containerd[1551]: time="2025-07-07T00:01:57.195273930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:57.200726 containerd[1551]: time="2025-07-07T00:01:57.200574971Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 7 00:01:57.208684 containerd[1551]: time="2025-07-07T00:01:57.208656679Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:57.216083 containerd[1551]: time="2025-07-07T00:01:57.216058597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 
00:01:57.216608 containerd[1551]: time="2025-07-07T00:01:57.216589978Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.669816746s" Jul 7 00:01:57.216638 containerd[1551]: time="2025-07-07T00:01:57.216608911Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 7 00:01:57.217195 containerd[1551]: time="2025-07-07T00:01:57.217099144Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 7 00:01:58.121370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4103196096.mount: Deactivated successfully. Jul 7 00:01:58.483178 containerd[1551]: time="2025-07-07T00:01:58.482685672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:58.489167 containerd[1551]: time="2025-07-07T00:01:58.489139430Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 7 00:01:58.497163 containerd[1551]: time="2025-07-07T00:01:58.497148748Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:58.501097 containerd[1551]: time="2025-07-07T00:01:58.501058527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:58.501530 containerd[1551]: time="2025-07-07T00:01:58.501440221Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.2843237s" Jul 7 00:01:58.501530 containerd[1551]: time="2025-07-07T00:01:58.501457394Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 7 00:01:58.501817 containerd[1551]: time="2025-07-07T00:01:58.501757302Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 7 00:01:58.990688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1818846559.mount: Deactivated successfully. 
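Each pull record above pairs a byte count ("bytes read=...") with the wall-clock time reported in the matching "Pulled image" line, so effective throughput can be read straight off the log. A small sketch using the kube-proxy figures from the entries above; the numbers are the ones printed in the log, nothing else is assumed.

    # kube-proxy pull, figures quoted from the log above
    bytes_read = 31_892_746    # "bytes read=31892746"
    duration_s = 1.2843237     # "... in 1.2843237s"

    mb_per_s = bytes_read / duration_s / 1e6
    print(f"kube-proxy pull: {mb_per_s:.1f} MB/s")   # roughly 24.8 MB/s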
Jul 7 00:01:59.833137 containerd[1551]: time="2025-07-07T00:01:59.833074381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:59.839141 containerd[1551]: time="2025-07-07T00:01:59.839054467Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 7 00:01:59.846751 containerd[1551]: time="2025-07-07T00:01:59.846719831Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:59.854201 containerd[1551]: time="2025-07-07T00:01:59.854166300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:59.855347 containerd[1551]: time="2025-07-07T00:01:59.855172219Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.353293876s" Jul 7 00:01:59.855347 containerd[1551]: time="2025-07-07T00:01:59.855217899Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 7 00:01:59.855599 containerd[1551]: time="2025-07-07T00:01:59.855575831Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:02:00.370203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount302646986.mount: Deactivated successfully. 
Jul 7 00:02:00.372379 containerd[1551]: time="2025-07-07T00:02:00.372352537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:00.373241 containerd[1551]: time="2025-07-07T00:02:00.373215695Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 00:02:00.373681 containerd[1551]: time="2025-07-07T00:02:00.373662268Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:00.375001 containerd[1551]: time="2025-07-07T00:02:00.374971157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:00.375737 containerd[1551]: time="2025-07-07T00:02:00.375528739Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 519.933839ms" Jul 7 00:02:00.375737 containerd[1551]: time="2025-07-07T00:02:00.375549394Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 00:02:00.376360 containerd[1551]: time="2025-07-07T00:02:00.376196774Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 7 00:02:00.828098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1206367641.mount: Deactivated successfully. Jul 7 00:02:04.299316 containerd[1551]: time="2025-07-07T00:02:04.299277517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:04.299951 containerd[1551]: time="2025-07-07T00:02:04.299910171Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 7 00:02:04.299951 containerd[1551]: time="2025-07-07T00:02:04.299924160Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:04.301688 containerd[1551]: time="2025-07-07T00:02:04.301657326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:04.302403 containerd[1551]: time="2025-07-07T00:02:04.302385251Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.926170479s" Jul 7 00:02:04.302435 containerd[1551]: time="2025-07-07T00:02:04.302404514Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 7 00:02:04.789874 update_engine[1523]: I20250707 00:02:04.789508 1523 update_attempter.cc:509] Updating boot flags... 
Jul 7 00:02:04.817136 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2227) Jul 7 00:02:04.874151 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2229) Jul 7 00:02:05.350515 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 7 00:02:05.358941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:02:05.755860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:02:05.758644 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:02:05.802712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:02:05.833596 kubelet[2245]: E0707 00:02:05.801260 2245 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:02:05.802803 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:02:06.581095 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:02:06.590412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:02:06.608426 systemd[1]: Reloading requested from client PID 2259 ('systemctl') (unit session-9.scope)... Jul 7 00:02:06.608511 systemd[1]: Reloading... Jul 7 00:02:06.667160 zram_generator::config[2297]: No configuration found. Jul 7 00:02:06.732828 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 7 00:02:06.747733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:02:06.791752 systemd[1]: Reloading finished in 182 ms. Jul 7 00:02:06.832986 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:02:06.833041 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:02:06.833210 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:02:06.838387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:02:07.103629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:02:07.106966 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:02:07.163912 kubelet[2364]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:02:07.163912 kubelet[2364]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:02:07.163912 kubelet[2364]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:02:07.174064 kubelet[2364]: I0707 00:02:07.174038 2364 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:02:07.599701 kubelet[2364]: I0707 00:02:07.599618 2364 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 00:02:07.599701 kubelet[2364]: I0707 00:02:07.599643 2364 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:02:07.600191 kubelet[2364]: I0707 00:02:07.600174 2364 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 00:02:07.771709 kubelet[2364]: I0707 00:02:07.771609 2364 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:02:07.774841 kubelet[2364]: E0707 00:02:07.774795 2364 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 7 00:02:07.791063 kubelet[2364]: E0707 00:02:07.791040 2364 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 00:02:07.791063 kubelet[2364]: I0707 00:02:07.791064 2364 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 00:02:07.795983 kubelet[2364]: I0707 00:02:07.795921 2364 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:02:07.799439 kubelet[2364]: I0707 00:02:07.799419 2364 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:02:07.801918 kubelet[2364]: I0707 00:02:07.799439 2364 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:02:07.807388 kubelet[2364]: I0707 00:02:07.807374 2364 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:02:07.807416 kubelet[2364]: I0707 00:02:07.807390 2364 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 00:02:07.807485 kubelet[2364]: I0707 00:02:07.807473 2364 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:02:07.809220 kubelet[2364]: I0707 00:02:07.809209 2364 kubelet.go:480] "Attempting to sync node with API server" Jul 7 00:02:07.809220 kubelet[2364]: I0707 00:02:07.809222 2364 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:02:07.809729 kubelet[2364]: I0707 00:02:07.809717 2364 kubelet.go:386] "Adding apiserver pod source" Jul 7 00:02:07.809757 kubelet[2364]: I0707 00:02:07.809732 2364 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:02:07.812976 kubelet[2364]: E0707 00:02:07.812958 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 00:02:07.814626 kubelet[2364]: E0707 00:02:07.814459 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 
00:02:07.814728 kubelet[2364]: I0707 00:02:07.814719 2364 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 00:02:07.815048 kubelet[2364]: I0707 00:02:07.815039 2364 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 00:02:07.816303 kubelet[2364]: W0707 00:02:07.815651 2364 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 00:02:07.818780 kubelet[2364]: I0707 00:02:07.818769 2364 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:02:07.818856 kubelet[2364]: I0707 00:02:07.818849 2364 server.go:1289] "Started kubelet" Jul 7 00:02:07.829107 kubelet[2364]: I0707 00:02:07.829095 2364 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:02:07.829566 kubelet[2364]: E0707 00:02:07.823724 2364 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.105:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcf22a7a48a2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 00:02:07.81882833 +0000 UTC m=+0.709863735,LastTimestamp:2025-07-07 00:02:07.81882833 +0000 UTC m=+0.709863735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 00:02:07.831818 kubelet[2364]: I0707 00:02:07.831787 2364 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:02:07.834827 kubelet[2364]: I0707 00:02:07.834448 2364 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:02:07.834827 kubelet[2364]: E0707 00:02:07.834702 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:02:07.838620 kubelet[2364]: I0707 00:02:07.838608 2364 server.go:317] "Adding debug handlers to kubelet server" Jul 7 00:02:07.838921 kubelet[2364]: I0707 00:02:07.838887 2364 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:02:07.839092 kubelet[2364]: I0707 00:02:07.839073 2364 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:02:07.839403 kubelet[2364]: I0707 00:02:07.839395 2364 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:02:07.839536 kubelet[2364]: I0707 00:02:07.839529 2364 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:02:07.844778 kubelet[2364]: I0707 00:02:07.844756 2364 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:02:07.845101 kubelet[2364]: E0707 00:02:07.845071 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 00:02:07.845162 kubelet[2364]: E0707 00:02:07.845152 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="200ms" Jul 7 00:02:07.847105 kubelet[2364]: I0707 00:02:07.846837 2364 factory.go:223] Registration of the containerd container factory successfully Jul 7 00:02:07.847105 kubelet[2364]: I0707 00:02:07.846847 2364 factory.go:223] Registration of the systemd container factory successfully Jul 7 00:02:07.847859 kubelet[2364]: I0707 00:02:07.847840 2364 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:02:07.849445 kubelet[2364]: I0707 00:02:07.849422 2364 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 00:02:07.850905 kubelet[2364]: E0707 00:02:07.848058 2364 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:02:07.853138 kubelet[2364]: I0707 00:02:07.853010 2364 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 00:02:07.853138 kubelet[2364]: I0707 00:02:07.853028 2364 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 00:02:07.853138 kubelet[2364]: I0707 00:02:07.853048 2364 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 00:02:07.853138 kubelet[2364]: I0707 00:02:07.853055 2364 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 00:02:07.853138 kubelet[2364]: E0707 00:02:07.853078 2364 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:02:07.856491 kubelet[2364]: E0707 00:02:07.856472 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 00:02:07.868763 kubelet[2364]: I0707 00:02:07.868714 2364 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:02:07.868763 kubelet[2364]: I0707 00:02:07.868744 2364 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:02:07.868763 kubelet[2364]: I0707 00:02:07.868761 2364 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:02:07.872011 kubelet[2364]: I0707 00:02:07.871997 2364 policy_none.go:49] "None policy: Start" Jul 7 00:02:07.872064 kubelet[2364]: I0707 00:02:07.872016 2364 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:02:07.872064 kubelet[2364]: I0707 00:02:07.872028 2364 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:02:07.876169 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 00:02:07.882900 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 7 00:02:07.885101 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 00:02:07.900065 kubelet[2364]: E0707 00:02:07.899792 2364 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 00:02:07.900065 kubelet[2364]: I0707 00:02:07.899927 2364 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:02:07.900065 kubelet[2364]: I0707 00:02:07.899945 2364 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:02:07.900195 kubelet[2364]: I0707 00:02:07.900103 2364 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:02:07.900703 kubelet[2364]: E0707 00:02:07.900695 2364 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 00:02:07.901267 kubelet[2364]: E0707 00:02:07.901258 2364 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 00:02:07.972133 systemd[1]: Created slice kubepods-burstable-pod32d049bf13cbcc434a71de935f9c2839.slice - libcontainer container kubepods-burstable-pod32d049bf13cbcc434a71de935f9c2839.slice. Jul 7 00:02:07.975553 kubelet[2364]: E0707 00:02:07.975537 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:02:07.987345 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 7 00:02:07.988529 kubelet[2364]: E0707 00:02:07.988514 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:02:08.000654 kubelet[2364]: I0707 00:02:08.000429 2364 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:02:08.000654 kubelet[2364]: E0707 00:02:08.000612 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 7 00:02:08.007605 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
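Every API call the kubelet makes above fails with "dial tcp 139.178.70.105:6443: connect: connection refused", which is expected at this stage: the kube-apiserver static pod whose sandbox and volumes are set up in the entries that follow is not running yet. A minimal sketch of the same reachability check, assuming only the host and port quoted in those errors:

    import socket

    HOST, PORT = "139.178.70.105", 6443   # endpoint quoted in the kubelet errors above

    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            print(f"{HOST}:{PORT} is accepting connections")
    except OSError as exc:
        # Mirrors the "connect: connection refused" lines in this log.
        print(f"{HOST}:{PORT} unreachable: {exc}")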
Jul 7 00:02:08.008785 kubelet[2364]: E0707 00:02:08.008770 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:02:08.041033 kubelet[2364]: I0707 00:02:08.041011 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32d049bf13cbcc434a71de935f9c2839-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"32d049bf13cbcc434a71de935f9c2839\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:02:08.041033 kubelet[2364]: I0707 00:02:08.041033 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32d049bf13cbcc434a71de935f9c2839-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"32d049bf13cbcc434a71de935f9c2839\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:02:08.041221 kubelet[2364]: I0707 00:02:08.041045 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32d049bf13cbcc434a71de935f9c2839-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"32d049bf13cbcc434a71de935f9c2839\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:02:08.041221 kubelet[2364]: I0707 00:02:08.041060 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:02:08.041221 kubelet[2364]: I0707 00:02:08.041071 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:02:08.041221 kubelet[2364]: I0707 00:02:08.041080 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:02:08.041221 kubelet[2364]: I0707 00:02:08.041088 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:02:08.041306 kubelet[2364]: I0707 00:02:08.041099 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:02:08.041306 kubelet[2364]: I0707 00:02:08.041109 2364 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 7 00:02:08.046315 kubelet[2364]: E0707 00:02:08.046292 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="400ms" Jul 7 00:02:08.202600 kubelet[2364]: I0707 00:02:08.202450 2364 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:02:08.203357 kubelet[2364]: E0707 00:02:08.202712 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 7 00:02:08.279765 containerd[1551]: time="2025-07-07T00:02:08.279727778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:32d049bf13cbcc434a71de935f9c2839,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:08.296588 containerd[1551]: time="2025-07-07T00:02:08.296525757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:08.310138 containerd[1551]: time="2025-07-07T00:02:08.309953662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:08.446685 kubelet[2364]: E0707 00:02:08.446651 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="800ms" Jul 7 00:02:08.604496 kubelet[2364]: I0707 00:02:08.604254 2364 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:02:08.604574 kubelet[2364]: E0707 00:02:08.604528 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 7 00:02:08.666389 kubelet[2364]: E0707 00:02:08.666360 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 00:02:08.771423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2795124075.mount: Deactivated successfully. 
Jul 7 00:02:08.773941 containerd[1551]: time="2025-07-07T00:02:08.773480596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:02:08.774382 containerd[1551]: time="2025-07-07T00:02:08.774313805Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 7 00:02:08.774917 containerd[1551]: time="2025-07-07T00:02:08.774902940Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:02:08.775544 containerd[1551]: time="2025-07-07T00:02:08.775524772Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 00:02:08.775807 containerd[1551]: time="2025-07-07T00:02:08.775794206Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:02:08.776038 containerd[1551]: time="2025-07-07T00:02:08.776021100Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 00:02:08.776366 containerd[1551]: time="2025-07-07T00:02:08.776354506Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:02:08.778039 containerd[1551]: time="2025-07-07T00:02:08.778022023Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 481.425796ms" Jul 7 00:02:08.778798 containerd[1551]: time="2025-07-07T00:02:08.778786420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:02:08.779606 containerd[1551]: time="2025-07-07T00:02:08.779593040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 499.799078ms" Jul 7 00:02:08.781100 containerd[1551]: time="2025-07-07T00:02:08.781087460Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 471.097491ms" Jul 7 00:02:08.809598 kubelet[2364]: E0707 00:02:08.809581 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 00:02:08.901019 containerd[1551]: time="2025-07-07T00:02:08.900614480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:08.901019 containerd[1551]: time="2025-07-07T00:02:08.900656326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:08.901356 containerd[1551]: time="2025-07-07T00:02:08.900667181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:08.901356 containerd[1551]: time="2025-07-07T00:02:08.900784450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:08.901720 containerd[1551]: time="2025-07-07T00:02:08.901646209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:08.901720 containerd[1551]: time="2025-07-07T00:02:08.901676609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:08.901720 containerd[1551]: time="2025-07-07T00:02:08.901688100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:08.903608 containerd[1551]: time="2025-07-07T00:02:08.903191833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:08.905674 containerd[1551]: time="2025-07-07T00:02:08.905620546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:08.905732 containerd[1551]: time="2025-07-07T00:02:08.905654345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:08.905732 containerd[1551]: time="2025-07-07T00:02:08.905673126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:08.905865 containerd[1551]: time="2025-07-07T00:02:08.905720881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:08.919625 systemd[1]: Started cri-containerd-2a5c4748bc53f67d5a2ac44a2eee9d7f159c73f84f2aa335e22bd8ca99e2e88e.scope - libcontainer container 2a5c4748bc53f67d5a2ac44a2eee9d7f159c73f84f2aa335e22bd8ca99e2e88e. Jul 7 00:02:08.922849 systemd[1]: Started cri-containerd-71ab456e007065c6eb7a54ca558cfa72a87a79d89922e48f4f8e2645fecb3637.scope - libcontainer container 71ab456e007065c6eb7a54ca558cfa72a87a79d89922e48f4f8e2645fecb3637. Jul 7 00:02:08.925789 systemd[1]: Started cri-containerd-e14938fd92d57466482ad67bd15e459deae3940a307f43d93048a017c01502d8.scope - libcontainer container e14938fd92d57466482ad67bd15e459deae3940a307f43d93048a017c01502d8. 
Jul 7 00:02:08.962022 containerd[1551]: time="2025-07-07T00:02:08.961919222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"71ab456e007065c6eb7a54ca558cfa72a87a79d89922e48f4f8e2645fecb3637\"" Jul 7 00:02:08.969934 containerd[1551]: time="2025-07-07T00:02:08.969803749Z" level=info msg="CreateContainer within sandbox \"71ab456e007065c6eb7a54ca558cfa72a87a79d89922e48f4f8e2645fecb3637\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:02:08.973673 containerd[1551]: time="2025-07-07T00:02:08.973505174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:32d049bf13cbcc434a71de935f9c2839,Namespace:kube-system,Attempt:0,} returns sandbox id \"e14938fd92d57466482ad67bd15e459deae3940a307f43d93048a017c01502d8\"" Jul 7 00:02:08.978085 containerd[1551]: time="2025-07-07T00:02:08.978064757Z" level=info msg="CreateContainer within sandbox \"e14938fd92d57466482ad67bd15e459deae3940a307f43d93048a017c01502d8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:02:08.978692 containerd[1551]: time="2025-07-07T00:02:08.978538261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a5c4748bc53f67d5a2ac44a2eee9d7f159c73f84f2aa335e22bd8ca99e2e88e\"" Jul 7 00:02:08.981833 containerd[1551]: time="2025-07-07T00:02:08.981808462Z" level=info msg="CreateContainer within sandbox \"2a5c4748bc53f67d5a2ac44a2eee9d7f159c73f84f2aa335e22bd8ca99e2e88e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:02:08.990135 containerd[1551]: time="2025-07-07T00:02:08.990097123Z" level=info msg="CreateContainer within sandbox \"71ab456e007065c6eb7a54ca558cfa72a87a79d89922e48f4f8e2645fecb3637\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e77dee1e02ee76de0e38f91f97383eb5685902d1b6880a1f87665c0be26b1308\"" Jul 7 00:02:08.990601 containerd[1551]: time="2025-07-07T00:02:08.990521399Z" level=info msg="StartContainer for \"e77dee1e02ee76de0e38f91f97383eb5685902d1b6880a1f87665c0be26b1308\"" Jul 7 00:02:08.991137 containerd[1551]: time="2025-07-07T00:02:08.991114511Z" level=info msg="CreateContainer within sandbox \"e14938fd92d57466482ad67bd15e459deae3940a307f43d93048a017c01502d8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"36237e78e8728856c9942bf37ef9c257a1ad6daa6b01c868f93f9fd824705aef\"" Jul 7 00:02:08.991487 containerd[1551]: time="2025-07-07T00:02:08.991471972Z" level=info msg="StartContainer for \"36237e78e8728856c9942bf37ef9c257a1ad6daa6b01c868f93f9fd824705aef\"" Jul 7 00:02:08.994412 containerd[1551]: time="2025-07-07T00:02:08.994369543Z" level=info msg="CreateContainer within sandbox \"2a5c4748bc53f67d5a2ac44a2eee9d7f159c73f84f2aa335e22bd8ca99e2e88e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"441e6ca419e370b5659a9e8830216a60836a4a7b6b8c6f08395e9d76b79ed5ad\"" Jul 7 00:02:08.994573 containerd[1551]: time="2025-07-07T00:02:08.994563249Z" level=info msg="StartContainer for \"441e6ca419e370b5659a9e8830216a60836a4a7b6b8c6f08395e9d76b79ed5ad\"" Jul 7 00:02:09.012271 systemd[1]: Started cri-containerd-e77dee1e02ee76de0e38f91f97383eb5685902d1b6880a1f87665c0be26b1308.scope - libcontainer container 
e77dee1e02ee76de0e38f91f97383eb5685902d1b6880a1f87665c0be26b1308. Jul 7 00:02:09.013095 kubelet[2364]: E0707 00:02:09.013065 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 00:02:09.015002 systemd[1]: Started cri-containerd-36237e78e8728856c9942bf37ef9c257a1ad6daa6b01c868f93f9fd824705aef.scope - libcontainer container 36237e78e8728856c9942bf37ef9c257a1ad6daa6b01c868f93f9fd824705aef. Jul 7 00:02:09.028217 systemd[1]: Started cri-containerd-441e6ca419e370b5659a9e8830216a60836a4a7b6b8c6f08395e9d76b79ed5ad.scope - libcontainer container 441e6ca419e370b5659a9e8830216a60836a4a7b6b8c6f08395e9d76b79ed5ad. Jul 7 00:02:09.063900 containerd[1551]: time="2025-07-07T00:02:09.063835918Z" level=info msg="StartContainer for \"36237e78e8728856c9942bf37ef9c257a1ad6daa6b01c868f93f9fd824705aef\" returns successfully" Jul 7 00:02:09.065576 containerd[1551]: time="2025-07-07T00:02:09.065559255Z" level=info msg="StartContainer for \"e77dee1e02ee76de0e38f91f97383eb5685902d1b6880a1f87665c0be26b1308\" returns successfully" Jul 7 00:02:09.072226 containerd[1551]: time="2025-07-07T00:02:09.072208568Z" level=info msg="StartContainer for \"441e6ca419e370b5659a9e8830216a60836a4a7b6b8c6f08395e9d76b79ed5ad\" returns successfully" Jul 7 00:02:09.129927 kubelet[2364]: E0707 00:02:09.129901 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 00:02:09.248193 kubelet[2364]: E0707 00:02:09.248108 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="1.6s" Jul 7 00:02:09.405706 kubelet[2364]: I0707 00:02:09.405686 2364 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:02:09.405884 kubelet[2364]: E0707 00:02:09.405870 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 7 00:02:09.741766 kubelet[2364]: E0707 00:02:09.741677 2364 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.105:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcf22a7a48a2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 00:02:07.81882833 +0000 UTC m=+0.709863735,LastTimestamp:2025-07-07 00:02:07.81882833 +0000 UTC m=+0.709863735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 00:02:09.868334 
kubelet[2364]: E0707 00:02:09.868202 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:02:09.870231 kubelet[2364]: E0707 00:02:09.870214 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:02:09.871085 kubelet[2364]: E0707 00:02:09.871043 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:02:09.962406 kubelet[2364]: E0707 00:02:09.962375 2364 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 7 00:02:10.339399 kubelet[2364]: E0707 00:02:10.339376 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 00:02:10.873602 kubelet[2364]: E0707 00:02:10.873581 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:02:10.873793 kubelet[2364]: E0707 00:02:10.873784 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:02:11.007262 kubelet[2364]: I0707 00:02:11.007233 2364 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:02:11.565286 kubelet[2364]: E0707 00:02:11.565262 2364 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 00:02:11.674734 kubelet[2364]: I0707 00:02:11.674708 2364 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 00:02:11.739866 kubelet[2364]: I0707 00:02:11.739836 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 00:02:11.759972 kubelet[2364]: E0707 00:02:11.759951 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 7 00:02:11.759972 kubelet[2364]: I0707 00:02:11.759969 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:02:11.762146 kubelet[2364]: E0707 00:02:11.762010 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 7 00:02:11.762146 kubelet[2364]: I0707 00:02:11.762036 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:02:11.763678 kubelet[2364]: E0707 00:02:11.763667 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:02:11.818001 kubelet[2364]: I0707 00:02:11.817802 2364 apiserver.go:52] "Watching apiserver" Jul 7 00:02:11.839885 kubelet[2364]: I0707 00:02:11.839857 2364 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:02:11.873212 kubelet[2364]: I0707 00:02:11.873180 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:02:11.873512 kubelet[2364]: I0707 00:02:11.873503 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 00:02:11.874583 kubelet[2364]: E0707 00:02:11.874558 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 7 00:02:11.875335 kubelet[2364]: E0707 00:02:11.875315 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 7 00:02:13.775508 systemd[1]: Reloading requested from client PID 2646 ('systemctl') (unit session-9.scope)... Jul 7 00:02:13.775524 systemd[1]: Reloading... Jul 7 00:02:13.819156 zram_generator::config[2683]: No configuration found. Jul 7 00:02:13.887095 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 7 00:02:13.902642 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:02:13.955990 systemd[1]: Reloading finished in 180 ms. Jul 7 00:02:13.982927 kubelet[2364]: I0707 00:02:13.982707 2364 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:02:13.982735 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:02:13.996364 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:02:13.996496 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:02:14.001312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:02:14.395312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:02:14.404387 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:02:14.563718 kubelet[2751]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:02:14.563718 kubelet[2751]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:02:14.563718 kubelet[2751]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:02:14.564055 kubelet[2751]: I0707 00:02:14.564002 2751 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:02:14.570473 kubelet[2751]: I0707 00:02:14.570451 2751 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 00:02:14.570473 kubelet[2751]: I0707 00:02:14.570468 2751 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:02:14.570643 kubelet[2751]: I0707 00:02:14.570632 2751 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 00:02:14.571497 kubelet[2751]: I0707 00:02:14.571484 2751 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 7 00:02:14.573450 kubelet[2751]: I0707 00:02:14.573296 2751 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:02:14.576918 kubelet[2751]: E0707 00:02:14.576901 2751 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 00:02:14.578293 kubelet[2751]: I0707 00:02:14.578211 2751 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 00:02:14.581700 kubelet[2751]: I0707 00:02:14.580743 2751 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 00:02:14.581700 kubelet[2751]: I0707 00:02:14.580861 2751 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:02:14.581700 kubelet[2751]: I0707 00:02:14.580874 2751 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:02:14.581700 
kubelet[2751]: I0707 00:02:14.581009 2751 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:02:14.581898 kubelet[2751]: I0707 00:02:14.581016 2751 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 00:02:14.581898 kubelet[2751]: I0707 00:02:14.581045 2751 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:02:14.581898 kubelet[2751]: I0707 00:02:14.581257 2751 kubelet.go:480] "Attempting to sync node with API server" Jul 7 00:02:14.581898 kubelet[2751]: I0707 00:02:14.581266 2751 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:02:14.581898 kubelet[2751]: I0707 00:02:14.581281 2751 kubelet.go:386] "Adding apiserver pod source" Jul 7 00:02:14.581898 kubelet[2751]: I0707 00:02:14.581290 2751 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:02:14.584804 kubelet[2751]: I0707 00:02:14.584786 2751 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 00:02:14.585128 kubelet[2751]: I0707 00:02:14.585105 2751 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 00:02:14.589169 kubelet[2751]: I0707 00:02:14.589137 2751 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:02:14.589303 kubelet[2751]: I0707 00:02:14.589177 2751 server.go:1289] "Started kubelet" Jul 7 00:02:14.589435 kubelet[2751]: I0707 00:02:14.589405 2751 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:02:14.591345 kubelet[2751]: I0707 00:02:14.591145 2751 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:02:14.591383 kubelet[2751]: I0707 00:02:14.591374 2751 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:02:14.592948 kubelet[2751]: I0707 00:02:14.592696 2751 server.go:317] "Adding debug handlers to kubelet server" Jul 7 00:02:14.596362 kubelet[2751]: I0707 00:02:14.596349 2751 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:02:14.606902 kubelet[2751]: I0707 00:02:14.606887 2751 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:02:14.610473 kubelet[2751]: I0707 00:02:14.609868 2751 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:02:14.610473 kubelet[2751]: I0707 00:02:14.609947 2751 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:02:14.610473 kubelet[2751]: I0707 00:02:14.610006 2751 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:02:14.611712 kubelet[2751]: I0707 00:02:14.611563 2751 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 00:02:14.612369 kubelet[2751]: I0707 00:02:14.612354 2751 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 00:02:14.612401 kubelet[2751]: I0707 00:02:14.612372 2751 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 00:02:14.612401 kubelet[2751]: I0707 00:02:14.612385 2751 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:02:14.612401 kubelet[2751]: I0707 00:02:14.612389 2751 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 00:02:14.612460 kubelet[2751]: E0707 00:02:14.612411 2751 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:02:14.616811 kubelet[2751]: E0707 00:02:14.615520 2751 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:02:14.616811 kubelet[2751]: I0707 00:02:14.615690 2751 factory.go:223] Registration of the systemd container factory successfully Jul 7 00:02:14.616811 kubelet[2751]: I0707 00:02:14.615745 2751 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:02:14.619176 kubelet[2751]: I0707 00:02:14.619157 2751 factory.go:223] Registration of the containerd container factory successfully Jul 7 00:02:14.644522 kubelet[2751]: I0707 00:02:14.644503 2751 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:02:14.644522 kubelet[2751]: I0707 00:02:14.644515 2751 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:02:14.644522 kubelet[2751]: I0707 00:02:14.644526 2751 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:02:14.644711 kubelet[2751]: I0707 00:02:14.644615 2751 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:02:14.644711 kubelet[2751]: I0707 00:02:14.644623 2751 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:02:14.644711 kubelet[2751]: I0707 00:02:14.644634 2751 policy_none.go:49] "None policy: Start" Jul 7 00:02:14.644711 kubelet[2751]: I0707 00:02:14.644639 2751 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:02:14.644711 kubelet[2751]: I0707 00:02:14.644645 2751 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:02:14.644711 kubelet[2751]: I0707 00:02:14.644698 2751 state_mem.go:75] "Updated machine memory state" Jul 7 00:02:14.646786 kubelet[2751]: E0707 00:02:14.646743 2751 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 00:02:14.648241 kubelet[2751]: I0707 00:02:14.648231 2751 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:02:14.648675 kubelet[2751]: I0707 00:02:14.648240 2751 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:02:14.648675 kubelet[2751]: I0707 00:02:14.648401 2751 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:02:14.649823 kubelet[2751]: E0707 00:02:14.649812 2751 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 00:02:14.712997 kubelet[2751]: I0707 00:02:14.712979 2751 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 00:02:14.713109 kubelet[2751]: I0707 00:02:14.713097 2751 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:02:14.713201 kubelet[2751]: I0707 00:02:14.713056 2751 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:02:14.751755 kubelet[2751]: I0707 00:02:14.751734 2751 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:02:14.779231 kubelet[2751]: I0707 00:02:14.779192 2751 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 7 00:02:14.779468 kubelet[2751]: I0707 00:02:14.779342 2751 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 00:02:14.911437 kubelet[2751]: I0707 00:02:14.911365 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:02:14.911683 kubelet[2751]: I0707 00:02:14.911532 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:02:14.911683 kubelet[2751]: I0707 00:02:14.911559 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:02:14.911683 kubelet[2751]: I0707 00:02:14.911578 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:02:14.911683 kubelet[2751]: I0707 00:02:14.911594 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 7 00:02:14.911683 kubelet[2751]: I0707 00:02:14.911609 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32d049bf13cbcc434a71de935f9c2839-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"32d049bf13cbcc434a71de935f9c2839\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:02:14.911831 kubelet[2751]: I0707 00:02:14.911624 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/32d049bf13cbcc434a71de935f9c2839-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"32d049bf13cbcc434a71de935f9c2839\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:02:14.911831 kubelet[2751]: I0707 00:02:14.911638 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32d049bf13cbcc434a71de935f9c2839-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"32d049bf13cbcc434a71de935f9c2839\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:02:14.911831 kubelet[2751]: I0707 00:02:14.911670 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:02:15.583905 kubelet[2751]: I0707 00:02:15.583288 2751 apiserver.go:52] "Watching apiserver" Jul 7 00:02:15.610741 kubelet[2751]: I0707 00:02:15.610718 2751 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:02:15.633612 kubelet[2751]: I0707 00:02:15.633157 2751 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 00:02:15.633612 kubelet[2751]: I0707 00:02:15.633314 2751 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:02:15.644094 kubelet[2751]: E0707 00:02:15.644074 2751 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 00:02:15.644610 kubelet[2751]: E0707 00:02:15.644596 2751 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 7 00:02:15.650786 kubelet[2751]: I0707 00:02:15.650749 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6507276050000002 podStartE2EDuration="1.650727605s" podCreationTimestamp="2025-07-07 00:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:15.638673247 +0000 UTC m=+1.212101265" watchObservedRunningTime="2025-07-07 00:02:15.650727605 +0000 UTC m=+1.224155618" Jul 7 00:02:15.656620 kubelet[2751]: I0707 00:02:15.656508 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.656497246 podStartE2EDuration="1.656497246s" podCreationTimestamp="2025-07-07 00:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:15.65086953 +0000 UTC m=+1.224297548" watchObservedRunningTime="2025-07-07 00:02:15.656497246 +0000 UTC m=+1.229925258" Jul 7 00:02:15.664697 kubelet[2751]: I0707 00:02:15.664457 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6644459870000001 podStartE2EDuration="1.664445987s" podCreationTimestamp="2025-07-07 00:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-07-07 00:02:15.657040244 +0000 UTC m=+1.230468254" watchObservedRunningTime="2025-07-07 00:02:15.664445987 +0000 UTC m=+1.237874004" Jul 7 00:02:19.072611 kubelet[2751]: I0707 00:02:19.072559 2751 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:02:19.072960 containerd[1551]: time="2025-07-07T00:02:19.072746683Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 00:02:19.073674 kubelet[2751]: I0707 00:02:19.073209 2751 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:02:20.152973 systemd[1]: Created slice kubepods-besteffort-pod8e8f7d9e_2cd3_4574_b8ce_29b9a73550bd.slice - libcontainer container kubepods-besteffort-pod8e8f7d9e_2cd3_4574_b8ce_29b9a73550bd.slice. Jul 7 00:02:20.211494 systemd[1]: Created slice kubepods-besteffort-podec21a151_df99_4cba_9af3_549666e443ac.slice - libcontainer container kubepods-besteffort-podec21a151_df99_4cba_9af3_549666e443ac.slice. Jul 7 00:02:20.245378 kubelet[2751]: I0707 00:02:20.245351 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfnw5\" (UniqueName: \"kubernetes.io/projected/ec21a151-df99-4cba-9af3-549666e443ac-kube-api-access-nfnw5\") pod \"tigera-operator-747864d56d-c8gpn\" (UID: \"ec21a151-df99-4cba-9af3-549666e443ac\") " pod="tigera-operator/tigera-operator-747864d56d-c8gpn" Jul 7 00:02:20.245681 kubelet[2751]: I0707 00:02:20.245397 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8e8f7d9e-2cd3-4574-b8ce-29b9a73550bd-kube-proxy\") pod \"kube-proxy-nk888\" (UID: \"8e8f7d9e-2cd3-4574-b8ce-29b9a73550bd\") " pod="kube-system/kube-proxy-nk888" Jul 7 00:02:20.245681 kubelet[2751]: I0707 00:02:20.245412 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ec21a151-df99-4cba-9af3-549666e443ac-var-lib-calico\") pod \"tigera-operator-747864d56d-c8gpn\" (UID: \"ec21a151-df99-4cba-9af3-549666e443ac\") " pod="tigera-operator/tigera-operator-747864d56d-c8gpn" Jul 7 00:02:20.245681 kubelet[2751]: I0707 00:02:20.245423 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e8f7d9e-2cd3-4574-b8ce-29b9a73550bd-xtables-lock\") pod \"kube-proxy-nk888\" (UID: \"8e8f7d9e-2cd3-4574-b8ce-29b9a73550bd\") " pod="kube-system/kube-proxy-nk888" Jul 7 00:02:20.245681 kubelet[2751]: I0707 00:02:20.245431 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e8f7d9e-2cd3-4574-b8ce-29b9a73550bd-lib-modules\") pod \"kube-proxy-nk888\" (UID: \"8e8f7d9e-2cd3-4574-b8ce-29b9a73550bd\") " pod="kube-system/kube-proxy-nk888" Jul 7 00:02:20.245681 kubelet[2751]: I0707 00:02:20.245440 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxwms\" (UniqueName: \"kubernetes.io/projected/8e8f7d9e-2cd3-4574-b8ce-29b9a73550bd-kube-api-access-mxwms\") pod \"kube-proxy-nk888\" (UID: \"8e8f7d9e-2cd3-4574-b8ce-29b9a73550bd\") " pod="kube-system/kube-proxy-nk888" Jul 7 00:02:20.460054 containerd[1551]: time="2025-07-07T00:02:20.459589094Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nk888,Uid:8e8f7d9e-2cd3-4574-b8ce-29b9a73550bd,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:20.472501 containerd[1551]: time="2025-07-07T00:02:20.472188382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:20.472501 containerd[1551]: time="2025-07-07T00:02:20.472262797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:20.472501 containerd[1551]: time="2025-07-07T00:02:20.472293325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:20.472501 containerd[1551]: time="2025-07-07T00:02:20.472362448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:20.490249 systemd[1]: Started cri-containerd-43896f3f220c588f857931094f9c1e6ea414f0458690384aae91497273361434.scope - libcontainer container 43896f3f220c588f857931094f9c1e6ea414f0458690384aae91497273361434. Jul 7 00:02:20.503300 containerd[1551]: time="2025-07-07T00:02:20.503268812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nk888,Uid:8e8f7d9e-2cd3-4574-b8ce-29b9a73550bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"43896f3f220c588f857931094f9c1e6ea414f0458690384aae91497273361434\"" Jul 7 00:02:20.505896 containerd[1551]: time="2025-07-07T00:02:20.505865087Z" level=info msg="CreateContainer within sandbox \"43896f3f220c588f857931094f9c1e6ea414f0458690384aae91497273361434\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:02:20.512792 containerd[1551]: time="2025-07-07T00:02:20.512770083Z" level=info msg="CreateContainer within sandbox \"43896f3f220c588f857931094f9c1e6ea414f0458690384aae91497273361434\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"599f0ad9dc9a4e172d30a10edf34002c35fbfdfb6efa56589836ccd525c22b6e\"" Jul 7 00:02:20.513134 containerd[1551]: time="2025-07-07T00:02:20.513108913Z" level=info msg="StartContainer for \"599f0ad9dc9a4e172d30a10edf34002c35fbfdfb6efa56589836ccd525c22b6e\"" Jul 7 00:02:20.513732 containerd[1551]: time="2025-07-07T00:02:20.513125334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-c8gpn,Uid:ec21a151-df99-4cba-9af3-549666e443ac,Namespace:tigera-operator,Attempt:0,}" Jul 7 00:02:20.526818 containerd[1551]: time="2025-07-07T00:02:20.526769586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:20.526950 containerd[1551]: time="2025-07-07T00:02:20.526810017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:20.526950 containerd[1551]: time="2025-07-07T00:02:20.526818189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:20.527103 containerd[1551]: time="2025-07-07T00:02:20.526861534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:20.540230 systemd[1]: Started cri-containerd-599f0ad9dc9a4e172d30a10edf34002c35fbfdfb6efa56589836ccd525c22b6e.scope - libcontainer container 599f0ad9dc9a4e172d30a10edf34002c35fbfdfb6efa56589836ccd525c22b6e. Jul 7 00:02:20.543161 systemd[1]: Started cri-containerd-31b8cf4baa3a550477075ae1d8cf276c8ebd5c999a98de777f8e6126e5c416c3.scope - libcontainer container 31b8cf4baa3a550477075ae1d8cf276c8ebd5c999a98de777f8e6126e5c416c3. Jul 7 00:02:20.562946 containerd[1551]: time="2025-07-07T00:02:20.562910850Z" level=info msg="StartContainer for \"599f0ad9dc9a4e172d30a10edf34002c35fbfdfb6efa56589836ccd525c22b6e\" returns successfully" Jul 7 00:02:20.580807 containerd[1551]: time="2025-07-07T00:02:20.580786939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-c8gpn,Uid:ec21a151-df99-4cba-9af3-549666e443ac,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"31b8cf4baa3a550477075ae1d8cf276c8ebd5c999a98de777f8e6126e5c416c3\"" Jul 7 00:02:20.582167 containerd[1551]: time="2025-07-07T00:02:20.582042416Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 00:02:21.356874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2512301774.mount: Deactivated successfully. Jul 7 00:02:21.576554 kubelet[2751]: I0707 00:02:21.576264 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nk888" podStartSLOduration=1.576253784 podStartE2EDuration="1.576253784s" podCreationTimestamp="2025-07-07 00:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:20.648971241 +0000 UTC m=+6.222399259" watchObservedRunningTime="2025-07-07 00:02:21.576253784 +0000 UTC m=+7.149681797" Jul 7 00:02:22.440972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1258997920.mount: Deactivated successfully. 
Jul 7 00:02:22.924209 containerd[1551]: time="2025-07-07T00:02:22.924175392Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:22.924666 containerd[1551]: time="2025-07-07T00:02:22.924615251Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 00:02:22.924933 containerd[1551]: time="2025-07-07T00:02:22.924916513Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:22.926019 containerd[1551]: time="2025-07-07T00:02:22.925991411Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:22.926483 containerd[1551]: time="2025-07-07T00:02:22.926462360Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.344401177s" Jul 7 00:02:22.926515 containerd[1551]: time="2025-07-07T00:02:22.926483220Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 00:02:22.935924 containerd[1551]: time="2025-07-07T00:02:22.935896453Z" level=info msg="CreateContainer within sandbox \"31b8cf4baa3a550477075ae1d8cf276c8ebd5c999a98de777f8e6126e5c416c3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 00:02:22.965104 containerd[1551]: time="2025-07-07T00:02:22.965035255Z" level=info msg="CreateContainer within sandbox \"31b8cf4baa3a550477075ae1d8cf276c8ebd5c999a98de777f8e6126e5c416c3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f7100f8424b4eb653061481503b13b23c4de017e8e4b097df4ca033b402e21d5\"" Jul 7 00:02:22.965533 containerd[1551]: time="2025-07-07T00:02:22.965365541Z" level=info msg="StartContainer for \"f7100f8424b4eb653061481503b13b23c4de017e8e4b097df4ca033b402e21d5\"" Jul 7 00:02:22.988202 systemd[1]: Started cri-containerd-f7100f8424b4eb653061481503b13b23c4de017e8e4b097df4ca033b402e21d5.scope - libcontainer container f7100f8424b4eb653061481503b13b23c4de017e8e4b097df4ca033b402e21d5. 
Jul 7 00:02:23.002179 containerd[1551]: time="2025-07-07T00:02:23.002155176Z" level=info msg="StartContainer for \"f7100f8424b4eb653061481503b13b23c4de017e8e4b097df4ca033b402e21d5\" returns successfully" Jul 7 00:02:23.813236 kubelet[2751]: I0707 00:02:23.813132 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-c8gpn" podStartSLOduration=1.4606682659999999 podStartE2EDuration="3.813114335s" podCreationTimestamp="2025-07-07 00:02:20 +0000 UTC" firstStartedPulling="2025-07-07 00:02:20.581761704 +0000 UTC m=+6.155189713" lastFinishedPulling="2025-07-07 00:02:22.934207772 +0000 UTC m=+8.507635782" observedRunningTime="2025-07-07 00:02:23.650548806 +0000 UTC m=+9.223976817" watchObservedRunningTime="2025-07-07 00:02:23.813114335 +0000 UTC m=+9.386542347" Jul 7 00:02:28.651392 sudo[1842]: pam_unix(sudo:session): session closed for user root Jul 7 00:02:28.669346 sshd[1839]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:28.671767 systemd[1]: sshd@6-139.178.70.105:22-139.178.68.195:60706.service: Deactivated successfully. Jul 7 00:02:28.672996 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:02:28.673516 systemd[1]: session-9.scope: Consumed 3.466s CPU time, 145.3M memory peak, 0B memory swap peak. Jul 7 00:02:28.675209 systemd-logind[1521]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:02:28.676982 systemd-logind[1521]: Removed session 9. Jul 7 00:02:30.611096 systemd[1]: Created slice kubepods-besteffort-podcf0b16da_2c13_4314_8927_a3042e0ca1cc.slice - libcontainer container kubepods-besteffort-podcf0b16da_2c13_4314_8927_a3042e0ca1cc.slice. Jul 7 00:02:30.615298 kubelet[2751]: I0707 00:02:30.615250 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64vsr\" (UniqueName: \"kubernetes.io/projected/cf0b16da-2c13-4314-8927-a3042e0ca1cc-kube-api-access-64vsr\") pod \"calico-typha-547f4d5db4-l4rk7\" (UID: \"cf0b16da-2c13-4314-8927-a3042e0ca1cc\") " pod="calico-system/calico-typha-547f4d5db4-l4rk7" Jul 7 00:02:30.615298 kubelet[2751]: I0707 00:02:30.615275 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf0b16da-2c13-4314-8927-a3042e0ca1cc-tigera-ca-bundle\") pod \"calico-typha-547f4d5db4-l4rk7\" (UID: \"cf0b16da-2c13-4314-8927-a3042e0ca1cc\") " pod="calico-system/calico-typha-547f4d5db4-l4rk7" Jul 7 00:02:30.615298 kubelet[2751]: I0707 00:02:30.615287 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cf0b16da-2c13-4314-8927-a3042e0ca1cc-typha-certs\") pod \"calico-typha-547f4d5db4-l4rk7\" (UID: \"cf0b16da-2c13-4314-8927-a3042e0ca1cc\") " pod="calico-system/calico-typha-547f4d5db4-l4rk7" Jul 7 00:02:30.770989 systemd[1]: Created slice kubepods-besteffort-pod3f18bc13_139b_4a01_9d6e_f03b631fb0bc.slice - libcontainer container kubepods-besteffort-pod3f18bc13_139b_4a01_9d6e_f03b631fb0bc.slice. 
Jul 7 00:02:30.817339 kubelet[2751]: I0707 00:02:30.817302 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3f18bc13-139b-4a01-9d6e-f03b631fb0bc-cni-log-dir\") pod \"calico-node-ssn47\" (UID: \"3f18bc13-139b-4a01-9d6e-f03b631fb0bc\") " pod="calico-system/calico-node-ssn47" Jul 7 00:02:30.817339 kubelet[2751]: I0707 00:02:30.817346 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3f18bc13-139b-4a01-9d6e-f03b631fb0bc-flexvol-driver-host\") pod \"calico-node-ssn47\" (UID: \"3f18bc13-139b-4a01-9d6e-f03b631fb0bc\") " pod="calico-system/calico-node-ssn47" Jul 7 00:02:30.817505 kubelet[2751]: I0707 00:02:30.817361 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f18bc13-139b-4a01-9d6e-f03b631fb0bc-tigera-ca-bundle\") pod \"calico-node-ssn47\" (UID: \"3f18bc13-139b-4a01-9d6e-f03b631fb0bc\") " pod="calico-system/calico-node-ssn47" Jul 7 00:02:30.817505 kubelet[2751]: I0707 00:02:30.817374 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3f18bc13-139b-4a01-9d6e-f03b631fb0bc-var-lib-calico\") pod \"calico-node-ssn47\" (UID: \"3f18bc13-139b-4a01-9d6e-f03b631fb0bc\") " pod="calico-system/calico-node-ssn47" Jul 7 00:02:30.817505 kubelet[2751]: I0707 00:02:30.817384 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3f18bc13-139b-4a01-9d6e-f03b631fb0bc-var-run-calico\") pod \"calico-node-ssn47\" (UID: \"3f18bc13-139b-4a01-9d6e-f03b631fb0bc\") " pod="calico-system/calico-node-ssn47" Jul 7 00:02:30.817505 kubelet[2751]: I0707 00:02:30.817393 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prdjp\" (UniqueName: \"kubernetes.io/projected/3f18bc13-139b-4a01-9d6e-f03b631fb0bc-kube-api-access-prdjp\") pod \"calico-node-ssn47\" (UID: \"3f18bc13-139b-4a01-9d6e-f03b631fb0bc\") " pod="calico-system/calico-node-ssn47" Jul 7 00:02:30.817505 kubelet[2751]: I0707 00:02:30.817410 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3f18bc13-139b-4a01-9d6e-f03b631fb0bc-cni-bin-dir\") pod \"calico-node-ssn47\" (UID: \"3f18bc13-139b-4a01-9d6e-f03b631fb0bc\") " pod="calico-system/calico-node-ssn47" Jul 7 00:02:30.817645 kubelet[2751]: I0707 00:02:30.817431 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3f18bc13-139b-4a01-9d6e-f03b631fb0bc-node-certs\") pod \"calico-node-ssn47\" (UID: \"3f18bc13-139b-4a01-9d6e-f03b631fb0bc\") " pod="calico-system/calico-node-ssn47" Jul 7 00:02:30.817645 kubelet[2751]: I0707 00:02:30.817446 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f18bc13-139b-4a01-9d6e-f03b631fb0bc-xtables-lock\") pod \"calico-node-ssn47\" (UID: \"3f18bc13-139b-4a01-9d6e-f03b631fb0bc\") " pod="calico-system/calico-node-ssn47" Jul 7 00:02:30.817645 kubelet[2751]: I0707 00:02:30.817464 2751 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3f18bc13-139b-4a01-9d6e-f03b631fb0bc-cni-net-dir\") pod \"calico-node-ssn47\" (UID: \"3f18bc13-139b-4a01-9d6e-f03b631fb0bc\") " pod="calico-system/calico-node-ssn47" Jul 7 00:02:30.817645 kubelet[2751]: I0707 00:02:30.817480 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f18bc13-139b-4a01-9d6e-f03b631fb0bc-lib-modules\") pod \"calico-node-ssn47\" (UID: \"3f18bc13-139b-4a01-9d6e-f03b631fb0bc\") " pod="calico-system/calico-node-ssn47" Jul 7 00:02:30.817645 kubelet[2751]: I0707 00:02:30.817500 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3f18bc13-139b-4a01-9d6e-f03b631fb0bc-policysync\") pod \"calico-node-ssn47\" (UID: \"3f18bc13-139b-4a01-9d6e-f03b631fb0bc\") " pod="calico-system/calico-node-ssn47" Jul 7 00:02:30.928565 containerd[1551]: time="2025-07-07T00:02:30.927656502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-547f4d5db4-l4rk7,Uid:cf0b16da-2c13-4314-8927-a3042e0ca1cc,Namespace:calico-system,Attempt:0,}" Jul 7 00:02:30.929310 kubelet[2751]: E0707 00:02:30.928439 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:30.929310 kubelet[2751]: W0707 00:02:30.928460 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:30.929310 kubelet[2751]: E0707 00:02:30.928482 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:30.933432 kubelet[2751]: E0707 00:02:30.933404 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:30.933432 kubelet[2751]: W0707 00:02:30.933425 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:30.933533 kubelet[2751]: E0707 00:02:30.933443 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:30.949488 containerd[1551]: time="2025-07-07T00:02:30.949413841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:30.950013 containerd[1551]: time="2025-07-07T00:02:30.949955005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:30.950054 containerd[1551]: time="2025-07-07T00:02:30.950020287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:30.950180 containerd[1551]: time="2025-07-07T00:02:30.950152062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:30.991321 systemd[1]: Started cri-containerd-47c3fa127dd05b11ab71d350f15daf0ca10b11bff449decfb84f4295ff6f6236.scope - libcontainer container 47c3fa127dd05b11ab71d350f15daf0ca10b11bff449decfb84f4295ff6f6236. Jul 7 00:02:31.030154 containerd[1551]: time="2025-07-07T00:02:31.029984349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-547f4d5db4-l4rk7,Uid:cf0b16da-2c13-4314-8927-a3042e0ca1cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"47c3fa127dd05b11ab71d350f15daf0ca10b11bff449decfb84f4295ff6f6236\"" Jul 7 00:02:31.074719 containerd[1551]: time="2025-07-07T00:02:31.074674940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ssn47,Uid:3f18bc13-139b-4a01-9d6e-f03b631fb0bc,Namespace:calico-system,Attempt:0,}" Jul 7 00:02:31.099646 containerd[1551]: time="2025-07-07T00:02:31.099477535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 00:02:31.119208 kubelet[2751]: E0707 00:02:31.119167 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26jsb" podUID="7ea19b78-cbc5-4bff-999a-89047d422683" Jul 7 00:02:31.151059 containerd[1551]: time="2025-07-07T00:02:31.150864008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:31.151059 containerd[1551]: time="2025-07-07T00:02:31.150997712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:31.151059 containerd[1551]: time="2025-07-07T00:02:31.151019891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:31.151432 containerd[1551]: time="2025-07-07T00:02:31.151402502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:31.170260 systemd[1]: Started cri-containerd-8eda3d1e2069eed68a17c181e93c9256dac6c883da6859bc6a2aae135c82a494.scope - libcontainer container 8eda3d1e2069eed68a17c181e93c9256dac6c883da6859bc6a2aae135c82a494. Jul 7 00:02:31.210542 containerd[1551]: time="2025-07-07T00:02:31.210449902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ssn47,Uid:3f18bc13-139b-4a01-9d6e-f03b631fb0bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"8eda3d1e2069eed68a17c181e93c9256dac6c883da6859bc6a2aae135c82a494\"" Jul 7 00:02:31.212394 kubelet[2751]: E0707 00:02:31.212306 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.212394 kubelet[2751]: W0707 00:02:31.212322 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.212394 kubelet[2751]: E0707 00:02:31.212337 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:31.212705 kubelet[2751]: E0707 00:02:31.212623 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.212705 kubelet[2751]: W0707 00:02:31.212632 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.212705 kubelet[2751]: E0707 00:02:31.212642 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.212953 kubelet[2751]: E0707 00:02:31.212844 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.212953 kubelet[2751]: W0707 00:02:31.212854 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.212953 kubelet[2751]: E0707 00:02:31.212868 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.213811 kubelet[2751]: E0707 00:02:31.213253 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.213811 kubelet[2751]: W0707 00:02:31.213259 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.213811 kubelet[2751]: E0707 00:02:31.213267 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.213811 kubelet[2751]: E0707 00:02:31.213588 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.213811 kubelet[2751]: W0707 00:02:31.213594 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.213811 kubelet[2751]: E0707 00:02:31.213601 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.215308 kubelet[2751]: E0707 00:02:31.215196 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.215308 kubelet[2751]: W0707 00:02:31.215210 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.215308 kubelet[2751]: E0707 00:02:31.215224 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:31.215634 kubelet[2751]: E0707 00:02:31.215487 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.215634 kubelet[2751]: W0707 00:02:31.215494 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.215634 kubelet[2751]: E0707 00:02:31.215502 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.215759 kubelet[2751]: E0707 00:02:31.215752 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.215799 kubelet[2751]: W0707 00:02:31.215759 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.215799 kubelet[2751]: E0707 00:02:31.215765 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.215912 kubelet[2751]: E0707 00:02:31.215906 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.215912 kubelet[2751]: W0707 00:02:31.215911 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.215963 kubelet[2751]: E0707 00:02:31.215918 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.216076 kubelet[2751]: E0707 00:02:31.216045 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.216076 kubelet[2751]: W0707 00:02:31.216053 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.216076 kubelet[2751]: E0707 00:02:31.216061 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.216274 kubelet[2751]: E0707 00:02:31.216259 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.216274 kubelet[2751]: W0707 00:02:31.216271 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.216349 kubelet[2751]: E0707 00:02:31.216282 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:31.216420 kubelet[2751]: E0707 00:02:31.216408 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.216465 kubelet[2751]: W0707 00:02:31.216420 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.216465 kubelet[2751]: E0707 00:02:31.216431 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.216575 kubelet[2751]: E0707 00:02:31.216562 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.216575 kubelet[2751]: W0707 00:02:31.216572 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.216687 kubelet[2751]: E0707 00:02:31.216581 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.217178 kubelet[2751]: E0707 00:02:31.217167 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.217178 kubelet[2751]: W0707 00:02:31.217177 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.217237 kubelet[2751]: E0707 00:02:31.217184 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.217307 kubelet[2751]: E0707 00:02:31.217297 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.217307 kubelet[2751]: W0707 00:02:31.217305 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.217352 kubelet[2751]: E0707 00:02:31.217330 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.217448 kubelet[2751]: E0707 00:02:31.217439 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.217448 kubelet[2751]: W0707 00:02:31.217447 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.217491 kubelet[2751]: E0707 00:02:31.217453 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:31.217568 kubelet[2751]: E0707 00:02:31.217557 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.217568 kubelet[2751]: W0707 00:02:31.217564 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.217568 kubelet[2751]: E0707 00:02:31.217569 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.217689 kubelet[2751]: E0707 00:02:31.217668 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.217689 kubelet[2751]: W0707 00:02:31.217675 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.217689 kubelet[2751]: E0707 00:02:31.217683 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.218289 kubelet[2751]: E0707 00:02:31.217803 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.218289 kubelet[2751]: W0707 00:02:31.217809 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.218289 kubelet[2751]: E0707 00:02:31.217817 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.218289 kubelet[2751]: E0707 00:02:31.218232 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.218289 kubelet[2751]: W0707 00:02:31.218239 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.218289 kubelet[2751]: E0707 00:02:31.218246 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.220646 kubelet[2751]: E0707 00:02:31.220399 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.220646 kubelet[2751]: W0707 00:02:31.220428 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.220646 kubelet[2751]: E0707 00:02:31.220450 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:31.220646 kubelet[2751]: I0707 00:02:31.220475 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7ea19b78-cbc5-4bff-999a-89047d422683-socket-dir\") pod \"csi-node-driver-26jsb\" (UID: \"7ea19b78-cbc5-4bff-999a-89047d422683\") " pod="calico-system/csi-node-driver-26jsb" Jul 7 00:02:31.221140 kubelet[2751]: E0707 00:02:31.221020 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.221140 kubelet[2751]: W0707 00:02:31.221031 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.221140 kubelet[2751]: E0707 00:02:31.221042 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.221140 kubelet[2751]: I0707 00:02:31.221065 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncrkx\" (UniqueName: \"kubernetes.io/projected/7ea19b78-cbc5-4bff-999a-89047d422683-kube-api-access-ncrkx\") pod \"csi-node-driver-26jsb\" (UID: \"7ea19b78-cbc5-4bff-999a-89047d422683\") " pod="calico-system/csi-node-driver-26jsb" Jul 7 00:02:31.221447 kubelet[2751]: E0707 00:02:31.221394 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.221447 kubelet[2751]: W0707 00:02:31.221403 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.221447 kubelet[2751]: E0707 00:02:31.221411 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.221447 kubelet[2751]: I0707 00:02:31.221426 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7ea19b78-cbc5-4bff-999a-89047d422683-registration-dir\") pod \"csi-node-driver-26jsb\" (UID: \"7ea19b78-cbc5-4bff-999a-89047d422683\") " pod="calico-system/csi-node-driver-26jsb" Jul 7 00:02:31.221683 kubelet[2751]: E0707 00:02:31.221636 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.221683 kubelet[2751]: W0707 00:02:31.221649 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.221683 kubelet[2751]: E0707 00:02:31.221661 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:31.222292 kubelet[2751]: E0707 00:02:31.222276 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.222292 kubelet[2751]: W0707 00:02:31.222289 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.222364 kubelet[2751]: E0707 00:02:31.222300 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.222467 kubelet[2751]: E0707 00:02:31.222456 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.222467 kubelet[2751]: W0707 00:02:31.222466 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.222522 kubelet[2751]: E0707 00:02:31.222474 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.222612 kubelet[2751]: E0707 00:02:31.222603 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.222612 kubelet[2751]: W0707 00:02:31.222611 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.222653 kubelet[2751]: E0707 00:02:31.222619 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.222773 kubelet[2751]: E0707 00:02:31.222752 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.222773 kubelet[2751]: W0707 00:02:31.222770 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.222820 kubelet[2751]: E0707 00:02:31.222777 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:31.222897 kubelet[2751]: I0707 00:02:31.222863 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ea19b78-cbc5-4bff-999a-89047d422683-kubelet-dir\") pod \"csi-node-driver-26jsb\" (UID: \"7ea19b78-cbc5-4bff-999a-89047d422683\") " pod="calico-system/csi-node-driver-26jsb" Jul 7 00:02:31.223237 kubelet[2751]: E0707 00:02:31.223221 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.223237 kubelet[2751]: W0707 00:02:31.223234 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.223320 kubelet[2751]: E0707 00:02:31.223244 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.223383 kubelet[2751]: E0707 00:02:31.223361 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.223383 kubelet[2751]: W0707 00:02:31.223366 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.223383 kubelet[2751]: E0707 00:02:31.223371 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.223554 kubelet[2751]: E0707 00:02:31.223497 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.223554 kubelet[2751]: W0707 00:02:31.223504 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.223554 kubelet[2751]: E0707 00:02:31.223512 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.223884 kubelet[2751]: I0707 00:02:31.223863 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7ea19b78-cbc5-4bff-999a-89047d422683-varrun\") pod \"csi-node-driver-26jsb\" (UID: \"7ea19b78-cbc5-4bff-999a-89047d422683\") " pod="calico-system/csi-node-driver-26jsb" Jul 7 00:02:31.224083 kubelet[2751]: E0707 00:02:31.224067 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.224083 kubelet[2751]: W0707 00:02:31.224078 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.224306 kubelet[2751]: E0707 00:02:31.224086 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:31.225733 kubelet[2751]: E0707 00:02:31.225703 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.225733 kubelet[2751]: W0707 00:02:31.225726 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.225933 kubelet[2751]: E0707 00:02:31.225747 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.226100 kubelet[2751]: E0707 00:02:31.226083 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.226167 kubelet[2751]: W0707 00:02:31.226101 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.226167 kubelet[2751]: E0707 00:02:31.226112 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.226307 kubelet[2751]: E0707 00:02:31.226281 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.226307 kubelet[2751]: W0707 00:02:31.226289 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.226307 kubelet[2751]: E0707 00:02:31.226297 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.324825 kubelet[2751]: E0707 00:02:31.324745 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.324825 kubelet[2751]: W0707 00:02:31.324764 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.324825 kubelet[2751]: E0707 00:02:31.324778 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.325714 kubelet[2751]: E0707 00:02:31.325567 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.325714 kubelet[2751]: W0707 00:02:31.325577 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.325714 kubelet[2751]: E0707 00:02:31.325586 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:31.326178 kubelet[2751]: E0707 00:02:31.325870 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.326178 kubelet[2751]: W0707 00:02:31.325878 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.326178 kubelet[2751]: E0707 00:02:31.325887 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.327002 kubelet[2751]: E0707 00:02:31.326912 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.327002 kubelet[2751]: W0707 00:02:31.326924 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.327002 kubelet[2751]: E0707 00:02:31.326934 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.327615 kubelet[2751]: E0707 00:02:31.327442 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.327615 kubelet[2751]: W0707 00:02:31.327450 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.327615 kubelet[2751]: E0707 00:02:31.327459 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.327944 kubelet[2751]: E0707 00:02:31.327937 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.327998 kubelet[2751]: W0707 00:02:31.327983 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.328091 kubelet[2751]: E0707 00:02:31.328082 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.328681 kubelet[2751]: E0707 00:02:31.328666 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.328681 kubelet[2751]: W0707 00:02:31.328678 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.328764 kubelet[2751]: E0707 00:02:31.328689 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:31.330892 kubelet[2751]: E0707 00:02:31.328981 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.330892 kubelet[2751]: W0707 00:02:31.328989 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.330892 kubelet[2751]: E0707 00:02:31.328997 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.330892 kubelet[2751]: E0707 00:02:31.329183 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.330892 kubelet[2751]: W0707 00:02:31.329188 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.330892 kubelet[2751]: E0707 00:02:31.329194 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.330892 kubelet[2751]: E0707 00:02:31.329339 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.330892 kubelet[2751]: W0707 00:02:31.329344 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.330892 kubelet[2751]: E0707 00:02:31.329350 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.330892 kubelet[2751]: E0707 00:02:31.329470 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.331265 kubelet[2751]: W0707 00:02:31.329477 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.331265 kubelet[2751]: E0707 00:02:31.329483 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.331265 kubelet[2751]: E0707 00:02:31.329583 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.331265 kubelet[2751]: W0707 00:02:31.329589 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.331265 kubelet[2751]: E0707 00:02:31.329594 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:31.331265 kubelet[2751]: E0707 00:02:31.329700 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.331265 kubelet[2751]: W0707 00:02:31.329707 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.331265 kubelet[2751]: E0707 00:02:31.329712 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.331265 kubelet[2751]: E0707 00:02:31.329834 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.331265 kubelet[2751]: W0707 00:02:31.329841 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.331511 kubelet[2751]: E0707 00:02:31.329850 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.331511 kubelet[2751]: E0707 00:02:31.329974 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.331511 kubelet[2751]: W0707 00:02:31.329981 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.331511 kubelet[2751]: E0707 00:02:31.329989 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.331511 kubelet[2751]: E0707 00:02:31.330085 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.331511 kubelet[2751]: W0707 00:02:31.330092 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.331511 kubelet[2751]: E0707 00:02:31.330100 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.331511 kubelet[2751]: E0707 00:02:31.330218 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.331511 kubelet[2751]: W0707 00:02:31.330223 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.331511 kubelet[2751]: E0707 00:02:31.330229 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:31.331748 kubelet[2751]: E0707 00:02:31.330350 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.331748 kubelet[2751]: W0707 00:02:31.330356 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.331748 kubelet[2751]: E0707 00:02:31.330364 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.331748 kubelet[2751]: E0707 00:02:31.330471 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.331748 kubelet[2751]: W0707 00:02:31.330478 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.331748 kubelet[2751]: E0707 00:02:31.330487 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.331748 kubelet[2751]: E0707 00:02:31.330571 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.331748 kubelet[2751]: W0707 00:02:31.330576 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.331748 kubelet[2751]: E0707 00:02:31.330581 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.331748 kubelet[2751]: E0707 00:02:31.330667 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.331997 kubelet[2751]: W0707 00:02:31.330672 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.331997 kubelet[2751]: E0707 00:02:31.330676 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.331997 kubelet[2751]: E0707 00:02:31.331932 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.331997 kubelet[2751]: W0707 00:02:31.331941 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.331997 kubelet[2751]: E0707 00:02:31.331953 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:31.332280 kubelet[2751]: E0707 00:02:31.332112 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.332280 kubelet[2751]: W0707 00:02:31.332131 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.332280 kubelet[2751]: E0707 00:02:31.332140 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.332280 kubelet[2751]: E0707 00:02:31.332252 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.332280 kubelet[2751]: W0707 00:02:31.332257 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.332280 kubelet[2751]: E0707 00:02:31.332262 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.332416 kubelet[2751]: E0707 00:02:31.332386 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.332416 kubelet[2751]: W0707 00:02:31.332393 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.332416 kubelet[2751]: E0707 00:02:31.332402 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:31.339400 kubelet[2751]: E0707 00:02:31.339380 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:31.339400 kubelet[2751]: W0707 00:02:31.339398 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:31.339506 kubelet[2751]: E0707 00:02:31.339414 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:32.597688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4106404716.mount: Deactivated successfully. 
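The repeated driver-call.go and plugins.go errors above (and the further burst at 00:02:33 below) all come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init: the executable is not present, so the call produces empty output and the JSON decode fails with "unexpected end of JSON input". A FlexVolume driver is expected to answer init with a small JSON status object on stdout. The sketch below is a minimal, hypothetical stand-in, not Calico's actual uds binary, written against the conventional FlexVolume response format; it is only meant to show what shape of output would satisfy the probe.

#!/usr/bin/env python3
# Hypothetical FlexVolume driver stub. Answers "init" with the JSON status
# object the kubelet's probe expects and reports "Not supported" otherwise.
import json
import sys

def main() -> None:
    cmd = sys.argv[1] if len(sys.argv) > 1 else ""
    if cmd == "init":
        # "attach": false signals that the kubelet should not issue
        # attach/detach calls for volumes handled by this driver.
        print(json.dumps({"status": "Success",
                          "capabilities": {"attach": False}}))
    else:
        print(json.dumps({"status": "Not supported",
                          "message": f"command {cmd!r} not implemented"}))

if __name__ == "__main__":
    main()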
Jul 7 00:02:32.679213 kubelet[2751]: E0707 00:02:32.679183 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26jsb" podUID="7ea19b78-cbc5-4bff-999a-89047d422683" Jul 7 00:02:33.145654 containerd[1551]: time="2025-07-07T00:02:33.145614642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:33.146243 containerd[1551]: time="2025-07-07T00:02:33.146113410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 00:02:33.146739 containerd[1551]: time="2025-07-07T00:02:33.146718482Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:33.148547 containerd[1551]: time="2025-07-07T00:02:33.148528661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:33.149194 containerd[1551]: time="2025-07-07T00:02:33.149089078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.049581512s" Jul 7 00:02:33.149194 containerd[1551]: time="2025-07-07T00:02:33.149114213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 00:02:33.150147 containerd[1551]: time="2025-07-07T00:02:33.150030993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 00:02:33.167258 containerd[1551]: time="2025-07-07T00:02:33.167095935Z" level=info msg="CreateContainer within sandbox \"47c3fa127dd05b11ab71d350f15daf0ca10b11bff449decfb84f4295ff6f6236\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 00:02:33.199815 containerd[1551]: time="2025-07-07T00:02:33.199728017Z" level=info msg="CreateContainer within sandbox \"47c3fa127dd05b11ab71d350f15daf0ca10b11bff449decfb84f4295ff6f6236\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"78ab6ae587ece57225578f103d564862e062f3e1b6e20ad7706d4ec3d69ffa35\"" Jul 7 00:02:33.217169 containerd[1551]: time="2025-07-07T00:02:33.217140721Z" level=info msg="StartContainer for \"78ab6ae587ece57225578f103d564862e062f3e1b6e20ad7706d4ec3d69ffa35\"" Jul 7 00:02:33.278290 systemd[1]: Started cri-containerd-78ab6ae587ece57225578f103d564862e062f3e1b6e20ad7706d4ec3d69ffa35.scope - libcontainer container 78ab6ae587ece57225578f103d564862e062f3e1b6e20ad7706d4ec3d69ffa35. 
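Two figures in the pull messages above allow a rough throughput estimate for ghcr.io/flatcar/calico/typha:v3.30.2: containerd reports bytes read=35233364 and a pull time of 2.049581512s. The sketch below just divides the two logged numbers; it ignores compression and any reused layers, so treat the result as a back-of-the-envelope figure only.

# Back-of-the-envelope pull rate from the two figures logged above.
bytes_read = 35_233_364        # "bytes read=35233364"
pull_seconds = 2.049581512     # "... in 2.049581512s"
mib_per_s = bytes_read / pull_seconds / (1024 * 1024)
print(f"~{mib_per_s:.1f} MiB/s effective pull rate")   # roughly 16.4 MiB/s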
Jul 7 00:02:33.338537 containerd[1551]: time="2025-07-07T00:02:33.338440566Z" level=info msg="StartContainer for \"78ab6ae587ece57225578f103d564862e062f3e1b6e20ad7706d4ec3d69ffa35\" returns successfully" Jul 7 00:02:33.759172 kubelet[2751]: E0707 00:02:33.759142 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.759172 kubelet[2751]: W0707 00:02:33.759164 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.759543 kubelet[2751]: E0707 00:02:33.759186 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.759543 kubelet[2751]: E0707 00:02:33.759326 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.759543 kubelet[2751]: W0707 00:02:33.759333 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.759543 kubelet[2751]: E0707 00:02:33.759340 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.759543 kubelet[2751]: E0707 00:02:33.759449 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.759543 kubelet[2751]: W0707 00:02:33.759455 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.759543 kubelet[2751]: E0707 00:02:33.759462 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.773281 kubelet[2751]: E0707 00:02:33.759603 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.773281 kubelet[2751]: W0707 00:02:33.759609 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.773281 kubelet[2751]: E0707 00:02:33.759616 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.773281 kubelet[2751]: E0707 00:02:33.759731 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.773281 kubelet[2751]: W0707 00:02:33.759737 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.773281 kubelet[2751]: E0707 00:02:33.759743 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:33.773281 kubelet[2751]: E0707 00:02:33.759874 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.773281 kubelet[2751]: W0707 00:02:33.759880 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.773281 kubelet[2751]: E0707 00:02:33.759887 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.773281 kubelet[2751]: E0707 00:02:33.760011 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.776315 kubelet[2751]: W0707 00:02:33.760017 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.776315 kubelet[2751]: E0707 00:02:33.760023 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.776315 kubelet[2751]: E0707 00:02:33.760156 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.776315 kubelet[2751]: W0707 00:02:33.760162 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.776315 kubelet[2751]: E0707 00:02:33.760169 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.776315 kubelet[2751]: E0707 00:02:33.760301 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.776315 kubelet[2751]: W0707 00:02:33.760307 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.776315 kubelet[2751]: E0707 00:02:33.760313 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.776315 kubelet[2751]: E0707 00:02:33.760430 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.776315 kubelet[2751]: W0707 00:02:33.760438 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.776591 kubelet[2751]: E0707 00:02:33.760444 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:33.776591 kubelet[2751]: E0707 00:02:33.760567 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.776591 kubelet[2751]: W0707 00:02:33.760573 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.776591 kubelet[2751]: E0707 00:02:33.760580 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.776591 kubelet[2751]: E0707 00:02:33.760698 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.776591 kubelet[2751]: W0707 00:02:33.760704 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.776591 kubelet[2751]: E0707 00:02:33.760711 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.776591 kubelet[2751]: E0707 00:02:33.760832 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.776591 kubelet[2751]: W0707 00:02:33.760838 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.776591 kubelet[2751]: E0707 00:02:33.760844 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.785074 kubelet[2751]: E0707 00:02:33.760970 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.785074 kubelet[2751]: W0707 00:02:33.760976 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.785074 kubelet[2751]: E0707 00:02:33.760982 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.785074 kubelet[2751]: E0707 00:02:33.761097 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.785074 kubelet[2751]: W0707 00:02:33.761102 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.785074 kubelet[2751]: E0707 00:02:33.761108 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:33.838760 kubelet[2751]: I0707 00:02:33.833192 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-547f4d5db4-l4rk7" podStartSLOduration=1.7747392990000002 podStartE2EDuration="3.825308581s" podCreationTimestamp="2025-07-07 00:02:30 +0000 UTC" firstStartedPulling="2025-07-07 00:02:31.099242532 +0000 UTC m=+16.672670542" lastFinishedPulling="2025-07-07 00:02:33.149811808 +0000 UTC m=+18.723239824" observedRunningTime="2025-07-07 00:02:33.82417926 +0000 UTC m=+19.397607285" watchObservedRunningTime="2025-07-07 00:02:33.825308581 +0000 UTC m=+19.398736608" Jul 7 00:02:33.850688 kubelet[2751]: E0707 00:02:33.850665 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.850930 kubelet[2751]: W0707 00:02:33.850791 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.850930 kubelet[2751]: E0707 00:02:33.850815 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.851110 kubelet[2751]: E0707 00:02:33.851100 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.851247 kubelet[2751]: W0707 00:02:33.851201 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.851247 kubelet[2751]: E0707 00:02:33.851216 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.851426 kubelet[2751]: E0707 00:02:33.851403 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.851426 kubelet[2751]: W0707 00:02:33.851420 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.851512 kubelet[2751]: E0707 00:02:33.851435 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.851608 kubelet[2751]: E0707 00:02:33.851591 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.851608 kubelet[2751]: W0707 00:02:33.851603 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.851608 kubelet[2751]: E0707 00:02:33.851611 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:33.851769 kubelet[2751]: E0707 00:02:33.851734 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.851769 kubelet[2751]: W0707 00:02:33.851740 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.851769 kubelet[2751]: E0707 00:02:33.851752 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.851935 kubelet[2751]: E0707 00:02:33.851919 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.851935 kubelet[2751]: W0707 00:02:33.851933 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.852021 kubelet[2751]: E0707 00:02:33.851943 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.852341 kubelet[2751]: E0707 00:02:33.852265 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.852341 kubelet[2751]: W0707 00:02:33.852278 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.852341 kubelet[2751]: E0707 00:02:33.852291 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.852523 kubelet[2751]: E0707 00:02:33.852444 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.852523 kubelet[2751]: W0707 00:02:33.852452 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.852523 kubelet[2751]: E0707 00:02:33.852460 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.852680 kubelet[2751]: E0707 00:02:33.852599 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.852680 kubelet[2751]: W0707 00:02:33.852608 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.852680 kubelet[2751]: E0707 00:02:33.852619 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:33.852787 kubelet[2751]: E0707 00:02:33.852767 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.852787 kubelet[2751]: W0707 00:02:33.852774 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.852787 kubelet[2751]: E0707 00:02:33.852783 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.852973 kubelet[2751]: E0707 00:02:33.852936 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.852973 kubelet[2751]: W0707 00:02:33.852942 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.852973 kubelet[2751]: E0707 00:02:33.852949 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.853217 kubelet[2751]: E0707 00:02:33.853197 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.853217 kubelet[2751]: W0707 00:02:33.853212 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.853308 kubelet[2751]: E0707 00:02:33.853222 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.853405 kubelet[2751]: E0707 00:02:33.853389 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.853405 kubelet[2751]: W0707 00:02:33.853400 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.853405 kubelet[2751]: E0707 00:02:33.853407 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.853567 kubelet[2751]: E0707 00:02:33.853553 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.853567 kubelet[2751]: W0707 00:02:33.853563 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.853655 kubelet[2751]: E0707 00:02:33.853573 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:33.853738 kubelet[2751]: E0707 00:02:33.853720 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.853738 kubelet[2751]: W0707 00:02:33.853732 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.853738 kubelet[2751]: E0707 00:02:33.853739 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.853924 kubelet[2751]: E0707 00:02:33.853903 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.853924 kubelet[2751]: W0707 00:02:33.853916 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.853994 kubelet[2751]: E0707 00:02:33.853927 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.854253 kubelet[2751]: E0707 00:02:33.854232 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.854253 kubelet[2751]: W0707 00:02:33.854248 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.854334 kubelet[2751]: E0707 00:02:33.854259 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:33.854425 kubelet[2751]: E0707 00:02:33.854411 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:33.854425 kubelet[2751]: W0707 00:02:33.854422 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:33.854494 kubelet[2751]: E0707 00:02:33.854432 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:34.613133 kubelet[2751]: E0707 00:02:34.613077 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26jsb" podUID="7ea19b78-cbc5-4bff-999a-89047d422683" Jul 7 00:02:34.679945 kubelet[2751]: I0707 00:02:34.679444 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:02:34.715354 containerd[1551]: time="2025-07-07T00:02:34.715287325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:34.715878 containerd[1551]: time="2025-07-07T00:02:34.715829904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 00:02:34.716187 containerd[1551]: time="2025-07-07T00:02:34.716165113Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:34.717601 containerd[1551]: time="2025-07-07T00:02:34.717577339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:34.718456 containerd[1551]: time="2025-07-07T00:02:34.718208822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.568162152s" Jul 7 00:02:34.718456 containerd[1551]: time="2025-07-07T00:02:34.718235721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 00:02:34.721368 containerd[1551]: time="2025-07-07T00:02:34.721265282Z" level=info msg="CreateContainer within sandbox \"8eda3d1e2069eed68a17c181e93c9256dac6c883da6859bc6a2aae135c82a494\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 00:02:34.732024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3245686769.mount: Deactivated successfully. 
Jul 7 00:02:34.739275 containerd[1551]: time="2025-07-07T00:02:34.739247904Z" level=info msg="CreateContainer within sandbox \"8eda3d1e2069eed68a17c181e93c9256dac6c883da6859bc6a2aae135c82a494\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c271c6b194637bd117d4a6359338408b9d53dec2d15c0f216746c58b9baa16c5\"" Jul 7 00:02:34.740517 containerd[1551]: time="2025-07-07T00:02:34.740468392Z" level=info msg="StartContainer for \"c271c6b194637bd117d4a6359338408b9d53dec2d15c0f216746c58b9baa16c5\"" Jul 7 00:02:34.767687 kubelet[2751]: E0707 00:02:34.767666 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:34.767687 kubelet[2751]: W0707 00:02:34.767680 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:34.770396 kubelet[2751]: E0707 00:02:34.767693 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:34.770396 kubelet[2751]: E0707 00:02:34.767801 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:34.770396 kubelet[2751]: W0707 00:02:34.767808 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:34.770396 kubelet[2751]: E0707 00:02:34.767836 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:34.770396 kubelet[2751]: E0707 00:02:34.767956 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:34.770396 kubelet[2751]: W0707 00:02:34.767960 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:34.770396 kubelet[2751]: E0707 00:02:34.767966 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:34.770396 kubelet[2751]: E0707 00:02:34.768068 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:34.770396 kubelet[2751]: W0707 00:02:34.768073 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:34.770396 kubelet[2751]: E0707 00:02:34.768079 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:34.770320 systemd[1]: Started cri-containerd-c271c6b194637bd117d4a6359338408b9d53dec2d15c0f216746c58b9baa16c5.scope - libcontainer container c271c6b194637bd117d4a6359338408b9d53dec2d15c0f216746c58b9baa16c5. 
Jul 7 00:02:34.771139 kubelet[2751]: E0707 00:02:34.768183 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:34.771139 kubelet[2751]: W0707 00:02:34.768188 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:34.771139 kubelet[2751]: E0707 00:02:34.768193 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:34.771139 kubelet[2751]: E0707 00:02:34.768298 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:34.771139 kubelet[2751]: W0707 00:02:34.768302 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:34.771139 kubelet[2751]: E0707 00:02:34.768307 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:34.771139 kubelet[2751]: E0707 00:02:34.768414 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:34.771139 kubelet[2751]: W0707 00:02:34.768419 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:34.771139 kubelet[2751]: E0707 00:02:34.768423 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:34.771139 kubelet[2751]: E0707 00:02:34.768520 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:34.771862 kubelet[2751]: W0707 00:02:34.768525 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:34.771862 kubelet[2751]: E0707 00:02:34.768529 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:34.771862 kubelet[2751]: E0707 00:02:34.768635 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:34.771862 kubelet[2751]: W0707 00:02:34.768640 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:34.771862 kubelet[2751]: E0707 00:02:34.768646 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:34.771862 kubelet[2751]: E0707 00:02:34.768751 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:34.771862 kubelet[2751]: W0707 00:02:34.768755 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:34.771862 kubelet[2751]: E0707 00:02:34.768760 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:34.771862 kubelet[2751]: E0707 00:02:34.768852 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:34.771862 kubelet[2751]: W0707 00:02:34.768857 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:34.772112 kubelet[2751]: E0707 00:02:34.768862 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:34.772112 kubelet[2751]: E0707 00:02:34.769168 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:34.772112 kubelet[2751]: W0707 00:02:34.769173 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:34.772112 kubelet[2751]: E0707 00:02:34.769189 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:34.772112 kubelet[2751]: E0707 00:02:34.769319 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:34.772112 kubelet[2751]: W0707 00:02:34.769325 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:34.772112 kubelet[2751]: E0707 00:02:34.769330 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:34.772112 kubelet[2751]: E0707 00:02:34.769446 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:34.772112 kubelet[2751]: W0707 00:02:34.769450 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:34.772112 kubelet[2751]: E0707 00:02:34.769455 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:02:34.772382 kubelet[2751]: E0707 00:02:34.769551 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:02:34.772382 kubelet[2751]: W0707 00:02:34.769556 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:02:34.772382 kubelet[2751]: E0707 00:02:34.769561 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:02:34.796247 containerd[1551]: time="2025-07-07T00:02:34.796205873Z" level=info msg="StartContainer for \"c271c6b194637bd117d4a6359338408b9d53dec2d15c0f216746c58b9baa16c5\" returns successfully" Jul 7 00:02:34.800463 systemd[1]: cri-containerd-c271c6b194637bd117d4a6359338408b9d53dec2d15c0f216746c58b9baa16c5.scope: Deactivated successfully. Jul 7 00:02:35.103387 containerd[1551]: time="2025-07-07T00:02:35.082472922Z" level=info msg="shim disconnected" id=c271c6b194637bd117d4a6359338408b9d53dec2d15c0f216746c58b9baa16c5 namespace=k8s.io Jul 7 00:02:35.103573 containerd[1551]: time="2025-07-07T00:02:35.103389870Z" level=warning msg="cleaning up after shim disconnected" id=c271c6b194637bd117d4a6359338408b9d53dec2d15c0f216746c58b9baa16c5 namespace=k8s.io Jul 7 00:02:35.103573 containerd[1551]: time="2025-07-07T00:02:35.103402465Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:02:35.112826 containerd[1551]: time="2025-07-07T00:02:35.112787800Z" level=warning msg="cleanup warnings time=\"2025-07-07T00:02:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 00:02:35.161399 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c271c6b194637bd117d4a6359338408b9d53dec2d15c0f216746c58b9baa16c5-rootfs.mount: Deactivated successfully. 
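[Editor's note] The repeated driver-call.go / plugins.go errors above come from kubelet probing the FlexVolume plugin directory and trying to exec /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the `init` argument before the Calico flexvol-driver init container (started in the entries above) has installed that binary. With no executable to run, the captured output is empty, and decoding an empty byte slice as JSON yields exactly the "unexpected end of JSON input" error in the log. A minimal Go sketch reproducing this, with an illustrative struct (not kubelet's actual type) and an example of the well-formed `init` response a driver is expected to print:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus approximates the JSON shape a FlexVolume driver prints on stdout.
// Illustrative only; not kubelet's actual type.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The uds binary is not installed yet, so kubelet captures empty output.
	var st driverStatus
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println("init failed:", err) // prints: unexpected end of JSON input
	}

	// Once the flexvol-driver init container has installed the binary, a healthy
	// `init` call is expected to print something like this instead:
	ok := []byte(`{"status": "Success", "capabilities": {"attach": false}}`)
	if err := json.Unmarshal(ok, &st); err == nil {
		fmt.Printf("init ok: %+v\n", st)
	}
}
```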
Jul 7 00:02:35.682628 containerd[1551]: time="2025-07-07T00:02:35.682475081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 00:02:36.615267 kubelet[2751]: E0707 00:02:36.615187 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26jsb" podUID="7ea19b78-cbc5-4bff-999a-89047d422683" Jul 7 00:02:38.613850 kubelet[2751]: E0707 00:02:38.613825 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26jsb" podUID="7ea19b78-cbc5-4bff-999a-89047d422683" Jul 7 00:02:39.601793 containerd[1551]: time="2025-07-07T00:02:39.601752660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:39.632445 containerd[1551]: time="2025-07-07T00:02:39.632399631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 00:02:39.658035 containerd[1551]: time="2025-07-07T00:02:39.657998840Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:39.686811 containerd[1551]: time="2025-07-07T00:02:39.686782598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:39.687382 containerd[1551]: time="2025-07-07T00:02:39.687210657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.004581117s" Jul 7 00:02:39.687382 containerd[1551]: time="2025-07-07T00:02:39.687234258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 00:02:39.711834 containerd[1551]: time="2025-07-07T00:02:39.711726069Z" level=info msg="CreateContainer within sandbox \"8eda3d1e2069eed68a17c181e93c9256dac6c883da6859bc6a2aae135c82a494\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 00:02:40.434040 containerd[1551]: time="2025-07-07T00:02:40.433915349Z" level=info msg="CreateContainer within sandbox \"8eda3d1e2069eed68a17c181e93c9256dac6c883da6859bc6a2aae135c82a494\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"46b37133a8375976931f062679ff696802bf69ca489a4663b5e02afc544ccc15\"" Jul 7 00:02:40.455157 containerd[1551]: time="2025-07-07T00:02:40.454257204Z" level=info msg="StartContainer for \"46b37133a8375976931f062679ff696802bf69ca489a4663b5e02afc544ccc15\"" Jul 7 00:02:40.494269 systemd[1]: Started cri-containerd-46b37133a8375976931f062679ff696802bf69ca489a4663b5e02afc544ccc15.scope - libcontainer container 46b37133a8375976931f062679ff696802bf69ca489a4663b5e02afc544ccc15. 
Jul 7 00:02:40.541153 containerd[1551]: time="2025-07-07T00:02:40.541066698Z" level=info msg="StartContainer for \"46b37133a8375976931f062679ff696802bf69ca489a4663b5e02afc544ccc15\" returns successfully" Jul 7 00:02:40.613834 kubelet[2751]: E0707 00:02:40.613801 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26jsb" podUID="7ea19b78-cbc5-4bff-999a-89047d422683" Jul 7 00:02:42.613083 kubelet[2751]: E0707 00:02:42.612689 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26jsb" podUID="7ea19b78-cbc5-4bff-999a-89047d422683" Jul 7 00:02:44.079104 systemd[1]: cri-containerd-46b37133a8375976931f062679ff696802bf69ca489a4663b5e02afc544ccc15.scope: Deactivated successfully. Jul 7 00:02:44.109186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46b37133a8375976931f062679ff696802bf69ca489a4663b5e02afc544ccc15-rootfs.mount: Deactivated successfully. Jul 7 00:02:44.141134 containerd[1551]: time="2025-07-07T00:02:44.141056187Z" level=info msg="shim disconnected" id=46b37133a8375976931f062679ff696802bf69ca489a4663b5e02afc544ccc15 namespace=k8s.io Jul 7 00:02:44.141134 containerd[1551]: time="2025-07-07T00:02:44.141102015Z" level=warning msg="cleaning up after shim disconnected" id=46b37133a8375976931f062679ff696802bf69ca489a4663b5e02afc544ccc15 namespace=k8s.io Jul 7 00:02:44.141134 containerd[1551]: time="2025-07-07T00:02:44.141125986Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:02:44.170759 containerd[1551]: time="2025-07-07T00:02:44.170705876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 00:02:44.281616 kubelet[2751]: I0707 00:02:44.149824 2751 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 00:02:44.735867 systemd[1]: Created slice kubepods-besteffort-pod7ea19b78_cbc5_4bff_999a_89047d422683.slice - libcontainer container kubepods-besteffort-pod7ea19b78_cbc5_4bff_999a_89047d422683.slice. Jul 7 00:02:44.799681 kubelet[2751]: I0707 00:02:44.799660 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9t9t\" (UniqueName: \"kubernetes.io/projected/b3c62c35-57e3-46c3-83b5-1109b949cad4-kube-api-access-b9t9t\") pod \"coredns-674b8bbfcf-6n2s9\" (UID: \"b3c62c35-57e3-46c3-83b5-1109b949cad4\") " pod="kube-system/coredns-674b8bbfcf-6n2s9" Jul 7 00:02:44.800216 kubelet[2751]: I0707 00:02:44.800202 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3c62c35-57e3-46c3-83b5-1109b949cad4-config-volume\") pod \"coredns-674b8bbfcf-6n2s9\" (UID: \"b3c62c35-57e3-46c3-83b5-1109b949cad4\") " pod="kube-system/coredns-674b8bbfcf-6n2s9" Jul 7 00:02:44.800365 systemd[1]: Created slice kubepods-besteffort-pod4d25b825_a10e_406a_bd66_7d8888f8d3a4.slice - libcontainer container kubepods-besteffort-pod4d25b825_a10e_406a_bd66_7d8888f8d3a4.slice. 
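[Editor's note] The recurring "NetworkReady=false ... cni plugin not initialized" messages above persist until a CNI network configuration is present on the node; the install-cni container pulled and started in the preceding entries is what writes Calico's conflist into the CNI configuration directory, after which the node reports ready ("Fast updating node status as it just became ready"). A rough, hedged sketch of that readiness condition, assuming the conventional defaults /etc/cni/net.d and a *.conflist file such as 10-calico.conflist (this is an approximation, not kubelet's actual check):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Approximation of the condition behind the repeated
// "NetworkReady=false ... cni plugin not initialized" messages: the node
// network stays NotReady until a CNI config file exists in the conf dir.
func main() {
	confDir := "/etc/cni/net.d" // assumed conventional default

	matches, err := filepath.Glob(filepath.Join(confDir, "*.conflist"))
	if err != nil || len(matches) == 0 {
		fmt.Println("NetworkReady=false: no CNI network config in", confDir)
		os.Exit(1)
	}
	// Calico's install-cni container eventually drops e.g. 10-calico.conflist here,
	// after which pod sandbox creation can proceed.
	fmt.Println("NetworkReady=true: using", matches[0])
}
```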
Jul 7 00:02:44.801021 containerd[1551]: time="2025-07-07T00:02:44.800951862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26jsb,Uid:7ea19b78-cbc5-4bff-999a-89047d422683,Namespace:calico-system,Attempt:0,}" Jul 7 00:02:44.824199 systemd[1]: Created slice kubepods-besteffort-pod7ce3da94_1f58_487b_979e_8f10e33da61e.slice - libcontainer container kubepods-besteffort-pod7ce3da94_1f58_487b_979e_8f10e33da61e.slice. Jul 7 00:02:44.834373 systemd[1]: Created slice kubepods-burstable-pod7938ba32_3622_4550_916a_e1e1fa111816.slice - libcontainer container kubepods-burstable-pod7938ba32_3622_4550_916a_e1e1fa111816.slice. Jul 7 00:02:44.841765 systemd[1]: Created slice kubepods-besteffort-pod1df52e7b_a2e0_4431_9a2a_1ab12c493fd5.slice - libcontainer container kubepods-besteffort-pod1df52e7b_a2e0_4431_9a2a_1ab12c493fd5.slice. Jul 7 00:02:44.849363 systemd[1]: Created slice kubepods-besteffort-pod14bb0f88_ca72_41a6_bf6b_278ec258254c.slice - libcontainer container kubepods-besteffort-pod14bb0f88_ca72_41a6_bf6b_278ec258254c.slice. Jul 7 00:02:44.855381 systemd[1]: Created slice kubepods-besteffort-pod390c0a19_48f4_4797_8196_5ec15c21cefb.slice - libcontainer container kubepods-besteffort-pod390c0a19_48f4_4797_8196_5ec15c21cefb.slice. Jul 7 00:02:44.858485 systemd[1]: Created slice kubepods-burstable-podb3c62c35_57e3_46c3_83b5_1109b949cad4.slice - libcontainer container kubepods-burstable-podb3c62c35_57e3_46c3_83b5_1109b949cad4.slice. Jul 7 00:02:44.900980 kubelet[2751]: I0707 00:02:44.900934 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ce3da94-1f58-487b-979e-8f10e33da61e-whisker-backend-key-pair\") pod \"whisker-7d6c4dd986-p2dsm\" (UID: \"7ce3da94-1f58-487b-979e-8f10e33da61e\") " pod="calico-system/whisker-7d6c4dd986-p2dsm" Jul 7 00:02:44.900980 kubelet[2751]: I0707 00:02:44.900961 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ce3da94-1f58-487b-979e-8f10e33da61e-whisker-ca-bundle\") pod \"whisker-7d6c4dd986-p2dsm\" (UID: \"7ce3da94-1f58-487b-979e-8f10e33da61e\") " pod="calico-system/whisker-7d6c4dd986-p2dsm" Jul 7 00:02:44.902025 kubelet[2751]: I0707 00:02:44.901271 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjkcc\" (UniqueName: \"kubernetes.io/projected/7ce3da94-1f58-487b-979e-8f10e33da61e-kube-api-access-tjkcc\") pod \"whisker-7d6c4dd986-p2dsm\" (UID: \"7ce3da94-1f58-487b-979e-8f10e33da61e\") " pod="calico-system/whisker-7d6c4dd986-p2dsm" Jul 7 00:02:44.902025 kubelet[2751]: I0707 00:02:44.901288 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14bb0f88-ca72-41a6-bf6b-278ec258254c-config\") pod \"goldmane-768f4c5c69-m5kcn\" (UID: \"14bb0f88-ca72-41a6-bf6b-278ec258254c\") " pod="calico-system/goldmane-768f4c5c69-m5kcn" Jul 7 00:02:44.902025 kubelet[2751]: I0707 00:02:44.901300 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7938ba32-3622-4550-916a-e1e1fa111816-config-volume\") pod \"coredns-674b8bbfcf-ktt6k\" (UID: \"7938ba32-3622-4550-916a-e1e1fa111816\") " pod="kube-system/coredns-674b8bbfcf-ktt6k" Jul 7 00:02:44.902025 kubelet[2751]: I0707 00:02:44.901310 2751 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzrl7\" (UniqueName: \"kubernetes.io/projected/7938ba32-3622-4550-916a-e1e1fa111816-kube-api-access-jzrl7\") pod \"coredns-674b8bbfcf-ktt6k\" (UID: \"7938ba32-3622-4550-916a-e1e1fa111816\") " pod="kube-system/coredns-674b8bbfcf-ktt6k" Jul 7 00:02:44.902025 kubelet[2751]: I0707 00:02:44.901320 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d25b825-a10e-406a-bd66-7d8888f8d3a4-tigera-ca-bundle\") pod \"calico-kube-controllers-6c697bf8b4-g6224\" (UID: \"4d25b825-a10e-406a-bd66-7d8888f8d3a4\") " pod="calico-system/calico-kube-controllers-6c697bf8b4-g6224" Jul 7 00:02:44.902187 kubelet[2751]: I0707 00:02:44.901509 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1df52e7b-a2e0-4431-9a2a-1ab12c493fd5-calico-apiserver-certs\") pod \"calico-apiserver-7cb89db9fc-b2bvr\" (UID: \"1df52e7b-a2e0-4431-9a2a-1ab12c493fd5\") " pod="calico-apiserver/calico-apiserver-7cb89db9fc-b2bvr" Jul 7 00:02:44.902187 kubelet[2751]: I0707 00:02:44.901531 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsh2m\" (UniqueName: \"kubernetes.io/projected/14bb0f88-ca72-41a6-bf6b-278ec258254c-kube-api-access-qsh2m\") pod \"goldmane-768f4c5c69-m5kcn\" (UID: \"14bb0f88-ca72-41a6-bf6b-278ec258254c\") " pod="calico-system/goldmane-768f4c5c69-m5kcn" Jul 7 00:02:44.902187 kubelet[2751]: I0707 00:02:44.901828 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14bb0f88-ca72-41a6-bf6b-278ec258254c-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-m5kcn\" (UID: \"14bb0f88-ca72-41a6-bf6b-278ec258254c\") " pod="calico-system/goldmane-768f4c5c69-m5kcn" Jul 7 00:02:44.903350 kubelet[2751]: I0707 00:02:44.903241 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g54x9\" (UniqueName: \"kubernetes.io/projected/4d25b825-a10e-406a-bd66-7d8888f8d3a4-kube-api-access-g54x9\") pod \"calico-kube-controllers-6c697bf8b4-g6224\" (UID: \"4d25b825-a10e-406a-bd66-7d8888f8d3a4\") " pod="calico-system/calico-kube-controllers-6c697bf8b4-g6224" Jul 7 00:02:44.903350 kubelet[2751]: I0707 00:02:44.903263 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9j6c\" (UniqueName: \"kubernetes.io/projected/1df52e7b-a2e0-4431-9a2a-1ab12c493fd5-kube-api-access-z9j6c\") pod \"calico-apiserver-7cb89db9fc-b2bvr\" (UID: \"1df52e7b-a2e0-4431-9a2a-1ab12c493fd5\") " pod="calico-apiserver/calico-apiserver-7cb89db9fc-b2bvr" Jul 7 00:02:44.903350 kubelet[2751]: I0707 00:02:44.903275 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/390c0a19-48f4-4797-8196-5ec15c21cefb-calico-apiserver-certs\") pod \"calico-apiserver-7cb89db9fc-bnjpd\" (UID: \"390c0a19-48f4-4797-8196-5ec15c21cefb\") " pod="calico-apiserver/calico-apiserver-7cb89db9fc-bnjpd" Jul 7 00:02:44.903350 kubelet[2751]: I0707 00:02:44.903306 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/14bb0f88-ca72-41a6-bf6b-278ec258254c-goldmane-key-pair\") pod \"goldmane-768f4c5c69-m5kcn\" (UID: \"14bb0f88-ca72-41a6-bf6b-278ec258254c\") " pod="calico-system/goldmane-768f4c5c69-m5kcn" Jul 7 00:02:44.903350 kubelet[2751]: I0707 00:02:44.903326 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72jv2\" (UniqueName: \"kubernetes.io/projected/390c0a19-48f4-4797-8196-5ec15c21cefb-kube-api-access-72jv2\") pod \"calico-apiserver-7cb89db9fc-bnjpd\" (UID: \"390c0a19-48f4-4797-8196-5ec15c21cefb\") " pod="calico-apiserver/calico-apiserver-7cb89db9fc-bnjpd" Jul 7 00:02:45.115006 containerd[1551]: time="2025-07-07T00:02:45.114960149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c697bf8b4-g6224,Uid:4d25b825-a10e-406a-bd66-7d8888f8d3a4,Namespace:calico-system,Attempt:0,}" Jul 7 00:02:45.128519 containerd[1551]: time="2025-07-07T00:02:45.128491690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d6c4dd986-p2dsm,Uid:7ce3da94-1f58-487b-979e-8f10e33da61e,Namespace:calico-system,Attempt:0,}" Jul 7 00:02:45.141262 containerd[1551]: time="2025-07-07T00:02:45.141239019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ktt6k,Uid:7938ba32-3622-4550-916a-e1e1fa111816,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:45.144746 containerd[1551]: time="2025-07-07T00:02:45.144614855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb89db9fc-b2bvr,Uid:1df52e7b-a2e0-4431-9a2a-1ab12c493fd5,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:02:45.154147 containerd[1551]: time="2025-07-07T00:02:45.154114516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-m5kcn,Uid:14bb0f88-ca72-41a6-bf6b-278ec258254c,Namespace:calico-system,Attempt:0,}" Jul 7 00:02:45.157628 containerd[1551]: time="2025-07-07T00:02:45.157610475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb89db9fc-bnjpd,Uid:390c0a19-48f4-4797-8196-5ec15c21cefb,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:02:45.162098 containerd[1551]: time="2025-07-07T00:02:45.162075840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6n2s9,Uid:b3c62c35-57e3-46c3-83b5-1109b949cad4,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:45.402095 containerd[1551]: time="2025-07-07T00:02:45.401768591Z" level=error msg="Failed to destroy network for sandbox \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.404539 containerd[1551]: time="2025-07-07T00:02:45.404497680Z" level=error msg="encountered an error cleaning up failed sandbox \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.404632 containerd[1551]: time="2025-07-07T00:02:45.404618494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d6c4dd986-p2dsm,Uid:7ce3da94-1f58-487b-979e-8f10e33da61e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.404935 containerd[1551]: time="2025-07-07T00:02:45.404920709Z" level=error msg="Failed to destroy network for sandbox \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.405165 containerd[1551]: time="2025-07-07T00:02:45.405139341Z" level=error msg="Failed to destroy network for sandbox \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.405250 containerd[1551]: time="2025-07-07T00:02:45.405236822Z" level=error msg="encountered an error cleaning up failed sandbox \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.405304 containerd[1551]: time="2025-07-07T00:02:45.405292656Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6n2s9,Uid:b3c62c35-57e3-46c3-83b5-1109b949cad4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.405560 containerd[1551]: time="2025-07-07T00:02:45.405499114Z" level=error msg="Failed to destroy network for sandbox \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.405778 containerd[1551]: time="2025-07-07T00:02:45.405694333Z" level=error msg="encountered an error cleaning up failed sandbox \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.405778 containerd[1551]: time="2025-07-07T00:02:45.405715087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-m5kcn,Uid:14bb0f88-ca72-41a6-bf6b-278ec258254c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.405778 containerd[1551]: time="2025-07-07T00:02:45.404933414Z" level=error msg="Failed to destroy network for sandbox 
\"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.405778 containerd[1551]: time="2025-07-07T00:02:45.405767674Z" level=error msg="Failed to destroy network for sandbox \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.406050 containerd[1551]: time="2025-07-07T00:02:45.405992648Z" level=error msg="encountered an error cleaning up failed sandbox \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.406050 containerd[1551]: time="2025-07-07T00:02:45.406012634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb89db9fc-b2bvr,Uid:1df52e7b-a2e0-4431-9a2a-1ab12c493fd5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.406050 containerd[1551]: time="2025-07-07T00:02:45.405320398Z" level=error msg="encountered an error cleaning up failed sandbox \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.406050 containerd[1551]: time="2025-07-07T00:02:45.404947628Z" level=error msg="Failed to destroy network for sandbox \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.406278 containerd[1551]: time="2025-07-07T00:02:45.406015677Z" level=error msg="encountered an error cleaning up failed sandbox \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.406278 containerd[1551]: time="2025-07-07T00:02:45.406229076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26jsb,Uid:7ea19b78-cbc5-4bff-999a-89047d422683,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.406278 containerd[1551]: time="2025-07-07T00:02:45.406042359Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ktt6k,Uid:7938ba32-3622-4550-916a-e1e1fa111816,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.406509 containerd[1551]: time="2025-07-07T00:02:45.406443952Z" level=error msg="encountered an error cleaning up failed sandbox \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.406509 containerd[1551]: time="2025-07-07T00:02:45.406462982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb89db9fc-bnjpd,Uid:390c0a19-48f4-4797-8196-5ec15c21cefb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.411550 containerd[1551]: time="2025-07-07T00:02:45.411528154Z" level=error msg="Failed to destroy network for sandbox \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.411992 containerd[1551]: time="2025-07-07T00:02:45.411773275Z" level=error msg="encountered an error cleaning up failed sandbox \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.411992 containerd[1551]: time="2025-07-07T00:02:45.411801069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c697bf8b4-g6224,Uid:4d25b825-a10e-406a-bd66-7d8888f8d3a4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.430397 kubelet[2751]: E0707 00:02:45.430330 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.430397 kubelet[2751]: E0707 00:02:45.430361 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.434568 kubelet[2751]: E0707 00:02:45.434501 2751 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-m5kcn" Jul 7 00:02:45.434568 kubelet[2751]: E0707 00:02:45.434516 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.439549 kubelet[2751]: E0707 00:02:45.434534 2751 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6n2s9" Jul 7 00:02:45.446839 kubelet[2751]: E0707 00:02:45.446113 2751 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-m5kcn" Jul 7 00:02:45.446839 kubelet[2751]: E0707 00:02:45.446178 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-m5kcn_calico-system(14bb0f88-ca72-41a6-bf6b-278ec258254c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-m5kcn_calico-system(14bb0f88-ca72-41a6-bf6b-278ec258254c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-m5kcn" podUID="14bb0f88-ca72-41a6-bf6b-278ec258254c" Jul 7 00:02:45.446839 kubelet[2751]: E0707 00:02:45.446405 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.446982 kubelet[2751]: E0707 00:02:45.446423 2751 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cb89db9fc-b2bvr" Jul 7 00:02:45.446982 kubelet[2751]: E0707 00:02:45.446455 2751 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cb89db9fc-b2bvr" Jul 7 00:02:45.446982 kubelet[2751]: E0707 00:02:45.446479 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cb89db9fc-b2bvr_calico-apiserver(1df52e7b-a2e0-4431-9a2a-1ab12c493fd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cb89db9fc-b2bvr_calico-apiserver(1df52e7b-a2e0-4431-9a2a-1ab12c493fd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cb89db9fc-b2bvr" podUID="1df52e7b-a2e0-4431-9a2a-1ab12c493fd5" Jul 7 00:02:45.447074 kubelet[2751]: E0707 00:02:45.446499 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.447074 kubelet[2751]: E0707 00:02:45.446526 2751 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-26jsb" Jul 7 00:02:45.447074 kubelet[2751]: E0707 00:02:45.446535 2751 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-26jsb" Jul 7 00:02:45.447159 kubelet[2751]: E0707 00:02:45.446550 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-26jsb_calico-system(7ea19b78-cbc5-4bff-999a-89047d422683)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-26jsb_calico-system(7ea19b78-cbc5-4bff-999a-89047d422683)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-26jsb" podUID="7ea19b78-cbc5-4bff-999a-89047d422683" Jul 7 00:02:45.447159 kubelet[2751]: E0707 00:02:45.446567 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.447159 kubelet[2751]: E0707 00:02:45.446577 2751 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ktt6k" Jul 7 00:02:45.447249 kubelet[2751]: E0707 00:02:45.446596 2751 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ktt6k" Jul 7 00:02:45.447249 kubelet[2751]: E0707 00:02:45.446615 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ktt6k_kube-system(7938ba32-3622-4550-916a-e1e1fa111816)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ktt6k_kube-system(7938ba32-3622-4550-916a-e1e1fa111816)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ktt6k" podUID="7938ba32-3622-4550-916a-e1e1fa111816" Jul 7 00:02:45.447249 kubelet[2751]: E0707 00:02:45.446631 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.447363 kubelet[2751]: E0707 00:02:45.446640 2751 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cb89db9fc-bnjpd" Jul 7 00:02:45.447363 kubelet[2751]: E0707 
00:02:45.446646 2751 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cb89db9fc-bnjpd" Jul 7 00:02:45.447363 kubelet[2751]: E0707 00:02:45.446672 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cb89db9fc-bnjpd_calico-apiserver(390c0a19-48f4-4797-8196-5ec15c21cefb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cb89db9fc-bnjpd_calico-apiserver(390c0a19-48f4-4797-8196-5ec15c21cefb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cb89db9fc-bnjpd" podUID="390c0a19-48f4-4797-8196-5ec15c21cefb" Jul 7 00:02:45.447434 kubelet[2751]: E0707 00:02:45.434498 2751 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d6c4dd986-p2dsm" Jul 7 00:02:45.447434 kubelet[2751]: E0707 00:02:45.446690 2751 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d6c4dd986-p2dsm" Jul 7 00:02:45.447434 kubelet[2751]: E0707 00:02:45.446715 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d6c4dd986-p2dsm_calico-system(7ce3da94-1f58-487b-979e-8f10e33da61e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d6c4dd986-p2dsm_calico-system(7ce3da94-1f58-487b-979e-8f10e33da61e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d6c4dd986-p2dsm" podUID="7ce3da94-1f58-487b-979e-8f10e33da61e" Jul 7 00:02:45.447502 kubelet[2751]: E0707 00:02:45.446110 2751 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-6n2s9" Jul 7 00:02:45.447502 kubelet[2751]: E0707 00:02:45.446753 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6n2s9_kube-system(b3c62c35-57e3-46c3-83b5-1109b949cad4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6n2s9_kube-system(b3c62c35-57e3-46c3-83b5-1109b949cad4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6n2s9" podUID="b3c62c35-57e3-46c3-83b5-1109b949cad4" Jul 7 00:02:45.447502 kubelet[2751]: E0707 00:02:45.430332 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:45.447564 kubelet[2751]: E0707 00:02:45.446780 2751 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c697bf8b4-g6224" Jul 7 00:02:45.447564 kubelet[2751]: E0707 00:02:45.446788 2751 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c697bf8b4-g6224" Jul 7 00:02:45.447564 kubelet[2751]: E0707 00:02:45.446803 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c697bf8b4-g6224_calico-system(4d25b825-a10e-406a-bd66-7d8888f8d3a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c697bf8b4-g6224_calico-system(4d25b825-a10e-406a-bd66-7d8888f8d3a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c697bf8b4-g6224" podUID="4d25b825-a10e-406a-bd66-7d8888f8d3a4" Jul 7 00:02:46.112680 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680-shm.mount: Deactivated successfully. Jul 7 00:02:46.112921 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111-shm.mount: Deactivated successfully. 
Jul 7 00:02:46.113002 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9-shm.mount: Deactivated successfully. Jul 7 00:02:46.113090 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c-shm.mount: Deactivated successfully. Jul 7 00:02:46.175672 kubelet[2751]: I0707 00:02:46.175287 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Jul 7 00:02:46.177049 kubelet[2751]: I0707 00:02:46.177022 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Jul 7 00:02:46.211171 kubelet[2751]: I0707 00:02:46.210322 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Jul 7 00:02:46.213552 kubelet[2751]: I0707 00:02:46.213454 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Jul 7 00:02:46.214278 kubelet[2751]: I0707 00:02:46.214266 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Jul 7 00:02:46.219240 kubelet[2751]: I0707 00:02:46.219210 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Jul 7 00:02:46.222272 kubelet[2751]: I0707 00:02:46.222148 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Jul 7 00:02:46.230450 kubelet[2751]: I0707 00:02:46.229516 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Jul 7 00:02:46.373410 containerd[1551]: time="2025-07-07T00:02:46.372742325Z" level=info msg="StopPodSandbox for \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\"" Jul 7 00:02:46.373410 containerd[1551]: time="2025-07-07T00:02:46.373262470Z" level=info msg="StopPodSandbox for \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\"" Jul 7 00:02:46.374041 containerd[1551]: time="2025-07-07T00:02:46.373877550Z" level=info msg="Ensure that sandbox df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9 in task-service has been cleanup successfully" Jul 7 00:02:46.374041 containerd[1551]: time="2025-07-07T00:02:46.373928659Z" level=info msg="Ensure that sandbox 812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c in task-service has been cleanup successfully" Jul 7 00:02:46.375221 containerd[1551]: time="2025-07-07T00:02:46.375202101Z" level=info msg="StopPodSandbox for \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\"" Jul 7 00:02:46.384037 containerd[1551]: time="2025-07-07T00:02:46.375372541Z" level=info msg="Ensure that sandbox 3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7 in task-service has been cleanup successfully" Jul 7 00:02:46.384037 containerd[1551]: time="2025-07-07T00:02:46.375610249Z" level=info msg="StopPodSandbox for \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\"" Jul 7 00:02:46.384037 containerd[1551]: 
time="2025-07-07T00:02:46.375708866Z" level=info msg="Ensure that sandbox 56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74 in task-service has been cleanup successfully" Jul 7 00:02:46.384037 containerd[1551]: time="2025-07-07T00:02:46.376157240Z" level=info msg="StopPodSandbox for \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\"" Jul 7 00:02:46.384037 containerd[1551]: time="2025-07-07T00:02:46.376240955Z" level=info msg="Ensure that sandbox 8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73 in task-service has been cleanup successfully" Jul 7 00:02:46.384037 containerd[1551]: time="2025-07-07T00:02:46.376781093Z" level=info msg="StopPodSandbox for \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\"" Jul 7 00:02:46.384037 containerd[1551]: time="2025-07-07T00:02:46.376922913Z" level=info msg="Ensure that sandbox bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111 in task-service has been cleanup successfully" Jul 7 00:02:46.384037 containerd[1551]: time="2025-07-07T00:02:46.375255188Z" level=info msg="StopPodSandbox for \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\"" Jul 7 00:02:46.384037 containerd[1551]: time="2025-07-07T00:02:46.377773505Z" level=info msg="Ensure that sandbox 8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289 in task-service has been cleanup successfully" Jul 7 00:02:46.384037 containerd[1551]: time="2025-07-07T00:02:46.375272303Z" level=info msg="StopPodSandbox for \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\"" Jul 7 00:02:46.384037 containerd[1551]: time="2025-07-07T00:02:46.377904408Z" level=info msg="Ensure that sandbox d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680 in task-service has been cleanup successfully" Jul 7 00:02:46.478602 containerd[1551]: time="2025-07-07T00:02:46.478525904Z" level=error msg="StopPodSandbox for \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\" failed" error="failed to destroy network for sandbox \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:46.478873 kubelet[2751]: E0707 00:02:46.478681 2751 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Jul 7 00:02:46.478873 kubelet[2751]: E0707 00:02:46.478728 2751 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c"} Jul 7 00:02:46.478873 kubelet[2751]: E0707 00:02:46.478766 2751 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ea19b78-cbc5-4bff-999a-89047d422683\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:02:46.478873 kubelet[2751]: E0707 00:02:46.478790 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ea19b78-cbc5-4bff-999a-89047d422683\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-26jsb" podUID="7ea19b78-cbc5-4bff-999a-89047d422683" Jul 7 00:02:46.481143 containerd[1551]: time="2025-07-07T00:02:46.481082864Z" level=error msg="StopPodSandbox for \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\" failed" error="failed to destroy network for sandbox \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:46.481300 kubelet[2751]: E0707 00:02:46.481223 2751 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Jul 7 00:02:46.481300 kubelet[2751]: E0707 00:02:46.481253 2751 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7"} Jul 7 00:02:46.481300 kubelet[2751]: E0707 00:02:46.481273 2751 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b3c62c35-57e3-46c3-83b5-1109b949cad4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:02:46.481300 kubelet[2751]: E0707 00:02:46.481288 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b3c62c35-57e3-46c3-83b5-1109b949cad4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6n2s9" podUID="b3c62c35-57e3-46c3-83b5-1109b949cad4" Jul 7 00:02:46.483391 containerd[1551]: time="2025-07-07T00:02:46.483112360Z" level=error msg="StopPodSandbox for \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\" failed" error="failed to destroy network for sandbox \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:46.483391 containerd[1551]: time="2025-07-07T00:02:46.483292953Z" level=error msg="StopPodSandbox for \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\" failed" error="failed to destroy network for sandbox \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:46.483481 kubelet[2751]: E0707 00:02:46.483258 2751 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Jul 7 00:02:46.483481 kubelet[2751]: E0707 00:02:46.483296 2751 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74"} Jul 7 00:02:46.483481 kubelet[2751]: E0707 00:02:46.483317 2751 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1df52e7b-a2e0-4431-9a2a-1ab12c493fd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:02:46.483481 kubelet[2751]: E0707 00:02:46.483331 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1df52e7b-a2e0-4431-9a2a-1ab12c493fd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cb89db9fc-b2bvr" podUID="1df52e7b-a2e0-4431-9a2a-1ab12c493fd5" Jul 7 00:02:46.483591 kubelet[2751]: E0707 00:02:46.483362 2751 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Jul 7 00:02:46.483591 kubelet[2751]: E0707 00:02:46.483384 2751 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111"} Jul 7 00:02:46.483591 kubelet[2751]: E0707 00:02:46.483399 2751 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7938ba32-3622-4550-916a-e1e1fa111816\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:02:46.483591 kubelet[2751]: E0707 00:02:46.483419 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7938ba32-3622-4550-916a-e1e1fa111816\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ktt6k" podUID="7938ba32-3622-4550-916a-e1e1fa111816" Jul 7 00:02:46.488617 containerd[1551]: time="2025-07-07T00:02:46.488595437Z" level=error msg="StopPodSandbox for \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\" failed" error="failed to destroy network for sandbox \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:46.488912 kubelet[2751]: E0707 00:02:46.488819 2751 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Jul 7 00:02:46.488912 kubelet[2751]: E0707 00:02:46.488849 2751 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289"} Jul 7 00:02:46.488912 kubelet[2751]: E0707 00:02:46.488868 2751 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"390c0a19-48f4-4797-8196-5ec15c21cefb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:02:46.488912 kubelet[2751]: E0707 00:02:46.488884 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"390c0a19-48f4-4797-8196-5ec15c21cefb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cb89db9fc-bnjpd" podUID="390c0a19-48f4-4797-8196-5ec15c21cefb" Jul 7 00:02:46.489171 containerd[1551]: time="2025-07-07T00:02:46.489153696Z" level=error msg="StopPodSandbox for 
\"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\" failed" error="failed to destroy network for sandbox \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:46.489341 kubelet[2751]: E0707 00:02:46.489270 2751 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Jul 7 00:02:46.489341 kubelet[2751]: E0707 00:02:46.489288 2751 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680"} Jul 7 00:02:46.489341 kubelet[2751]: E0707 00:02:46.489302 2751 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce3da94-1f58-487b-979e-8f10e33da61e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:02:46.489341 kubelet[2751]: E0707 00:02:46.489324 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce3da94-1f58-487b-979e-8f10e33da61e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d6c4dd986-p2dsm" podUID="7ce3da94-1f58-487b-979e-8f10e33da61e" Jul 7 00:02:46.497300 kubelet[2751]: E0707 00:02:46.489957 2751 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Jul 7 00:02:46.497300 kubelet[2751]: E0707 00:02:46.489977 2751 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9"} Jul 7 00:02:46.497300 kubelet[2751]: E0707 00:02:46.489997 2751 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d25b825-a10e-406a-bd66-7d8888f8d3a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:02:46.497300 kubelet[2751]: E0707 00:02:46.490009 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d25b825-a10e-406a-bd66-7d8888f8d3a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c697bf8b4-g6224" podUID="4d25b825-a10e-406a-bd66-7d8888f8d3a4" Jul 7 00:02:46.506229 containerd[1551]: time="2025-07-07T00:02:46.489888373Z" level=error msg="StopPodSandbox for \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\" failed" error="failed to destroy network for sandbox \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:46.506229 containerd[1551]: time="2025-07-07T00:02:46.495726294Z" level=error msg="StopPodSandbox for \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\" failed" error="failed to destroy network for sandbox \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:02:46.506278 kubelet[2751]: E0707 00:02:46.495846 2751 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Jul 7 00:02:46.506278 kubelet[2751]: E0707 00:02:46.495873 2751 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73"} Jul 7 00:02:46.506278 kubelet[2751]: E0707 00:02:46.495891 2751 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"14bb0f88-ca72-41a6-bf6b-278ec258254c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:02:46.506278 kubelet[2751]: E0707 00:02:46.495904 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"14bb0f88-ca72-41a6-bf6b-278ec258254c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-m5kcn" podUID="14bb0f88-ca72-41a6-bf6b-278ec258254c" Jul 7 00:02:48.620029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451128130.mount: Deactivated successfully. Jul 7 00:02:48.713361 containerd[1551]: time="2025-07-07T00:02:48.702672873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 00:02:48.714159 containerd[1551]: time="2025-07-07T00:02:48.714143740Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 4.543408233s" Jul 7 00:02:48.714329 containerd[1551]: time="2025-07-07T00:02:48.714205748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 00:02:48.714329 containerd[1551]: time="2025-07-07T00:02:48.714213572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:48.738631 containerd[1551]: time="2025-07-07T00:02:48.737959493Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:48.738631 containerd[1551]: time="2025-07-07T00:02:48.738312378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:48.760851 containerd[1551]: time="2025-07-07T00:02:48.760829387Z" level=info msg="CreateContainer within sandbox \"8eda3d1e2069eed68a17c181e93c9256dac6c883da6859bc6a2aae135c82a494\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 00:02:48.794285 containerd[1551]: time="2025-07-07T00:02:48.794196209Z" level=info msg="CreateContainer within sandbox \"8eda3d1e2069eed68a17c181e93c9256dac6c883da6859bc6a2aae135c82a494\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5705b7b2a6a88c64e8d153b17788ac4198a761e76802593edfa38cbca088f0f4\"" Jul 7 00:02:48.795110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2750351110.mount: Deactivated successfully. Jul 7 00:02:48.796621 containerd[1551]: time="2025-07-07T00:02:48.795243041Z" level=info msg="StartContainer for \"5705b7b2a6a88c64e8d153b17788ac4198a761e76802593edfa38cbca088f0f4\"" Jul 7 00:02:48.853452 systemd[1]: Started cri-containerd-5705b7b2a6a88c64e8d153b17788ac4198a761e76802593edfa38cbca088f0f4.scope - libcontainer container 5705b7b2a6a88c64e8d153b17788ac4198a761e76802593edfa38cbca088f0f4. Jul 7 00:02:48.876493 containerd[1551]: time="2025-07-07T00:02:48.876193495Z" level=info msg="StartContainer for \"5705b7b2a6a88c64e8d153b17788ac4198a761e76802593edfa38cbca088f0f4\" returns successfully" Jul 7 00:02:48.955641 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 00:02:48.960545 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
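Every RunPodSandbox/StopPodSandbox failure logged above reports the same underlying condition: the Calico CNI plugin stat()s /var/lib/calico/nodename, a file that only exists once the calico/node container is running with /var/lib/calico/ mounted from the host, which is exactly what the entries above show coming up (image pulled, calico-node container started, WireGuard module loaded). The following is a minimal Python sketch of that same readiness check, offered only as an illustration: the path is quoted verbatim from the error text, while the helper name and the printed hints are assumptions, not taken from the log.

#!/usr/bin/env python3
# Minimal sketch (not part of the log): reproduce the readiness check that the
# Calico CNI plugin reports in the errors above. The plugin stat()s
# /var/lib/calico/nodename; calico/node writes this file once it is running with
# /var/lib/calico/ mounted from the host. Only the path comes from the log text;
# the function name and the printed hints are illustrative.
import sys

NODENAME_FILE = "/var/lib/calico/nodename"  # path quoted verbatim in the CNI error


def calico_node_ready() -> bool:
    """Return True if the file the CNI plugin checks exists and holds a node name."""
    try:
        with open(NODENAME_FILE) as f:
            return bool(f.read().strip())
    except FileNotFoundError:
        return False


if __name__ == "__main__":
    if calico_node_ready():
        print(f"{NODENAME_FILE} present; CNI add/delete should pass this check")
        sys.exit(0)
    print(f"{NODENAME_FILE} missing; verify the calico-node pod is running and "
          "mounts /var/lib/calico/ from the host")
    sys.exit(1)

Once calico-node has written the file, the retried sandbox operations below proceed normally (netns teardown, IPAM release, and the new whisker pod getting 192.168.88.129 assigned).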
Jul 7 00:02:49.261850 kubelet[2751]: I0707 00:02:49.261397 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ssn47" podStartSLOduration=1.7328314219999998 podStartE2EDuration="19.258546638s" podCreationTimestamp="2025-07-07 00:02:30 +0000 UTC" firstStartedPulling="2025-07-07 00:02:31.212927952 +0000 UTC m=+16.786355964" lastFinishedPulling="2025-07-07 00:02:48.738643172 +0000 UTC m=+34.312071180" observedRunningTime="2025-07-07 00:02:49.24932584 +0000 UTC m=+34.822753858" watchObservedRunningTime="2025-07-07 00:02:49.258546638 +0000 UTC m=+34.831974651" Jul 7 00:02:49.384454 containerd[1551]: time="2025-07-07T00:02:49.384215032Z" level=info msg="StopPodSandbox for \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\"" Jul 7 00:02:49.717022 containerd[1551]: 2025-07-07 00:02:49.463 [INFO][4046] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Jul 7 00:02:49.717022 containerd[1551]: 2025-07-07 00:02:49.466 [INFO][4046] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" iface="eth0" netns="/var/run/netns/cni-c7c003e5-ada3-9542-6079-ef2bcc2638c5" Jul 7 00:02:49.717022 containerd[1551]: 2025-07-07 00:02:49.466 [INFO][4046] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" iface="eth0" netns="/var/run/netns/cni-c7c003e5-ada3-9542-6079-ef2bcc2638c5" Jul 7 00:02:49.717022 containerd[1551]: 2025-07-07 00:02:49.467 [INFO][4046] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" iface="eth0" netns="/var/run/netns/cni-c7c003e5-ada3-9542-6079-ef2bcc2638c5" Jul 7 00:02:49.717022 containerd[1551]: 2025-07-07 00:02:49.467 [INFO][4046] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Jul 7 00:02:49.717022 containerd[1551]: 2025-07-07 00:02:49.467 [INFO][4046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Jul 7 00:02:49.717022 containerd[1551]: 2025-07-07 00:02:49.695 [INFO][4054] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" HandleID="k8s-pod-network.d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Workload="localhost-k8s-whisker--7d6c4dd986--p2dsm-eth0" Jul 7 00:02:49.717022 containerd[1551]: 2025-07-07 00:02:49.699 [INFO][4054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:49.717022 containerd[1551]: 2025-07-07 00:02:49.699 [INFO][4054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:49.717022 containerd[1551]: 2025-07-07 00:02:49.712 [WARNING][4054] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" HandleID="k8s-pod-network.d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Workload="localhost-k8s-whisker--7d6c4dd986--p2dsm-eth0" Jul 7 00:02:49.717022 containerd[1551]: 2025-07-07 00:02:49.712 [INFO][4054] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" HandleID="k8s-pod-network.d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Workload="localhost-k8s-whisker--7d6c4dd986--p2dsm-eth0" Jul 7 00:02:49.717022 containerd[1551]: 2025-07-07 00:02:49.713 [INFO][4054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:49.717022 containerd[1551]: 2025-07-07 00:02:49.715 [INFO][4046] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Jul 7 00:02:49.720396 containerd[1551]: time="2025-07-07T00:02:49.719373885Z" level=info msg="TearDown network for sandbox \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\" successfully" Jul 7 00:02:49.720396 containerd[1551]: time="2025-07-07T00:02:49.719396802Z" level=info msg="StopPodSandbox for \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\" returns successfully" Jul 7 00:02:49.718983 systemd[1]: run-netns-cni\x2dc7c003e5\x2dada3\x2d9542\x2d6079\x2def2bcc2638c5.mount: Deactivated successfully. Jul 7 00:02:49.923133 kubelet[2751]: I0707 00:02:49.923094 2751 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ce3da94-1f58-487b-979e-8f10e33da61e-whisker-backend-key-pair\") pod \"7ce3da94-1f58-487b-979e-8f10e33da61e\" (UID: \"7ce3da94-1f58-487b-979e-8f10e33da61e\") " Jul 7 00:02:49.923244 kubelet[2751]: I0707 00:02:49.923177 2751 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjkcc\" (UniqueName: \"kubernetes.io/projected/7ce3da94-1f58-487b-979e-8f10e33da61e-kube-api-access-tjkcc\") pod \"7ce3da94-1f58-487b-979e-8f10e33da61e\" (UID: \"7ce3da94-1f58-487b-979e-8f10e33da61e\") " Jul 7 00:02:49.923244 kubelet[2751]: I0707 00:02:49.923206 2751 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ce3da94-1f58-487b-979e-8f10e33da61e-whisker-ca-bundle\") pod \"7ce3da94-1f58-487b-979e-8f10e33da61e\" (UID: \"7ce3da94-1f58-487b-979e-8f10e33da61e\") " Jul 7 00:02:49.947757 systemd[1]: var-lib-kubelet-pods-7ce3da94\x2d1f58\x2d487b\x2d979e\x2d8f10e33da61e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 00:02:49.963171 systemd[1]: var-lib-kubelet-pods-7ce3da94\x2d1f58\x2d487b\x2d979e\x2d8f10e33da61e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtjkcc.mount: Deactivated successfully. Jul 7 00:02:49.970825 kubelet[2751]: I0707 00:02:49.970767 2751 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ce3da94-1f58-487b-979e-8f10e33da61e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7ce3da94-1f58-487b-979e-8f10e33da61e" (UID: "7ce3da94-1f58-487b-979e-8f10e33da61e"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:02:49.970825 kubelet[2751]: I0707 00:02:49.970807 2751 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce3da94-1f58-487b-979e-8f10e33da61e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7ce3da94-1f58-487b-979e-8f10e33da61e" (UID: "7ce3da94-1f58-487b-979e-8f10e33da61e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:02:49.974468 kubelet[2751]: I0707 00:02:49.962968 2751 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce3da94-1f58-487b-979e-8f10e33da61e-kube-api-access-tjkcc" (OuterVolumeSpecName: "kube-api-access-tjkcc") pod "7ce3da94-1f58-487b-979e-8f10e33da61e" (UID: "7ce3da94-1f58-487b-979e-8f10e33da61e"). InnerVolumeSpecName "kube-api-access-tjkcc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:02:50.023620 kubelet[2751]: I0707 00:02:50.023590 2751 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tjkcc\" (UniqueName: \"kubernetes.io/projected/7ce3da94-1f58-487b-979e-8f10e33da61e-kube-api-access-tjkcc\") on node \"localhost\" DevicePath \"\"" Jul 7 00:02:50.023620 kubelet[2751]: I0707 00:02:50.023615 2751 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ce3da94-1f58-487b-979e-8f10e33da61e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 7 00:02:50.023817 kubelet[2751]: I0707 00:02:50.023636 2751 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ce3da94-1f58-487b-979e-8f10e33da61e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 7 00:02:50.246315 systemd[1]: Removed slice kubepods-besteffort-pod7ce3da94_1f58_487b_979e_8f10e33da61e.slice - libcontainer container kubepods-besteffort-pod7ce3da94_1f58_487b_979e_8f10e33da61e.slice. Jul 7 00:02:50.346522 systemd[1]: Created slice kubepods-besteffort-pod705f2e67_e784_4f06_8996_2d628e6fb0a0.slice - libcontainer container kubepods-besteffort-pod705f2e67_e784_4f06_8996_2d628e6fb0a0.slice. 
Jul 7 00:02:50.430362 kubelet[2751]: I0707 00:02:50.430333 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/705f2e67-e784-4f06-8996-2d628e6fb0a0-whisker-ca-bundle\") pod \"whisker-69876bbdcd-p5mb5\" (UID: \"705f2e67-e784-4f06-8996-2d628e6fb0a0\") " pod="calico-system/whisker-69876bbdcd-p5mb5" Jul 7 00:02:50.430362 kubelet[2751]: I0707 00:02:50.430358 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnwvj\" (UniqueName: \"kubernetes.io/projected/705f2e67-e784-4f06-8996-2d628e6fb0a0-kube-api-access-qnwvj\") pod \"whisker-69876bbdcd-p5mb5\" (UID: \"705f2e67-e784-4f06-8996-2d628e6fb0a0\") " pod="calico-system/whisker-69876bbdcd-p5mb5" Jul 7 00:02:50.430677 kubelet[2751]: I0707 00:02:50.430371 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/705f2e67-e784-4f06-8996-2d628e6fb0a0-whisker-backend-key-pair\") pod \"whisker-69876bbdcd-p5mb5\" (UID: \"705f2e67-e784-4f06-8996-2d628e6fb0a0\") " pod="calico-system/whisker-69876bbdcd-p5mb5" Jul 7 00:02:50.615152 kubelet[2751]: I0707 00:02:50.615133 2751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce3da94-1f58-487b-979e-8f10e33da61e" path="/var/lib/kubelet/pods/7ce3da94-1f58-487b-979e-8f10e33da61e/volumes" Jul 7 00:02:50.621395 systemd[1]: run-containerd-runc-k8s.io-5705b7b2a6a88c64e8d153b17788ac4198a761e76802593edfa38cbca088f0f4-runc.tsTnt9.mount: Deactivated successfully. Jul 7 00:02:50.652380 containerd[1551]: time="2025-07-07T00:02:50.652346789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69876bbdcd-p5mb5,Uid:705f2e67-e784-4f06-8996-2d628e6fb0a0,Namespace:calico-system,Attempt:0,}" Jul 7 00:02:50.760024 systemd-networkd[1462]: cali941c2a65ee7: Link UP Jul 7 00:02:50.760730 systemd-networkd[1462]: cali941c2a65ee7: Gained carrier Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.689 [INFO][4182] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.697 [INFO][4182] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--69876bbdcd--p5mb5-eth0 whisker-69876bbdcd- calico-system 705f2e67-e784-4f06-8996-2d628e6fb0a0 882 0 2025-07-07 00:02:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69876bbdcd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-69876bbdcd-p5mb5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali941c2a65ee7 [] [] }} ContainerID="529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" Namespace="calico-system" Pod="whisker-69876bbdcd-p5mb5" WorkloadEndpoint="localhost-k8s-whisker--69876bbdcd--p5mb5-" Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.697 [INFO][4182] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" Namespace="calico-system" Pod="whisker-69876bbdcd-p5mb5" WorkloadEndpoint="localhost-k8s-whisker--69876bbdcd--p5mb5-eth0" Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.716 [INFO][4190] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" HandleID="k8s-pod-network.529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" Workload="localhost-k8s-whisker--69876bbdcd--p5mb5-eth0" Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.717 [INFO][4190] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" HandleID="k8s-pod-network.529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" Workload="localhost-k8s-whisker--69876bbdcd--p5mb5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-69876bbdcd-p5mb5", "timestamp":"2025-07-07 00:02:50.716406153 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.717 [INFO][4190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.717 [INFO][4190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.717 [INFO][4190] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.724 [INFO][4190] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" host="localhost" Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.732 [INFO][4190] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.735 [INFO][4190] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.737 [INFO][4190] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.739 [INFO][4190] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.739 [INFO][4190] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" host="localhost" Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.740 [INFO][4190] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.742 [INFO][4190] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" host="localhost" Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.745 [INFO][4190] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" host="localhost" Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.745 [INFO][4190] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" host="localhost" Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.746 [INFO][4190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:50.773976 containerd[1551]: 2025-07-07 00:02:50.746 [INFO][4190] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" HandleID="k8s-pod-network.529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" Workload="localhost-k8s-whisker--69876bbdcd--p5mb5-eth0" Jul 7 00:02:50.777049 containerd[1551]: 2025-07-07 00:02:50.748 [INFO][4182] cni-plugin/k8s.go 418: Populated endpoint ContainerID="529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" Namespace="calico-system" Pod="whisker-69876bbdcd-p5mb5" WorkloadEndpoint="localhost-k8s-whisker--69876bbdcd--p5mb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--69876bbdcd--p5mb5-eth0", GenerateName:"whisker-69876bbdcd-", Namespace:"calico-system", SelfLink:"", UID:"705f2e67-e784-4f06-8996-2d628e6fb0a0", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69876bbdcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-69876bbdcd-p5mb5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali941c2a65ee7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:50.777049 containerd[1551]: 2025-07-07 00:02:50.748 [INFO][4182] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" Namespace="calico-system" Pod="whisker-69876bbdcd-p5mb5" WorkloadEndpoint="localhost-k8s-whisker--69876bbdcd--p5mb5-eth0" Jul 7 00:02:50.777049 containerd[1551]: 2025-07-07 00:02:50.748 [INFO][4182] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali941c2a65ee7 ContainerID="529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" Namespace="calico-system" Pod="whisker-69876bbdcd-p5mb5" WorkloadEndpoint="localhost-k8s-whisker--69876bbdcd--p5mb5-eth0" Jul 7 00:02:50.777049 containerd[1551]: 2025-07-07 00:02:50.762 [INFO][4182] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" Namespace="calico-system" Pod="whisker-69876bbdcd-p5mb5" WorkloadEndpoint="localhost-k8s-whisker--69876bbdcd--p5mb5-eth0" Jul 7 00:02:50.777049 containerd[1551]: 2025-07-07 00:02:50.762 [INFO][4182] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" Namespace="calico-system" Pod="whisker-69876bbdcd-p5mb5" WorkloadEndpoint="localhost-k8s-whisker--69876bbdcd--p5mb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--69876bbdcd--p5mb5-eth0", GenerateName:"whisker-69876bbdcd-", Namespace:"calico-system", SelfLink:"", UID:"705f2e67-e784-4f06-8996-2d628e6fb0a0", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69876bbdcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d", Pod:"whisker-69876bbdcd-p5mb5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali941c2a65ee7", MAC:"06:b4:70:95:34:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:50.777049 containerd[1551]: 2025-07-07 00:02:50.771 [INFO][4182] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d" Namespace="calico-system" Pod="whisker-69876bbdcd-p5mb5" WorkloadEndpoint="localhost-k8s-whisker--69876bbdcd--p5mb5-eth0" Jul 7 00:02:50.795782 containerd[1551]: time="2025-07-07T00:02:50.795259293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:50.795782 containerd[1551]: time="2025-07-07T00:02:50.795684718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:50.795782 containerd[1551]: time="2025-07-07T00:02:50.795694410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:50.795782 containerd[1551]: time="2025-07-07T00:02:50.795760408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:50.821272 systemd[1]: Started cri-containerd-529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d.scope - libcontainer container 529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d. 
Jul 7 00:02:50.829409 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:02:50.858032 containerd[1551]: time="2025-07-07T00:02:50.857975145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69876bbdcd-p5mb5,Uid:705f2e67-e784-4f06-8996-2d628e6fb0a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d\"" Jul 7 00:02:50.859263 containerd[1551]: time="2025-07-07T00:02:50.859113017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 00:02:52.115714 containerd[1551]: time="2025-07-07T00:02:52.115687517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:52.116583 containerd[1551]: time="2025-07-07T00:02:52.116559848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 00:02:52.117287 containerd[1551]: time="2025-07-07T00:02:52.116992320Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:52.118096 containerd[1551]: time="2025-07-07T00:02:52.118078922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:52.118656 containerd[1551]: time="2025-07-07T00:02:52.118639918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.259492057s" Jul 7 00:02:52.118722 containerd[1551]: time="2025-07-07T00:02:52.118712549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 00:02:52.121239 containerd[1551]: time="2025-07-07T00:02:52.121218318Z" level=info msg="CreateContainer within sandbox \"529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 00:02:52.129073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1417964885.mount: Deactivated successfully. Jul 7 00:02:52.130210 containerd[1551]: time="2025-07-07T00:02:52.129323444Z" level=info msg="CreateContainer within sandbox \"529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1db7261c44331503f6beec6807c2e9baf6aa33b3191c222addb7aca20fa823f9\"" Jul 7 00:02:52.130210 containerd[1551]: time="2025-07-07T00:02:52.129648349Z" level=info msg="StartContainer for \"1db7261c44331503f6beec6807c2e9baf6aa33b3191c222addb7aca20fa823f9\"" Jul 7 00:02:52.153291 systemd[1]: Started cri-containerd-1db7261c44331503f6beec6807c2e9baf6aa33b3191c222addb7aca20fa823f9.scope - libcontainer container 1db7261c44331503f6beec6807c2e9baf6aa33b3191c222addb7aca20fa823f9. 
Jul 7 00:02:52.182872 containerd[1551]: time="2025-07-07T00:02:52.182846352Z" level=info msg="StartContainer for \"1db7261c44331503f6beec6807c2e9baf6aa33b3191c222addb7aca20fa823f9\" returns successfully" Jul 7 00:02:52.183779 containerd[1551]: time="2025-07-07T00:02:52.183714727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 00:02:52.660210 systemd-networkd[1462]: cali941c2a65ee7: Gained IPv6LL Jul 7 00:02:54.110201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1956932667.mount: Deactivated successfully. Jul 7 00:02:54.237632 containerd[1551]: time="2025-07-07T00:02:54.237414462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:54.238203 containerd[1551]: time="2025-07-07T00:02:54.237942711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 00:02:54.238203 containerd[1551]: time="2025-07-07T00:02:54.238048643Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:54.239218 containerd[1551]: time="2025-07-07T00:02:54.239192780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:54.239929 containerd[1551]: time="2025-07-07T00:02:54.239627876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.055883392s" Jul 7 00:02:54.239929 containerd[1551]: time="2025-07-07T00:02:54.239646871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 00:02:54.242291 containerd[1551]: time="2025-07-07T00:02:54.242265236Z" level=info msg="CreateContainer within sandbox \"529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 00:02:54.257777 containerd[1551]: time="2025-07-07T00:02:54.257708938Z" level=info msg="CreateContainer within sandbox \"529604f00ca03edaf6b8fdf5475b7ab8c72e8a44501beeb00a8849d79cbf479d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7099949331d67c9b4251a334c0f72c6d4c14b061bbfdc054a0e66d26b534b98e\"" Jul 7 00:02:54.258100 containerd[1551]: time="2025-07-07T00:02:54.258087631Z" level=info msg="StartContainer for \"7099949331d67c9b4251a334c0f72c6d4c14b061bbfdc054a0e66d26b534b98e\"" Jul 7 00:02:54.279266 systemd[1]: Started cri-containerd-7099949331d67c9b4251a334c0f72c6d4c14b061bbfdc054a0e66d26b534b98e.scope - libcontainer container 7099949331d67c9b4251a334c0f72c6d4c14b061bbfdc054a0e66d26b534b98e. 
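The whisker-backend pull above is reported as taking 2.055883392s. Subtracting the two containerd timestamps that bracket it (the PullImage request and the "Pulled image" line) gives a value only a few tens of microseconds larger, which is just the logging around the pull itself. A sketch assuming nothing beyond the two timestamps shown above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// containerd timestamps from the log above: when the whisker-backend pull
	// was requested and when the "Pulled image ... in 2.055883392s" line was
	// emitted. Both are RFC 3339 with nanosecond precision.
	start, _ := time.Parse(time.RFC3339Nano, "2025-07-07T00:02:52.183714727Z")
	end, _ := time.Parse(time.RFC3339Nano, "2025-07-07T00:02:54.239627876Z")

	fmt.Println(end.Sub(start)) // 2.055913149s, close to the reported pull time
}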
Jul 7 00:02:54.310956 containerd[1551]: time="2025-07-07T00:02:54.310923276Z" level=info msg="StartContainer for \"7099949331d67c9b4251a334c0f72c6d4c14b061bbfdc054a0e66d26b534b98e\" returns successfully" Jul 7 00:02:57.615634 containerd[1551]: time="2025-07-07T00:02:57.615455629Z" level=info msg="StopPodSandbox for \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\"" Jul 7 00:02:57.615918 containerd[1551]: time="2025-07-07T00:02:57.615774725Z" level=info msg="StopPodSandbox for \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\"" Jul 7 00:02:57.653351 kubelet[2751]: I0707 00:02:57.653302 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-69876bbdcd-p5mb5" podStartSLOduration=4.272208729 podStartE2EDuration="7.653290707s" podCreationTimestamp="2025-07-07 00:02:50 +0000 UTC" firstStartedPulling="2025-07-07 00:02:50.858928433 +0000 UTC m=+36.432356442" lastFinishedPulling="2025-07-07 00:02:54.240010411 +0000 UTC m=+39.813438420" observedRunningTime="2025-07-07 00:02:55.284422236 +0000 UTC m=+40.857850250" watchObservedRunningTime="2025-07-07 00:02:57.653290707 +0000 UTC m=+43.226718719" Jul 7 00:02:57.685702 containerd[1551]: 2025-07-07 00:02:57.652 [INFO][4470] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Jul 7 00:02:57.685702 containerd[1551]: 2025-07-07 00:02:57.654 [INFO][4470] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" iface="eth0" netns="/var/run/netns/cni-bef0f0c0-edf7-c7a2-eff4-c63f67a858ed" Jul 7 00:02:57.685702 containerd[1551]: 2025-07-07 00:02:57.654 [INFO][4470] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" iface="eth0" netns="/var/run/netns/cni-bef0f0c0-edf7-c7a2-eff4-c63f67a858ed" Jul 7 00:02:57.685702 containerd[1551]: 2025-07-07 00:02:57.654 [INFO][4470] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" iface="eth0" netns="/var/run/netns/cni-bef0f0c0-edf7-c7a2-eff4-c63f67a858ed" Jul 7 00:02:57.685702 containerd[1551]: 2025-07-07 00:02:57.654 [INFO][4470] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Jul 7 00:02:57.685702 containerd[1551]: 2025-07-07 00:02:57.654 [INFO][4470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Jul 7 00:02:57.685702 containerd[1551]: 2025-07-07 00:02:57.676 [INFO][4488] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" HandleID="k8s-pod-network.56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:02:57.685702 containerd[1551]: 2025-07-07 00:02:57.676 [INFO][4488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:57.685702 containerd[1551]: 2025-07-07 00:02:57.676 [INFO][4488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:57.685702 containerd[1551]: 2025-07-07 00:02:57.680 [WARNING][4488] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" HandleID="k8s-pod-network.56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:02:57.685702 containerd[1551]: 2025-07-07 00:02:57.680 [INFO][4488] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" HandleID="k8s-pod-network.56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:02:57.685702 containerd[1551]: 2025-07-07 00:02:57.681 [INFO][4488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:57.685702 containerd[1551]: 2025-07-07 00:02:57.683 [INFO][4470] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Jul 7 00:02:57.688676 containerd[1551]: time="2025-07-07T00:02:57.687709260Z" level=info msg="TearDown network for sandbox \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\" successfully" Jul 7 00:02:57.688676 containerd[1551]: time="2025-07-07T00:02:57.687728704Z" level=info msg="StopPodSandbox for \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\" returns successfully" Jul 7 00:02:57.688676 containerd[1551]: time="2025-07-07T00:02:57.688394317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb89db9fc-b2bvr,Uid:1df52e7b-a2e0-4431-9a2a-1ab12c493fd5,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:02:57.687364 systemd[1]: run-netns-cni\x2dbef0f0c0\x2dedf7\x2dc7a2\x2deff4\x2dc63f67a858ed.mount: Deactivated successfully. Jul 7 00:02:57.693817 containerd[1551]: 2025-07-07 00:02:57.652 [INFO][4477] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Jul 7 00:02:57.693817 containerd[1551]: 2025-07-07 00:02:57.654 [INFO][4477] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" iface="eth0" netns="/var/run/netns/cni-ce876caa-892c-4495-5942-87648aef15ee" Jul 7 00:02:57.693817 containerd[1551]: 2025-07-07 00:02:57.654 [INFO][4477] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" iface="eth0" netns="/var/run/netns/cni-ce876caa-892c-4495-5942-87648aef15ee" Jul 7 00:02:57.693817 containerd[1551]: 2025-07-07 00:02:57.654 [INFO][4477] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" iface="eth0" netns="/var/run/netns/cni-ce876caa-892c-4495-5942-87648aef15ee" Jul 7 00:02:57.693817 containerd[1551]: 2025-07-07 00:02:57.654 [INFO][4477] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Jul 7 00:02:57.693817 containerd[1551]: 2025-07-07 00:02:57.654 [INFO][4477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Jul 7 00:02:57.693817 containerd[1551]: 2025-07-07 00:02:57.679 [INFO][4489] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" HandleID="k8s-pod-network.3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Workload="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:02:57.693817 containerd[1551]: 2025-07-07 00:02:57.680 [INFO][4489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:57.693817 containerd[1551]: 2025-07-07 00:02:57.681 [INFO][4489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:57.693817 containerd[1551]: 2025-07-07 00:02:57.688 [WARNING][4489] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" HandleID="k8s-pod-network.3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Workload="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:02:57.693817 containerd[1551]: 2025-07-07 00:02:57.689 [INFO][4489] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" HandleID="k8s-pod-network.3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Workload="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:02:57.693817 containerd[1551]: 2025-07-07 00:02:57.690 [INFO][4489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:57.693817 containerd[1551]: 2025-07-07 00:02:57.692 [INFO][4477] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Jul 7 00:02:57.697415 containerd[1551]: time="2025-07-07T00:02:57.694195541Z" level=info msg="TearDown network for sandbox \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\" successfully" Jul 7 00:02:57.697415 containerd[1551]: time="2025-07-07T00:02:57.694211367Z" level=info msg="StopPodSandbox for \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\" returns successfully" Jul 7 00:02:57.697415 containerd[1551]: time="2025-07-07T00:02:57.696969592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6n2s9,Uid:b3c62c35-57e3-46c3-83b5-1109b949cad4,Namespace:kube-system,Attempt:1,}" Jul 7 00:02:57.696003 systemd[1]: run-netns-cni\x2dce876caa\x2d892c\x2d4495\x2d5942\x2d87648aef15ee.mount: Deactivated successfully. 
Jul 7 00:02:57.774220 systemd-networkd[1462]: califb11e26d09e: Link UP Jul 7 00:02:57.774618 systemd-networkd[1462]: califb11e26d09e: Gained carrier Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.722 [INFO][4509] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.730 [INFO][4509] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0 coredns-674b8bbfcf- kube-system b3c62c35-57e3-46c3-83b5-1109b949cad4 920 0 2025-07-07 00:02:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-6n2s9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califb11e26d09e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6n2s9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6n2s9-" Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.730 [INFO][4509] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6n2s9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.749 [INFO][4525] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" HandleID="k8s-pod-network.412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" Workload="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.749 [INFO][4525] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" HandleID="k8s-pod-network.412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" Workload="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5100), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-6n2s9", "timestamp":"2025-07-07 00:02:57.749416525 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.749 [INFO][4525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.749 [INFO][4525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.749 [INFO][4525] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.754 [INFO][4525] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" host="localhost" Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.757 [INFO][4525] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.761 [INFO][4525] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.762 [INFO][4525] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.763 [INFO][4525] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.763 [INFO][4525] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" host="localhost" Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.764 [INFO][4525] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.766 [INFO][4525] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" host="localhost" Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.769 [INFO][4525] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" host="localhost" Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.770 [INFO][4525] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" host="localhost" Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.770 [INFO][4525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:02:57.785783 containerd[1551]: 2025-07-07 00:02:57.770 [INFO][4525] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" HandleID="k8s-pod-network.412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" Workload="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:02:57.787473 containerd[1551]: 2025-07-07 00:02:57.772 [INFO][4509] cni-plugin/k8s.go 418: Populated endpoint ContainerID="412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6n2s9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b3c62c35-57e3-46c3-83b5-1109b949cad4", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-6n2s9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb11e26d09e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:57.787473 containerd[1551]: 2025-07-07 00:02:57.772 [INFO][4509] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6n2s9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:02:57.787473 containerd[1551]: 2025-07-07 00:02:57.772 [INFO][4509] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb11e26d09e ContainerID="412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6n2s9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:02:57.787473 containerd[1551]: 2025-07-07 00:02:57.774 [INFO][4509] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6n2s9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:02:57.787473 containerd[1551]: 
2025-07-07 00:02:57.775 [INFO][4509] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6n2s9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b3c62c35-57e3-46c3-83b5-1109b949cad4", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b", Pod:"coredns-674b8bbfcf-6n2s9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb11e26d09e", MAC:"c6:d6:ac:26:36:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:57.787473 containerd[1551]: 2025-07-07 00:02:57.783 [INFO][4509] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b" Namespace="kube-system" Pod="coredns-674b8bbfcf-6n2s9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:02:57.804746 containerd[1551]: time="2025-07-07T00:02:57.804628102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:57.804746 containerd[1551]: time="2025-07-07T00:02:57.804664228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:57.804746 containerd[1551]: time="2025-07-07T00:02:57.804674150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:57.804746 containerd[1551]: time="2025-07-07T00:02:57.804731350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:57.819345 systemd[1]: Started cri-containerd-412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b.scope - libcontainer container 412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b. 
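The WorkloadEndpoint dump above prints the coredns ports in hex (Port:0x35, Port:0x23c1). Decoded, they are the expected CoreDNS ports; a trivial sketch, included only to make the dump easier to read:

package main

import "fmt"

func main() {
	// Hex port values from the WorkloadEndpoint dump above, decoded to decimal.
	ports := []struct {
		name string
		port uint16
	}{
		{"dns (UDP)", 0x35},
		{"dns-tcp (TCP)", 0x35},
		{"metrics (TCP)", 0x23c1},
	}
	for _, p := range ports {
		fmt.Printf("%-14s 0x%x = %d\n", p.name, p.port, p.port)
	}
	// dns (UDP)      0x35 = 53
	// dns-tcp (TCP)  0x35 = 53
	// metrics (TCP)  0x23c1 = 9153
}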
Jul 7 00:02:57.828234 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:02:57.851780 containerd[1551]: time="2025-07-07T00:02:57.851746454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6n2s9,Uid:b3c62c35-57e3-46c3-83b5-1109b949cad4,Namespace:kube-system,Attempt:1,} returns sandbox id \"412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b\"" Jul 7 00:02:57.858912 containerd[1551]: time="2025-07-07T00:02:57.858765064Z" level=info msg="CreateContainer within sandbox \"412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:02:57.879570 containerd[1551]: time="2025-07-07T00:02:57.879442090Z" level=info msg="CreateContainer within sandbox \"412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a6fc4f6a6d051e09dcd1ac4ec5af4dcb14736edc64bb6b648b155a4f2d2d97b\"" Jul 7 00:02:57.883582 containerd[1551]: time="2025-07-07T00:02:57.882534868Z" level=info msg="StartContainer for \"4a6fc4f6a6d051e09dcd1ac4ec5af4dcb14736edc64bb6b648b155a4f2d2d97b\"" Jul 7 00:02:57.890549 systemd-networkd[1462]: cali5b87182df7e: Link UP Jul 7 00:02:57.891804 systemd-networkd[1462]: cali5b87182df7e: Gained carrier Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.722 [INFO][4501] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.729 [INFO][4501] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0 calico-apiserver-7cb89db9fc- calico-apiserver 1df52e7b-a2e0-4431-9a2a-1ab12c493fd5 921 0 2025-07-07 00:02:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cb89db9fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cb89db9fc-b2bvr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5b87182df7e [] [] }} ContainerID="9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-b2bvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-" Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.730 [INFO][4501] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-b2bvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.754 [INFO][4527] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" HandleID="k8s-pod-network.9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.754 [INFO][4527] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" 
HandleID="k8s-pod-network.9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cb89db9fc-b2bvr", "timestamp":"2025-07-07 00:02:57.754386247 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.754 [INFO][4527] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.770 [INFO][4527] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.770 [INFO][4527] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.856 [INFO][4527] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" host="localhost" Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.866 [INFO][4527] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.872 [INFO][4527] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.873 [INFO][4527] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.875 [INFO][4527] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.875 [INFO][4527] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" host="localhost" Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.876 [INFO][4527] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51 Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.880 [INFO][4527] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" host="localhost" Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.887 [INFO][4527] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" host="localhost" Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.887 [INFO][4527] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" host="localhost" Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.887 [INFO][4527] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:02:57.906387 containerd[1551]: 2025-07-07 00:02:57.887 [INFO][4527] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" HandleID="k8s-pod-network.9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:02:57.908648 containerd[1551]: 2025-07-07 00:02:57.888 [INFO][4501] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-b2bvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0", GenerateName:"calico-apiserver-7cb89db9fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1df52e7b-a2e0-4431-9a2a-1ab12c493fd5", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb89db9fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cb89db9fc-b2bvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b87182df7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:57.908648 containerd[1551]: 2025-07-07 00:02:57.889 [INFO][4501] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-b2bvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:02:57.908648 containerd[1551]: 2025-07-07 00:02:57.889 [INFO][4501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b87182df7e ContainerID="9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-b2bvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:02:57.908648 containerd[1551]: 2025-07-07 00:02:57.892 [INFO][4501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-b2bvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:02:57.908648 containerd[1551]: 2025-07-07 00:02:57.893 [INFO][4501] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-b2bvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0", GenerateName:"calico-apiserver-7cb89db9fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1df52e7b-a2e0-4431-9a2a-1ab12c493fd5", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb89db9fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51", Pod:"calico-apiserver-7cb89db9fc-b2bvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b87182df7e", MAC:"1a:24:97:60:98:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:57.908648 containerd[1551]: 2025-07-07 00:02:57.902 [INFO][4501] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-b2bvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:02:57.920287 systemd[1]: Started cri-containerd-4a6fc4f6a6d051e09dcd1ac4ec5af4dcb14736edc64bb6b648b155a4f2d2d97b.scope - libcontainer container 4a6fc4f6a6d051e09dcd1ac4ec5af4dcb14736edc64bb6b648b155a4f2d2d97b. Jul 7 00:02:57.935377 containerd[1551]: time="2025-07-07T00:02:57.935321304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:57.935377 containerd[1551]: time="2025-07-07T00:02:57.935357449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:57.935377 containerd[1551]: time="2025-07-07T00:02:57.935368301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:57.936978 containerd[1551]: time="2025-07-07T00:02:57.935986738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:57.956371 systemd[1]: Started cri-containerd-9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51.scope - libcontainer container 9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51. 
Jul 7 00:02:57.964047 containerd[1551]: time="2025-07-07T00:02:57.964019947Z" level=info msg="StartContainer for \"4a6fc4f6a6d051e09dcd1ac4ec5af4dcb14736edc64bb6b648b155a4f2d2d97b\" returns successfully" Jul 7 00:02:57.972944 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:02:58.007967 containerd[1551]: time="2025-07-07T00:02:58.007934270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb89db9fc-b2bvr,Uid:1df52e7b-a2e0-4431-9a2a-1ab12c493fd5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51\"" Jul 7 00:02:58.010487 containerd[1551]: time="2025-07-07T00:02:58.010457915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:02:58.614902 containerd[1551]: time="2025-07-07T00:02:58.614714285Z" level=info msg="StopPodSandbox for \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\"" Jul 7 00:02:58.647245 kubelet[2751]: I0707 00:02:58.647211 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6n2s9" podStartSLOduration=38.647198242 podStartE2EDuration="38.647198242s" podCreationTimestamp="2025-07-07 00:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:58.295852752 +0000 UTC m=+43.869280771" watchObservedRunningTime="2025-07-07 00:02:58.647198242 +0000 UTC m=+44.220626256" Jul 7 00:02:58.669746 containerd[1551]: 2025-07-07 00:02:58.647 [INFO][4707] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Jul 7 00:02:58.669746 containerd[1551]: 2025-07-07 00:02:58.647 [INFO][4707] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" iface="eth0" netns="/var/run/netns/cni-84afa884-b1fa-6656-8505-28fd821c5a29" Jul 7 00:02:58.669746 containerd[1551]: 2025-07-07 00:02:58.647 [INFO][4707] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" iface="eth0" netns="/var/run/netns/cni-84afa884-b1fa-6656-8505-28fd821c5a29" Jul 7 00:02:58.669746 containerd[1551]: 2025-07-07 00:02:58.648 [INFO][4707] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" iface="eth0" netns="/var/run/netns/cni-84afa884-b1fa-6656-8505-28fd821c5a29" Jul 7 00:02:58.669746 containerd[1551]: 2025-07-07 00:02:58.648 [INFO][4707] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Jul 7 00:02:58.669746 containerd[1551]: 2025-07-07 00:02:58.648 [INFO][4707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Jul 7 00:02:58.669746 containerd[1551]: 2025-07-07 00:02:58.661 [INFO][4714] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" HandleID="k8s-pod-network.bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Workload="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:02:58.669746 containerd[1551]: 2025-07-07 00:02:58.662 [INFO][4714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:58.669746 containerd[1551]: 2025-07-07 00:02:58.662 [INFO][4714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:58.669746 containerd[1551]: 2025-07-07 00:02:58.666 [WARNING][4714] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" HandleID="k8s-pod-network.bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Workload="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:02:58.669746 containerd[1551]: 2025-07-07 00:02:58.666 [INFO][4714] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" HandleID="k8s-pod-network.bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Workload="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:02:58.669746 containerd[1551]: 2025-07-07 00:02:58.666 [INFO][4714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:58.669746 containerd[1551]: 2025-07-07 00:02:58.668 [INFO][4707] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Jul 7 00:02:58.669746 containerd[1551]: time="2025-07-07T00:02:58.669671960Z" level=info msg="TearDown network for sandbox \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\" successfully" Jul 7 00:02:58.669746 containerd[1551]: time="2025-07-07T00:02:58.669698446Z" level=info msg="StopPodSandbox for \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\" returns successfully" Jul 7 00:02:58.671449 containerd[1551]: time="2025-07-07T00:02:58.670303061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ktt6k,Uid:7938ba32-3622-4550-916a-e1e1fa111816,Namespace:kube-system,Attempt:1,}" Jul 7 00:02:58.688692 systemd[1]: run-netns-cni\x2d84afa884\x2db1fa\x2d6656\x2d8505\x2d28fd821c5a29.mount: Deactivated successfully. 
Jul 7 00:02:58.796954 systemd-networkd[1462]: cali2c9b2879a11: Link UP Jul 7 00:02:58.797092 systemd-networkd[1462]: cali2c9b2879a11: Gained carrier Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.732 [INFO][4722] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.741 [INFO][4722] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0 coredns-674b8bbfcf- kube-system 7938ba32-3622-4550-916a-e1e1fa111816 939 0 2025-07-07 00:02:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-ktt6k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2c9b2879a11 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktt6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktt6k-" Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.741 [INFO][4722] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktt6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.757 [INFO][4734] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" HandleID="k8s-pod-network.b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" Workload="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.757 [INFO][4734] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" HandleID="k8s-pod-network.b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" Workload="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f270), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-ktt6k", "timestamp":"2025-07-07 00:02:58.757573335 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.757 [INFO][4734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.757 [INFO][4734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.757 [INFO][4734] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.764 [INFO][4734] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" host="localhost" Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.767 [INFO][4734] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.770 [INFO][4734] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.772 [INFO][4734] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.773 [INFO][4734] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.773 [INFO][4734] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" host="localhost" Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.774 [INFO][4734] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.777 [INFO][4734] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" host="localhost" Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.788 [INFO][4734] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" host="localhost" Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.789 [INFO][4734] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" host="localhost" Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.789 [INFO][4734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:02:58.817379 containerd[1551]: 2025-07-07 00:02:58.789 [INFO][4734] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" HandleID="k8s-pod-network.b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" Workload="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:02:58.818358 containerd[1551]: 2025-07-07 00:02:58.794 [INFO][4722] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktt6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7938ba32-3622-4550-916a-e1e1fa111816", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-ktt6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c9b2879a11", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:58.818358 containerd[1551]: 2025-07-07 00:02:58.794 [INFO][4722] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktt6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:02:58.818358 containerd[1551]: 2025-07-07 00:02:58.794 [INFO][4722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c9b2879a11 ContainerID="b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktt6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:02:58.818358 containerd[1551]: 2025-07-07 00:02:58.798 [INFO][4722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktt6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:02:58.818358 containerd[1551]: 
2025-07-07 00:02:58.800 [INFO][4722] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktt6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7938ba32-3622-4550-916a-e1e1fa111816", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c", Pod:"coredns-674b8bbfcf-ktt6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c9b2879a11", MAC:"aa:c0:2b:66:5a:a7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:02:58.818358 containerd[1551]: 2025-07-07 00:02:58.814 [INFO][4722] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ktt6k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:02:58.865754 containerd[1551]: time="2025-07-07T00:02:58.865315446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:02:58.865754 containerd[1551]: time="2025-07-07T00:02:58.865356394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:02:58.865754 containerd[1551]: time="2025-07-07T00:02:58.865377202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:58.865754 containerd[1551]: time="2025-07-07T00:02:58.865458259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:02:58.905285 systemd[1]: Started cri-containerd-b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c.scope - libcontainer container b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c. 
Jul 7 00:02:58.913711 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:02:58.934426 containerd[1551]: time="2025-07-07T00:02:58.934403673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ktt6k,Uid:7938ba32-3622-4550-916a-e1e1fa111816,Namespace:kube-system,Attempt:1,} returns sandbox id \"b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c\"" Jul 7 00:02:58.937893 containerd[1551]: time="2025-07-07T00:02:58.937806137Z" level=info msg="CreateContainer within sandbox \"b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:02:58.943663 containerd[1551]: time="2025-07-07T00:02:58.943492797Z" level=info msg="CreateContainer within sandbox \"b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ec017f8935e2152310835174c1404875b5d73930d9748a39408fa7d3b36b963\"" Jul 7 00:02:58.944252 containerd[1551]: time="2025-07-07T00:02:58.944192558Z" level=info msg="StartContainer for \"0ec017f8935e2152310835174c1404875b5d73930d9748a39408fa7d3b36b963\"" Jul 7 00:02:58.971312 systemd[1]: Started cri-containerd-0ec017f8935e2152310835174c1404875b5d73930d9748a39408fa7d3b36b963.scope - libcontainer container 0ec017f8935e2152310835174c1404875b5d73930d9748a39408fa7d3b36b963. Jul 7 00:02:58.994486 containerd[1551]: time="2025-07-07T00:02:58.994459401Z" level=info msg="StartContainer for \"0ec017f8935e2152310835174c1404875b5d73930d9748a39408fa7d3b36b963\" returns successfully" Jul 7 00:02:59.347648 kubelet[2751]: I0707 00:02:59.347536 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ktt6k" podStartSLOduration=39.347517766 podStartE2EDuration="39.347517766s" podCreationTimestamp="2025-07-07 00:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:59.34721877 +0000 UTC m=+44.920646784" watchObservedRunningTime="2025-07-07 00:02:59.347517766 +0000 UTC m=+44.920945779" Jul 7 00:02:59.614918 containerd[1551]: time="2025-07-07T00:02:59.614544464Z" level=info msg="StopPodSandbox for \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\"" Jul 7 00:02:59.632368 containerd[1551]: time="2025-07-07T00:02:59.632344365Z" level=info msg="StopPodSandbox for \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\"" Jul 7 00:02:59.648517 containerd[1551]: time="2025-07-07T00:02:59.633289783Z" level=info msg="StopPodSandbox for \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\"" Jul 7 00:02:59.648517 containerd[1551]: time="2025-07-07T00:02:59.634681896Z" level=info msg="StopPodSandbox for \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\"" Jul 7 00:02:59.636470 systemd-networkd[1462]: califb11e26d09e: Gained IPv6LL Jul 7 00:02:59.764262 systemd-networkd[1462]: cali5b87182df7e: Gained IPv6LL Jul 7 00:02:59.784957 containerd[1551]: 2025-07-07 00:02:59.730 [INFO][4884] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Jul 7 00:02:59.784957 containerd[1551]: 2025-07-07 00:02:59.731 [INFO][4884] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" iface="eth0" netns="/var/run/netns/cni-6c8dbaeb-cffc-8f0a-329d-04f4f08206ed" Jul 7 00:02:59.784957 containerd[1551]: 2025-07-07 00:02:59.731 [INFO][4884] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" iface="eth0" netns="/var/run/netns/cni-6c8dbaeb-cffc-8f0a-329d-04f4f08206ed" Jul 7 00:02:59.784957 containerd[1551]: 2025-07-07 00:02:59.731 [INFO][4884] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" iface="eth0" netns="/var/run/netns/cni-6c8dbaeb-cffc-8f0a-329d-04f4f08206ed" Jul 7 00:02:59.784957 containerd[1551]: 2025-07-07 00:02:59.731 [INFO][4884] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Jul 7 00:02:59.784957 containerd[1551]: 2025-07-07 00:02:59.731 [INFO][4884] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Jul 7 00:02:59.784957 containerd[1551]: 2025-07-07 00:02:59.750 [INFO][4908] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" HandleID="k8s-pod-network.8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Workload="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:02:59.784957 containerd[1551]: 2025-07-07 00:02:59.750 [INFO][4908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:59.784957 containerd[1551]: 2025-07-07 00:02:59.751 [INFO][4908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:59.784957 containerd[1551]: 2025-07-07 00:02:59.772 [WARNING][4908] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" HandleID="k8s-pod-network.8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Workload="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:02:59.784957 containerd[1551]: 2025-07-07 00:02:59.772 [INFO][4908] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" HandleID="k8s-pod-network.8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Workload="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:02:59.784957 containerd[1551]: 2025-07-07 00:02:59.776 [INFO][4908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:59.784957 containerd[1551]: 2025-07-07 00:02:59.780 [INFO][4884] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Jul 7 00:02:59.787352 containerd[1551]: time="2025-07-07T00:02:59.785320243Z" level=info msg="TearDown network for sandbox \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\" successfully" Jul 7 00:02:59.787352 containerd[1551]: time="2025-07-07T00:02:59.785338219Z" level=info msg="StopPodSandbox for \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\" returns successfully" Jul 7 00:02:59.786865 systemd[1]: run-netns-cni\x2d6c8dbaeb\x2dcffc\x2d8f0a\x2d329d\x2d04f4f08206ed.mount: Deactivated successfully. 
Jul 7 00:02:59.836277 containerd[1551]: time="2025-07-07T00:02:59.789160412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-m5kcn,Uid:14bb0f88-ca72-41a6-bf6b-278ec258254c,Namespace:calico-system,Attempt:1,}" Jul 7 00:02:59.836277 containerd[1551]: 2025-07-07 00:02:59.756 [INFO][4880] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Jul 7 00:02:59.836277 containerd[1551]: 2025-07-07 00:02:59.756 [INFO][4880] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" iface="eth0" netns="/var/run/netns/cni-a5972f1e-83a0-b7d4-a4f2-04595a1c25e9" Jul 7 00:02:59.836277 containerd[1551]: 2025-07-07 00:02:59.757 [INFO][4880] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" iface="eth0" netns="/var/run/netns/cni-a5972f1e-83a0-b7d4-a4f2-04595a1c25e9" Jul 7 00:02:59.836277 containerd[1551]: 2025-07-07 00:02:59.759 [INFO][4880] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" iface="eth0" netns="/var/run/netns/cni-a5972f1e-83a0-b7d4-a4f2-04595a1c25e9" Jul 7 00:02:59.836277 containerd[1551]: 2025-07-07 00:02:59.759 [INFO][4880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Jul 7 00:02:59.836277 containerd[1551]: 2025-07-07 00:02:59.759 [INFO][4880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Jul 7 00:02:59.836277 containerd[1551]: 2025-07-07 00:02:59.777 [INFO][4915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" HandleID="k8s-pod-network.df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Workload="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:02:59.836277 containerd[1551]: 2025-07-07 00:02:59.777 [INFO][4915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:59.836277 containerd[1551]: 2025-07-07 00:02:59.777 [INFO][4915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:59.836277 containerd[1551]: 2025-07-07 00:02:59.829 [WARNING][4915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" HandleID="k8s-pod-network.df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Workload="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:02:59.836277 containerd[1551]: 2025-07-07 00:02:59.829 [INFO][4915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" HandleID="k8s-pod-network.df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Workload="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:02:59.836277 containerd[1551]: 2025-07-07 00:02:59.832 [INFO][4915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:59.836277 containerd[1551]: 2025-07-07 00:02:59.834 [INFO][4880] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Jul 7 00:02:59.875977 containerd[1551]: time="2025-07-07T00:02:59.836467319Z" level=info msg="TearDown network for sandbox \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\" successfully" Jul 7 00:02:59.875977 containerd[1551]: time="2025-07-07T00:02:59.836482491Z" level=info msg="StopPodSandbox for \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\" returns successfully" Jul 7 00:02:59.875977 containerd[1551]: time="2025-07-07T00:02:59.838468539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c697bf8b4-g6224,Uid:4d25b825-a10e-406a-bd66-7d8888f8d3a4,Namespace:calico-system,Attempt:1,}" Jul 7 00:02:59.875977 containerd[1551]: 2025-07-07 00:02:59.775 [INFO][4879] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Jul 7 00:02:59.875977 containerd[1551]: 2025-07-07 00:02:59.776 [INFO][4879] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" iface="eth0" netns="/var/run/netns/cni-1bf21f8c-bce4-289b-1771-19c4cf856076" Jul 7 00:02:59.875977 containerd[1551]: 2025-07-07 00:02:59.777 [INFO][4879] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" iface="eth0" netns="/var/run/netns/cni-1bf21f8c-bce4-289b-1771-19c4cf856076" Jul 7 00:02:59.875977 containerd[1551]: 2025-07-07 00:02:59.778 [INFO][4879] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" iface="eth0" netns="/var/run/netns/cni-1bf21f8c-bce4-289b-1771-19c4cf856076" Jul 7 00:02:59.875977 containerd[1551]: 2025-07-07 00:02:59.779 [INFO][4879] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Jul 7 00:02:59.875977 containerd[1551]: 2025-07-07 00:02:59.779 [INFO][4879] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Jul 7 00:02:59.875977 containerd[1551]: 2025-07-07 00:02:59.842 [INFO][4922] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" HandleID="k8s-pod-network.812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Workload="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:02:59.875977 containerd[1551]: 2025-07-07 00:02:59.842 [INFO][4922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:59.875977 containerd[1551]: 2025-07-07 00:02:59.842 [INFO][4922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:59.875977 containerd[1551]: 2025-07-07 00:02:59.849 [WARNING][4922] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" HandleID="k8s-pod-network.812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Workload="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:02:59.875977 containerd[1551]: 2025-07-07 00:02:59.849 [INFO][4922] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" HandleID="k8s-pod-network.812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Workload="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:02:59.875977 containerd[1551]: 2025-07-07 00:02:59.850 [INFO][4922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:59.875977 containerd[1551]: 2025-07-07 00:02:59.853 [INFO][4879] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Jul 7 00:02:59.875977 containerd[1551]: time="2025-07-07T00:02:59.856016356Z" level=info msg="TearDown network for sandbox \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\" successfully" Jul 7 00:02:59.875977 containerd[1551]: time="2025-07-07T00:02:59.856059522Z" level=info msg="StopPodSandbox for \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\" returns successfully" Jul 7 00:02:59.875977 containerd[1551]: time="2025-07-07T00:02:59.856525389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26jsb,Uid:7ea19b78-cbc5-4bff-999a-89047d422683,Namespace:calico-system,Attempt:1,}" Jul 7 00:02:59.838278 systemd[1]: run-netns-cni\x2da5972f1e\x2d83a0\x2db7d4\x2da4f2\x2d04595a1c25e9.mount: Deactivated successfully. Jul 7 00:02:59.857433 systemd[1]: run-netns-cni\x2d1bf21f8c\x2dbce4\x2d289b\x2d1771\x2d19c4cf856076.mount: Deactivated successfully. Jul 7 00:02:59.890043 containerd[1551]: 2025-07-07 00:02:59.826 [INFO][4891] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Jul 7 00:02:59.890043 containerd[1551]: 2025-07-07 00:02:59.826 [INFO][4891] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" iface="eth0" netns="/var/run/netns/cni-2e4697dd-1f9a-e137-6e64-da1b9f918185" Jul 7 00:02:59.890043 containerd[1551]: 2025-07-07 00:02:59.827 [INFO][4891] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" iface="eth0" netns="/var/run/netns/cni-2e4697dd-1f9a-e137-6e64-da1b9f918185" Jul 7 00:02:59.890043 containerd[1551]: 2025-07-07 00:02:59.829 [INFO][4891] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" iface="eth0" netns="/var/run/netns/cni-2e4697dd-1f9a-e137-6e64-da1b9f918185" Jul 7 00:02:59.890043 containerd[1551]: 2025-07-07 00:02:59.829 [INFO][4891] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Jul 7 00:02:59.890043 containerd[1551]: 2025-07-07 00:02:59.829 [INFO][4891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Jul 7 00:02:59.890043 containerd[1551]: 2025-07-07 00:02:59.863 [INFO][4928] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" HandleID="k8s-pod-network.8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:02:59.890043 containerd[1551]: 2025-07-07 00:02:59.863 [INFO][4928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:02:59.890043 containerd[1551]: 2025-07-07 00:02:59.863 [INFO][4928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:02:59.890043 containerd[1551]: 2025-07-07 00:02:59.885 [WARNING][4928] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" HandleID="k8s-pod-network.8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:02:59.890043 containerd[1551]: 2025-07-07 00:02:59.885 [INFO][4928] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" HandleID="k8s-pod-network.8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:02:59.890043 containerd[1551]: 2025-07-07 00:02:59.886 [INFO][4928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:02:59.890043 containerd[1551]: 2025-07-07 00:02:59.888 [INFO][4891] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Jul 7 00:02:59.891724 containerd[1551]: time="2025-07-07T00:02:59.890624826Z" level=info msg="TearDown network for sandbox \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\" successfully" Jul 7 00:02:59.891724 containerd[1551]: time="2025-07-07T00:02:59.890644274Z" level=info msg="StopPodSandbox for \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\" returns successfully" Jul 7 00:02:59.892287 containerd[1551]: time="2025-07-07T00:02:59.892033236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb89db9fc-bnjpd,Uid:390c0a19-48f4-4797-8196-5ec15c21cefb,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:02:59.893460 systemd[1]: run-netns-cni\x2d2e4697dd\x2d1f9a\x2de137\x2d6e64\x2dda1b9f918185.mount: Deactivated successfully. 
Jul 7 00:02:59.956438 systemd-networkd[1462]: cali2c9b2879a11: Gained IPv6LL Jul 7 00:03:00.100203 systemd-networkd[1462]: cali53f11bdaa29: Link UP Jul 7 00:03:00.101516 systemd-networkd[1462]: cali53f11bdaa29: Gained carrier Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:02:59.960 [INFO][4939] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:02:59.977 [INFO][4939] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--26jsb-eth0 csi-node-driver- calico-system 7ea19b78-cbc5-4bff-999a-89047d422683 961 0 2025-07-07 00:02:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-26jsb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali53f11bdaa29 [] [] }} ContainerID="97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" Namespace="calico-system" Pod="csi-node-driver-26jsb" WorkloadEndpoint="localhost-k8s-csi--node--driver--26jsb-" Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:02:59.977 [INFO][4939] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" Namespace="calico-system" Pod="csi-node-driver-26jsb" WorkloadEndpoint="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.023 [INFO][4981] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" HandleID="k8s-pod-network.97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" Workload="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.024 [INFO][4981] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" HandleID="k8s-pod-network.97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" Workload="localhost-k8s-csi--node--driver--26jsb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-26jsb", "timestamp":"2025-07-07 00:03:00.023506698 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.024 [INFO][4981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.024 [INFO][4981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.024 [INFO][4981] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.032 [INFO][4981] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" host="localhost" Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.041 [INFO][4981] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.049 [INFO][4981] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.051 [INFO][4981] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.059 [INFO][4981] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.059 [INFO][4981] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" host="localhost" Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.061 [INFO][4981] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55 Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.075 [INFO][4981] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" host="localhost" Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.085 [INFO][4981] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" host="localhost" Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.086 [INFO][4981] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" host="localhost" Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.086 [INFO][4981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:03:00.117734 containerd[1551]: 2025-07-07 00:03:00.086 [INFO][4981] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" HandleID="k8s-pod-network.97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" Workload="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:03:00.120607 containerd[1551]: 2025-07-07 00:03:00.091 [INFO][4939] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" Namespace="calico-system" Pod="csi-node-driver-26jsb" WorkloadEndpoint="localhost-k8s-csi--node--driver--26jsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--26jsb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ea19b78-cbc5-4bff-999a-89047d422683", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-26jsb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali53f11bdaa29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:00.120607 containerd[1551]: 2025-07-07 00:03:00.092 [INFO][4939] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" Namespace="calico-system" Pod="csi-node-driver-26jsb" WorkloadEndpoint="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:03:00.120607 containerd[1551]: 2025-07-07 00:03:00.093 [INFO][4939] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53f11bdaa29 ContainerID="97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" Namespace="calico-system" Pod="csi-node-driver-26jsb" WorkloadEndpoint="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:03:00.120607 containerd[1551]: 2025-07-07 00:03:00.102 [INFO][4939] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" Namespace="calico-system" Pod="csi-node-driver-26jsb" WorkloadEndpoint="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:03:00.120607 containerd[1551]: 2025-07-07 00:03:00.102 [INFO][4939] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" Namespace="calico-system" Pod="csi-node-driver-26jsb" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--26jsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--26jsb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ea19b78-cbc5-4bff-999a-89047d422683", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55", Pod:"csi-node-driver-26jsb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali53f11bdaa29", MAC:"3a:b5:a7:25:9c:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:00.120607 containerd[1551]: 2025-07-07 00:03:00.113 [INFO][4939] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55" Namespace="calico-system" Pod="csi-node-driver-26jsb" WorkloadEndpoint="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:03:00.200421 containerd[1551]: time="2025-07-07T00:03:00.199800823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:03:00.200421 containerd[1551]: time="2025-07-07T00:03:00.200026081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:03:00.200421 containerd[1551]: time="2025-07-07T00:03:00.200035349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:03:00.201714 containerd[1551]: time="2025-07-07T00:03:00.201123568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:03:00.236289 systemd-networkd[1462]: cali3531d81f04b: Link UP Jul 7 00:03:00.238953 systemd-networkd[1462]: cali3531d81f04b: Gained carrier Jul 7 00:03:00.252545 systemd[1]: Started cri-containerd-97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55.scope - libcontainer container 97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55. 
Jul 7 00:03:00.258753 kubelet[2751]: I0707 00:03:00.258584 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:02:59.997 [INFO][4961] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.019 [INFO][4961] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0 calico-apiserver-7cb89db9fc- calico-apiserver 390c0a19-48f4-4797-8196-5ec15c21cefb 962 0 2025-07-07 00:02:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cb89db9fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cb89db9fc-bnjpd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3531d81f04b [] [] }} ContainerID="95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-bnjpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-" Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.019 [INFO][4961] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-bnjpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.084 [INFO][4994] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" HandleID="k8s-pod-network.95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.085 [INFO][4994] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" HandleID="k8s-pod-network.95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cb89db9fc-bnjpd", "timestamp":"2025-07-07 00:03:00.084351965 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.085 [INFO][4994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.086 [INFO][4994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.086 [INFO][4994] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.135 [INFO][4994] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" host="localhost" Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.156 [INFO][4994] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.169 [INFO][4994] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.173 [INFO][4994] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.177 [INFO][4994] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.177 [INFO][4994] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" host="localhost" Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.179 [INFO][4994] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282 Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.188 [INFO][4994] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" host="localhost" Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.214 [INFO][4994] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" host="localhost" Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.214 [INFO][4994] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" host="localhost" Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.214 [INFO][4994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:03:00.271641 containerd[1551]: 2025-07-07 00:03:00.215 [INFO][4994] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" HandleID="k8s-pod-network.95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:03:00.272300 containerd[1551]: 2025-07-07 00:03:00.224 [INFO][4961] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-bnjpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0", GenerateName:"calico-apiserver-7cb89db9fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"390c0a19-48f4-4797-8196-5ec15c21cefb", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb89db9fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cb89db9fc-bnjpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3531d81f04b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:00.272300 containerd[1551]: 2025-07-07 00:03:00.224 [INFO][4961] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-bnjpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:03:00.272300 containerd[1551]: 2025-07-07 00:03:00.224 [INFO][4961] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3531d81f04b ContainerID="95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-bnjpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:03:00.272300 containerd[1551]: 2025-07-07 00:03:00.239 [INFO][4961] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-bnjpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:03:00.272300 containerd[1551]: 2025-07-07 00:03:00.240 [INFO][4961] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-bnjpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0", GenerateName:"calico-apiserver-7cb89db9fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"390c0a19-48f4-4797-8196-5ec15c21cefb", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb89db9fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282", Pod:"calico-apiserver-7cb89db9fc-bnjpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3531d81f04b", MAC:"ea:3d:62:52:85:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:00.272300 containerd[1551]: 2025-07-07 00:03:00.269 [INFO][4961] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282" Namespace="calico-apiserver" Pod="calico-apiserver-7cb89db9fc-bnjpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:03:00.277713 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:03:00.332232 systemd-networkd[1462]: cali5ef5609b369: Link UP Jul 7 00:03:00.335322 systemd-networkd[1462]: cali5ef5609b369: Gained carrier Jul 7 00:03:00.337830 containerd[1551]: time="2025-07-07T00:03:00.337798777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26jsb,Uid:7ea19b78-cbc5-4bff-999a-89047d422683,Namespace:calico-system,Attempt:1,} returns sandbox id \"97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55\"" Jul 7 00:03:00.341263 containerd[1551]: time="2025-07-07T00:03:00.341187230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:03:00.341369 containerd[1551]: time="2025-07-07T00:03:00.341223192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:03:00.341799 containerd[1551]: time="2025-07-07T00:03:00.341753945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:03:00.341952 containerd[1551]: time="2025-07-07T00:03:00.341937326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:03:00.370428 systemd[1]: Started cri-containerd-95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282.scope - libcontainer container 95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282. Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.000 [INFO][4947] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.036 [INFO][4947] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0 calico-kube-controllers-6c697bf8b4- calico-system 4d25b825-a10e-406a-bd66-7d8888f8d3a4 960 0 2025-07-07 00:02:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c697bf8b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6c697bf8b4-g6224 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5ef5609b369 [] [] }} ContainerID="2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" Namespace="calico-system" Pod="calico-kube-controllers-6c697bf8b4-g6224" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-" Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.037 [INFO][4947] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" Namespace="calico-system" Pod="calico-kube-controllers-6c697bf8b4-g6224" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.142 [INFO][5005] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" HandleID="k8s-pod-network.2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" Workload="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.142 [INFO][5005] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" HandleID="k8s-pod-network.2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" Workload="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6c697bf8b4-g6224", "timestamp":"2025-07-07 00:03:00.142022251 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.142 [INFO][5005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.215 [INFO][5005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.216 [INFO][5005] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.241 [INFO][5005] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" host="localhost" Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.266 [INFO][5005] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.274 [INFO][5005] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.277 [INFO][5005] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.282 [INFO][5005] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.282 [INFO][5005] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" host="localhost" Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.285 [INFO][5005] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29 Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.294 [INFO][5005] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" host="localhost" Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.306 [INFO][5005] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" host="localhost" Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.306 [INFO][5005] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" host="localhost" Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.306 [INFO][5005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:03:00.371546 containerd[1551]: 2025-07-07 00:03:00.306 [INFO][5005] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" HandleID="k8s-pod-network.2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" Workload="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:03:00.374242 containerd[1551]: 2025-07-07 00:03:00.320 [INFO][4947] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" Namespace="calico-system" Pod="calico-kube-controllers-6c697bf8b4-g6224" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0", GenerateName:"calico-kube-controllers-6c697bf8b4-", Namespace:"calico-system", SelfLink:"", UID:"4d25b825-a10e-406a-bd66-7d8888f8d3a4", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c697bf8b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6c697bf8b4-g6224", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ef5609b369", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:00.374242 containerd[1551]: 2025-07-07 00:03:00.321 [INFO][4947] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" Namespace="calico-system" Pod="calico-kube-controllers-6c697bf8b4-g6224" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:03:00.374242 containerd[1551]: 2025-07-07 00:03:00.321 [INFO][4947] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ef5609b369 ContainerID="2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" Namespace="calico-system" Pod="calico-kube-controllers-6c697bf8b4-g6224" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:03:00.374242 containerd[1551]: 2025-07-07 00:03:00.337 [INFO][4947] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" Namespace="calico-system" Pod="calico-kube-controllers-6c697bf8b4-g6224" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:03:00.374242 containerd[1551]: 2025-07-07 00:03:00.339 [INFO][4947] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" Namespace="calico-system" Pod="calico-kube-controllers-6c697bf8b4-g6224" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0", GenerateName:"calico-kube-controllers-6c697bf8b4-", Namespace:"calico-system", SelfLink:"", UID:"4d25b825-a10e-406a-bd66-7d8888f8d3a4", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c697bf8b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29", Pod:"calico-kube-controllers-6c697bf8b4-g6224", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ef5609b369", MAC:"b6:ea:79:73:2e:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:00.374242 containerd[1551]: 2025-07-07 00:03:00.359 [INFO][4947] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29" Namespace="calico-system" Pod="calico-kube-controllers-6c697bf8b4-g6224" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:03:00.421429 containerd[1551]: time="2025-07-07T00:03:00.420496185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:03:00.421429 containerd[1551]: time="2025-07-07T00:03:00.421238059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:03:00.421429 containerd[1551]: time="2025-07-07T00:03:00.421274397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:03:00.421720 containerd[1551]: time="2025-07-07T00:03:00.421612924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:03:00.434406 systemd-networkd[1462]: cali2213ff95c59: Link UP Jul 7 00:03:00.435931 systemd-networkd[1462]: cali2213ff95c59: Gained carrier Jul 7 00:03:00.437660 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:03:00.452527 systemd[1]: Started cri-containerd-2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29.scope - libcontainer container 2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29. Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.008 [INFO][4959] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.031 [INFO][4959] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0 goldmane-768f4c5c69- calico-system 14bb0f88-ca72-41a6-bf6b-278ec258254c 959 0 2025-07-07 00:02:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-m5kcn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2213ff95c59 [] [] }} ContainerID="feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" Namespace="calico-system" Pod="goldmane-768f4c5c69-m5kcn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m5kcn-" Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.032 [INFO][4959] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" Namespace="calico-system" Pod="goldmane-768f4c5c69-m5kcn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.156 [INFO][4999] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" HandleID="k8s-pod-network.feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" Workload="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.156 [INFO][4999] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" HandleID="k8s-pod-network.feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" Workload="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004faf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-m5kcn", "timestamp":"2025-07-07 00:03:00.156292137 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.156 [INFO][4999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.306 [INFO][4999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.306 [INFO][4999] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.350 [INFO][4999] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" host="localhost" Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.377 [INFO][4999] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.385 [INFO][4999] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.389 [INFO][4999] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.399 [INFO][4999] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.399 [INFO][4999] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" host="localhost" Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.403 [INFO][4999] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.409 [INFO][4999] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" host="localhost" Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.419 [INFO][4999] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" host="localhost" Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.419 [INFO][4999] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" host="localhost" Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.420 [INFO][4999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:03:00.458074 containerd[1551]: 2025-07-07 00:03:00.420 [INFO][4999] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" HandleID="k8s-pod-network.feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" Workload="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:03:00.459275 containerd[1551]: 2025-07-07 00:03:00.426 [INFO][4959] cni-plugin/k8s.go 418: Populated endpoint ContainerID="feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" Namespace="calico-system" Pod="goldmane-768f4c5c69-m5kcn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"14bb0f88-ca72-41a6-bf6b-278ec258254c", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-m5kcn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2213ff95c59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:00.459275 containerd[1551]: 2025-07-07 00:03:00.427 [INFO][4959] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" Namespace="calico-system" Pod="goldmane-768f4c5c69-m5kcn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:03:00.459275 containerd[1551]: 2025-07-07 00:03:00.427 [INFO][4959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2213ff95c59 ContainerID="feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" Namespace="calico-system" Pod="goldmane-768f4c5c69-m5kcn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:03:00.459275 containerd[1551]: 2025-07-07 00:03:00.437 [INFO][4959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" Namespace="calico-system" Pod="goldmane-768f4c5c69-m5kcn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:03:00.459275 containerd[1551]: 2025-07-07 00:03:00.437 [INFO][4959] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" Namespace="calico-system" Pod="goldmane-768f4c5c69-m5kcn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"14bb0f88-ca72-41a6-bf6b-278ec258254c", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd", Pod:"goldmane-768f4c5c69-m5kcn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2213ff95c59", MAC:"ee:ab:6a:a1:db:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:00.459275 containerd[1551]: 2025-07-07 00:03:00.450 [INFO][4959] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd" Namespace="calico-system" Pod="goldmane-768f4c5c69-m5kcn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:03:00.489934 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:03:00.505703 containerd[1551]: time="2025-07-07T00:03:00.505071721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb89db9fc-bnjpd,Uid:390c0a19-48f4-4797-8196-5ec15c21cefb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282\"" Jul 7 00:03:00.513764 containerd[1551]: time="2025-07-07T00:03:00.510295855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:03:00.513764 containerd[1551]: time="2025-07-07T00:03:00.510333011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:03:00.513764 containerd[1551]: time="2025-07-07T00:03:00.510343255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:03:00.513764 containerd[1551]: time="2025-07-07T00:03:00.510391207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:03:00.537736 systemd[1]: Started cri-containerd-feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd.scope - libcontainer container feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd. 
Jul 7 00:03:00.551344 containerd[1551]: time="2025-07-07T00:03:00.551323408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c697bf8b4-g6224,Uid:4d25b825-a10e-406a-bd66-7d8888f8d3a4,Namespace:calico-system,Attempt:1,} returns sandbox id \"2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29\"" Jul 7 00:03:00.552795 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:03:00.578532 containerd[1551]: time="2025-07-07T00:03:00.578430373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-m5kcn,Uid:14bb0f88-ca72-41a6-bf6b-278ec258254c,Namespace:calico-system,Attempt:1,} returns sandbox id \"feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd\"" Jul 7 00:03:01.051141 containerd[1551]: time="2025-07-07T00:03:01.050876255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:01.052166 containerd[1551]: time="2025-07-07T00:03:01.052112153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 00:03:01.053152 containerd[1551]: time="2025-07-07T00:03:01.053068210Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:01.054342 containerd[1551]: time="2025-07-07T00:03:01.054307581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:01.054987 containerd[1551]: time="2025-07-07T00:03:01.054868479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.044381769s" Jul 7 00:03:01.054987 containerd[1551]: time="2025-07-07T00:03:01.054890087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:03:01.065711 containerd[1551]: time="2025-07-07T00:03:01.064999703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 00:03:01.068799 containerd[1551]: time="2025-07-07T00:03:01.068770586Z" level=info msg="CreateContainer within sandbox \"9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:03:01.109902 containerd[1551]: time="2025-07-07T00:03:01.109811169Z" level=info msg="CreateContainer within sandbox \"9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"86f77323bbde75000b403a14cda845796cdc118eb38faf4a1580967829dcb2f6\"" Jul 7 00:03:01.110606 containerd[1551]: time="2025-07-07T00:03:01.110333956Z" level=info msg="StartContainer for \"86f77323bbde75000b403a14cda845796cdc118eb38faf4a1580967829dcb2f6\"" Jul 7 00:03:01.135211 systemd[1]: Started cri-containerd-86f77323bbde75000b403a14cda845796cdc118eb38faf4a1580967829dcb2f6.scope - libcontainer 
container 86f77323bbde75000b403a14cda845796cdc118eb38faf4a1580967829dcb2f6. Jul 7 00:03:01.164134 kernel: bpftool[5293]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 00:03:01.179984 containerd[1551]: time="2025-07-07T00:03:01.179956758Z" level=info msg="StartContainer for \"86f77323bbde75000b403a14cda845796cdc118eb38faf4a1580967829dcb2f6\" returns successfully" Jul 7 00:03:01.306294 kubelet[2751]: I0707 00:03:01.306201 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cb89db9fc-b2bvr" podStartSLOduration=30.25072448 podStartE2EDuration="33.306189521s" podCreationTimestamp="2025-07-07 00:02:28 +0000 UTC" firstStartedPulling="2025-07-07 00:02:58.00934754 +0000 UTC m=+43.582775546" lastFinishedPulling="2025-07-07 00:03:01.064812578 +0000 UTC m=+46.638240587" observedRunningTime="2025-07-07 00:03:01.305973775 +0000 UTC m=+46.879401794" watchObservedRunningTime="2025-07-07 00:03:01.306189521 +0000 UTC m=+46.879617534" Jul 7 00:03:01.428265 systemd-networkd[1462]: cali5ef5609b369: Gained IPv6LL Jul 7 00:03:01.428453 systemd-networkd[1462]: cali53f11bdaa29: Gained IPv6LL Jul 7 00:03:01.458049 systemd-networkd[1462]: vxlan.calico: Link UP Jul 7 00:03:01.458054 systemd-networkd[1462]: vxlan.calico: Gained carrier Jul 7 00:03:01.620365 systemd-networkd[1462]: cali3531d81f04b: Gained IPv6LL Jul 7 00:03:02.068528 systemd-networkd[1462]: cali2213ff95c59: Gained IPv6LL Jul 7 00:03:03.476239 systemd-networkd[1462]: vxlan.calico: Gained IPv6LL Jul 7 00:03:03.894050 containerd[1551]: time="2025-07-07T00:03:03.893997383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:03.901941 containerd[1551]: time="2025-07-07T00:03:03.899808987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 00:03:03.927563 containerd[1551]: time="2025-07-07T00:03:03.927500850Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:03.949686 containerd[1551]: time="2025-07-07T00:03:03.949657647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:03.950421 containerd[1551]: time="2025-07-07T00:03:03.950086082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.88506326s" Jul 7 00:03:03.950421 containerd[1551]: time="2025-07-07T00:03:03.950106886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 00:03:03.960151 containerd[1551]: time="2025-07-07T00:03:03.959992948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:03:04.052255 containerd[1551]: time="2025-07-07T00:03:04.052225341Z" level=info msg="CreateContainer within sandbox \"97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 00:03:04.084496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432191878.mount: Deactivated successfully. Jul 7 00:03:04.092616 containerd[1551]: time="2025-07-07T00:03:04.092590301Z" level=info msg="CreateContainer within sandbox \"97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c42354ac215e7fbe2def131e592653daeb784f7a83d0c825eea866a3650a8429\"" Jul 7 00:03:04.100477 containerd[1551]: time="2025-07-07T00:03:04.100443136Z" level=info msg="StartContainer for \"c42354ac215e7fbe2def131e592653daeb784f7a83d0c825eea866a3650a8429\"" Jul 7 00:03:04.247476 systemd[1]: Started cri-containerd-c42354ac215e7fbe2def131e592653daeb784f7a83d0c825eea866a3650a8429.scope - libcontainer container c42354ac215e7fbe2def131e592653daeb784f7a83d0c825eea866a3650a8429. Jul 7 00:03:04.267451 containerd[1551]: time="2025-07-07T00:03:04.267427396Z" level=info msg="StartContainer for \"c42354ac215e7fbe2def131e592653daeb784f7a83d0c825eea866a3650a8429\" returns successfully" Jul 7 00:03:04.317757 containerd[1551]: time="2025-07-07T00:03:04.317734971Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:04.318283 containerd[1551]: time="2025-07-07T00:03:04.318021274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 00:03:04.319952 containerd[1551]: time="2025-07-07T00:03:04.319619650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 359.609981ms" Jul 7 00:03:04.319952 containerd[1551]: time="2025-07-07T00:03:04.319637624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:03:04.370278 containerd[1551]: time="2025-07-07T00:03:04.370106886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 00:03:04.376587 containerd[1551]: time="2025-07-07T00:03:04.376532817Z" level=info msg="CreateContainer within sandbox \"95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:03:04.392818 containerd[1551]: time="2025-07-07T00:03:04.392775584Z" level=info msg="CreateContainer within sandbox \"95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ceb346a5750ba0df4d6303192d05a305806372da04f81dc582eb2c916f07ce33\"" Jul 7 00:03:04.393284 containerd[1551]: time="2025-07-07T00:03:04.393033759Z" level=info msg="StartContainer for \"ceb346a5750ba0df4d6303192d05a305806372da04f81dc582eb2c916f07ce33\"" Jul 7 00:03:04.413215 systemd[1]: Started cri-containerd-ceb346a5750ba0df4d6303192d05a305806372da04f81dc582eb2c916f07ce33.scope - libcontainer container ceb346a5750ba0df4d6303192d05a305806372da04f81dc582eb2c916f07ce33. 
Jul 7 00:03:04.442018 containerd[1551]: time="2025-07-07T00:03:04.441994310Z" level=info msg="StartContainer for \"ceb346a5750ba0df4d6303192d05a305806372da04f81dc582eb2c916f07ce33\" returns successfully" Jul 7 00:03:05.082513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2229293566.mount: Deactivated successfully. Jul 7 00:03:05.478054 kubelet[2751]: I0707 00:03:05.468073 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cb89db9fc-bnjpd" podStartSLOduration=33.605053424 podStartE2EDuration="37.457539972s" podCreationTimestamp="2025-07-07 00:02:28 +0000 UTC" firstStartedPulling="2025-07-07 00:03:00.517485953 +0000 UTC m=+46.090913959" lastFinishedPulling="2025-07-07 00:03:04.369972498 +0000 UTC m=+49.943400507" observedRunningTime="2025-07-07 00:03:05.45314341 +0000 UTC m=+51.026571439" watchObservedRunningTime="2025-07-07 00:03:05.457539972 +0000 UTC m=+51.030967987" Jul 7 00:03:08.232156 kubelet[2751]: E0707 00:03:08.231508 2751 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.119s" Jul 7 00:03:08.503418 containerd[1551]: time="2025-07-07T00:03:08.503316692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:08.603649 containerd[1551]: time="2025-07-07T00:03:08.583454479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 00:03:08.613760 containerd[1551]: time="2025-07-07T00:03:08.585616490Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:08.629928 containerd[1551]: time="2025-07-07T00:03:08.629847343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:08.630323 containerd[1551]: time="2025-07-07T00:03:08.630306059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.260148999s" Jul 7 00:03:08.630387 containerd[1551]: time="2025-07-07T00:03:08.630330050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 00:03:08.951834 containerd[1551]: time="2025-07-07T00:03:08.951730065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 00:03:09.396541 containerd[1551]: time="2025-07-07T00:03:09.396480566Z" level=info msg="CreateContainer within sandbox \"2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 00:03:09.639551 containerd[1551]: time="2025-07-07T00:03:09.639490711Z" level=info msg="CreateContainer within sandbox \"2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"75eb758da1aef5746ba035a8f66927d8e13dbd80d430db3b24ebb796a38ed27d\"" Jul 7 00:03:09.661467 containerd[1551]: time="2025-07-07T00:03:09.661386246Z" level=info msg="StartContainer for \"75eb758da1aef5746ba035a8f66927d8e13dbd80d430db3b24ebb796a38ed27d\"" Jul 7 00:03:09.941223 systemd[1]: Started cri-containerd-75eb758da1aef5746ba035a8f66927d8e13dbd80d430db3b24ebb796a38ed27d.scope - libcontainer container 75eb758da1aef5746ba035a8f66927d8e13dbd80d430db3b24ebb796a38ed27d. Jul 7 00:03:09.996252 containerd[1551]: time="2025-07-07T00:03:09.996225365Z" level=info msg="StartContainer for \"75eb758da1aef5746ba035a8f66927d8e13dbd80d430db3b24ebb796a38ed27d\" returns successfully" Jul 7 00:03:10.634494 systemd[1]: run-containerd-runc-k8s.io-75eb758da1aef5746ba035a8f66927d8e13dbd80d430db3b24ebb796a38ed27d-runc.LNuGBk.mount: Deactivated successfully. Jul 7 00:03:10.905480 kubelet[2751]: I0707 00:03:10.889499 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c697bf8b4-g6224" podStartSLOduration=31.553676588 podStartE2EDuration="39.825775053s" podCreationTimestamp="2025-07-07 00:02:31 +0000 UTC" firstStartedPulling="2025-07-07 00:03:00.552715343 +0000 UTC m=+46.126143349" lastFinishedPulling="2025-07-07 00:03:08.8248138 +0000 UTC m=+54.398241814" observedRunningTime="2025-07-07 00:03:10.532209738 +0000 UTC m=+56.105637748" watchObservedRunningTime="2025-07-07 00:03:10.825775053 +0000 UTC m=+56.399203066" Jul 7 00:03:12.945773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959785786.mount: Deactivated successfully. Jul 7 00:03:15.401106 containerd[1551]: time="2025-07-07T00:03:15.401056499Z" level=info msg="StopPodSandbox for \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\"" Jul 7 00:03:16.550581 containerd[1551]: time="2025-07-07T00:03:16.550401992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:16.589763 containerd[1551]: time="2025-07-07T00:03:16.589724534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 00:03:16.632855 containerd[1551]: time="2025-07-07T00:03:16.632800268Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:16.679151 containerd[1551]: time="2025-07-07T00:03:16.679045441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:16.689059 containerd[1551]: time="2025-07-07T00:03:16.688910115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 7.732231757s" Jul 7 00:03:16.689059 containerd[1551]: time="2025-07-07T00:03:16.688958282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 00:03:17.008238 containerd[1551]: time="2025-07-07T00:03:17.008145302Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 00:03:17.246256 containerd[1551]: time="2025-07-07T00:03:17.246161732Z" level=info msg="CreateContainer within sandbox \"feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 00:03:17.257082 containerd[1551]: time="2025-07-07T00:03:17.257052920Z" level=info msg="CreateContainer within sandbox \"feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d378c649593d69a8b3d66f9ce921130e9580201c79b718c4d99a60b2c7bf2ce3\"" Jul 7 00:03:17.257704 containerd[1551]: time="2025-07-07T00:03:17.257556333Z" level=info msg="StartContainer for \"d378c649593d69a8b3d66f9ce921130e9580201c79b718c4d99a60b2c7bf2ce3\"" Jul 7 00:03:17.397234 systemd[1]: Started cri-containerd-d378c649593d69a8b3d66f9ce921130e9580201c79b718c4d99a60b2c7bf2ce3.scope - libcontainer container d378c649593d69a8b3d66f9ce921130e9580201c79b718c4d99a60b2c7bf2ce3. Jul 7 00:03:17.444339 containerd[1551]: time="2025-07-07T00:03:17.444300766Z" level=info msg="StartContainer for \"d378c649593d69a8b3d66f9ce921130e9580201c79b718c4d99a60b2c7bf2ce3\" returns successfully" Jul 7 00:03:18.431343 kubelet[2751]: I0707 00:03:18.412649 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-m5kcn" podStartSLOduration=31.996628804 podStartE2EDuration="48.405396922s" podCreationTimestamp="2025-07-07 00:02:30 +0000 UTC" firstStartedPulling="2025-07-07 00:03:00.583227863 +0000 UTC m=+46.156655869" lastFinishedPulling="2025-07-07 00:03:16.991995969 +0000 UTC m=+62.565423987" observedRunningTime="2025-07-07 00:03:18.342388237 +0000 UTC m=+63.915816255" watchObservedRunningTime="2025-07-07 00:03:18.405396922 +0000 UTC m=+63.978824949" Jul 7 00:03:20.407896 containerd[1551]: time="2025-07-07T00:03:20.407808600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:20.438164 containerd[1551]: time="2025-07-07T00:03:20.416680029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 00:03:20.444197 containerd[1551]: time="2025-07-07T00:03:20.444159355Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:20.450297 containerd[1551]: time="2025-07-07T00:03:20.450274654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:20.460506 containerd[1551]: time="2025-07-07T00:03:20.450483822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.442309098s" Jul 7 00:03:20.460506 containerd[1551]: time="2025-07-07T00:03:20.450499549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference 
\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 00:03:20.674751 containerd[1551]: 2025-07-07 00:03:17.665 [WARNING][5621] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0", GenerateName:"calico-apiserver-7cb89db9fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1df52e7b-a2e0-4431-9a2a-1ab12c493fd5", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb89db9fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51", Pod:"calico-apiserver-7cb89db9fc-b2bvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b87182df7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:20.674751 containerd[1551]: 2025-07-07 00:03:17.688 [INFO][5621] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Jul 7 00:03:20.674751 containerd[1551]: 2025-07-07 00:03:17.688 [INFO][5621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" iface="eth0" netns="" Jul 7 00:03:20.674751 containerd[1551]: 2025-07-07 00:03:17.688 [INFO][5621] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Jul 7 00:03:20.674751 containerd[1551]: 2025-07-07 00:03:17.688 [INFO][5621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Jul 7 00:03:20.674751 containerd[1551]: 2025-07-07 00:03:20.509 [INFO][5664] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" HandleID="k8s-pod-network.56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:03:20.674751 containerd[1551]: 2025-07-07 00:03:20.577 [INFO][5664] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:20.674751 containerd[1551]: 2025-07-07 00:03:20.583 [INFO][5664] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:03:20.674751 containerd[1551]: 2025-07-07 00:03:20.669 [WARNING][5664] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" HandleID="k8s-pod-network.56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:03:20.674751 containerd[1551]: 2025-07-07 00:03:20.669 [INFO][5664] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" HandleID="k8s-pod-network.56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:03:20.674751 containerd[1551]: 2025-07-07 00:03:20.671 [INFO][5664] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:20.674751 containerd[1551]: 2025-07-07 00:03:20.673 [INFO][5621] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Jul 7 00:03:20.700254 containerd[1551]: time="2025-07-07T00:03:20.679358650Z" level=info msg="TearDown network for sandbox \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\" successfully" Jul 7 00:03:20.700254 containerd[1551]: time="2025-07-07T00:03:20.679378549Z" level=info msg="StopPodSandbox for \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\" returns successfully" Jul 7 00:03:20.938189 containerd[1551]: time="2025-07-07T00:03:20.938085372Z" level=info msg="CreateContainer within sandbox \"97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 00:03:21.231847 containerd[1551]: time="2025-07-07T00:03:21.231749689Z" level=info msg="CreateContainer within sandbox \"97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fa9670037be5ad1452ca8707c51d98c59df52f471c7ac2cfef0e74f06045f78d\"" Jul 7 00:03:21.266060 containerd[1551]: time="2025-07-07T00:03:21.265748248Z" level=info msg="StartContainer for \"fa9670037be5ad1452ca8707c51d98c59df52f471c7ac2cfef0e74f06045f78d\"" Jul 7 00:03:21.304660 containerd[1551]: time="2025-07-07T00:03:21.304522569Z" level=info msg="RemovePodSandbox for \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\"" Jul 7 00:03:21.321476 containerd[1551]: time="2025-07-07T00:03:21.320953037Z" level=info msg="Forcibly stopping sandbox \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\"" Jul 7 00:03:21.797354 systemd[1]: Started cri-containerd-fa9670037be5ad1452ca8707c51d98c59df52f471c7ac2cfef0e74f06045f78d.scope - libcontainer container fa9670037be5ad1452ca8707c51d98c59df52f471c7ac2cfef0e74f06045f78d. Jul 7 00:03:21.899691 containerd[1551]: time="2025-07-07T00:03:21.894697985Z" level=info msg="StartContainer for \"fa9670037be5ad1452ca8707c51d98c59df52f471c7ac2cfef0e74f06045f78d\" returns successfully" Jul 7 00:03:23.047058 containerd[1551]: 2025-07-07 00:03:22.280 [WARNING][5768] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0", GenerateName:"calico-apiserver-7cb89db9fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1df52e7b-a2e0-4431-9a2a-1ab12c493fd5", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb89db9fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e1cbf273097f833481307172021fa10058f104b37813d33077962212f28ab51", Pod:"calico-apiserver-7cb89db9fc-b2bvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b87182df7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:23.047058 containerd[1551]: 2025-07-07 00:03:22.283 [INFO][5768] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Jul 7 00:03:23.047058 containerd[1551]: 2025-07-07 00:03:22.284 [INFO][5768] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" iface="eth0" netns="" Jul 7 00:03:23.047058 containerd[1551]: 2025-07-07 00:03:22.284 [INFO][5768] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Jul 7 00:03:23.047058 containerd[1551]: 2025-07-07 00:03:22.284 [INFO][5768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Jul 7 00:03:23.047058 containerd[1551]: 2025-07-07 00:03:23.025 [INFO][5804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" HandleID="k8s-pod-network.56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:03:23.047058 containerd[1551]: 2025-07-07 00:03:23.029 [INFO][5804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:23.047058 containerd[1551]: 2025-07-07 00:03:23.030 [INFO][5804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:03:23.047058 containerd[1551]: 2025-07-07 00:03:23.040 [WARNING][5804] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" HandleID="k8s-pod-network.56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:03:23.047058 containerd[1551]: 2025-07-07 00:03:23.040 [INFO][5804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" HandleID="k8s-pod-network.56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--b2bvr-eth0" Jul 7 00:03:23.047058 containerd[1551]: 2025-07-07 00:03:23.041 [INFO][5804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:23.047058 containerd[1551]: 2025-07-07 00:03:23.043 [INFO][5768] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74" Jul 7 00:03:23.047058 containerd[1551]: time="2025-07-07T00:03:23.047041530Z" level=info msg="TearDown network for sandbox \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\" successfully" Jul 7 00:03:23.068963 containerd[1551]: time="2025-07-07T00:03:23.068933609Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:03:23.077808 containerd[1551]: time="2025-07-07T00:03:23.077779299Z" level=info msg="RemovePodSandbox \"56f73a1625c51cd6871c99b381ee99c12af50816fad8b214f27878dbd91efa74\" returns successfully" Jul 7 00:03:23.095828 containerd[1551]: time="2025-07-07T00:03:23.095701463Z" level=info msg="StopPodSandbox for \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\"" Jul 7 00:03:23.353509 containerd[1551]: 2025-07-07 00:03:23.218 [WARNING][5841] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0", GenerateName:"calico-apiserver-7cb89db9fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"390c0a19-48f4-4797-8196-5ec15c21cefb", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb89db9fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282", Pod:"calico-apiserver-7cb89db9fc-bnjpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3531d81f04b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:23.353509 containerd[1551]: 2025-07-07 00:03:23.219 [INFO][5841] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Jul 7 00:03:23.353509 containerd[1551]: 2025-07-07 00:03:23.219 [INFO][5841] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" iface="eth0" netns="" Jul 7 00:03:23.353509 containerd[1551]: 2025-07-07 00:03:23.219 [INFO][5841] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Jul 7 00:03:23.353509 containerd[1551]: 2025-07-07 00:03:23.219 [INFO][5841] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Jul 7 00:03:23.353509 containerd[1551]: 2025-07-07 00:03:23.279 [INFO][5848] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" HandleID="k8s-pod-network.8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:03:23.353509 containerd[1551]: 2025-07-07 00:03:23.296 [INFO][5848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:23.353509 containerd[1551]: 2025-07-07 00:03:23.296 [INFO][5848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:03:23.353509 containerd[1551]: 2025-07-07 00:03:23.347 [WARNING][5848] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" HandleID="k8s-pod-network.8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:03:23.353509 containerd[1551]: 2025-07-07 00:03:23.347 [INFO][5848] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" HandleID="k8s-pod-network.8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:03:23.353509 containerd[1551]: 2025-07-07 00:03:23.348 [INFO][5848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:23.353509 containerd[1551]: 2025-07-07 00:03:23.350 [INFO][5841] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Jul 7 00:03:23.353509 containerd[1551]: time="2025-07-07T00:03:23.353481420Z" level=info msg="TearDown network for sandbox \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\" successfully" Jul 7 00:03:23.407287 containerd[1551]: time="2025-07-07T00:03:23.353498302Z" level=info msg="StopPodSandbox for \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\" returns successfully" Jul 7 00:03:23.480747 containerd[1551]: time="2025-07-07T00:03:23.480507272Z" level=info msg="RemovePodSandbox for \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\"" Jul 7 00:03:23.480747 containerd[1551]: time="2025-07-07T00:03:23.480540259Z" level=info msg="Forcibly stopping sandbox \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\"" Jul 7 00:03:23.589680 containerd[1551]: 2025-07-07 00:03:23.558 [WARNING][5863] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0", GenerateName:"calico-apiserver-7cb89db9fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"390c0a19-48f4-4797-8196-5ec15c21cefb", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb89db9fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95a9aabfc4c25e7babd20f7b186e916fa74decaffa30e79dad2505f329469282", Pod:"calico-apiserver-7cb89db9fc-bnjpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3531d81f04b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:23.589680 containerd[1551]: 2025-07-07 00:03:23.558 [INFO][5863] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Jul 7 00:03:23.589680 containerd[1551]: 2025-07-07 00:03:23.558 [INFO][5863] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" iface="eth0" netns="" Jul 7 00:03:23.589680 containerd[1551]: 2025-07-07 00:03:23.558 [INFO][5863] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Jul 7 00:03:23.589680 containerd[1551]: 2025-07-07 00:03:23.558 [INFO][5863] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Jul 7 00:03:23.589680 containerd[1551]: 2025-07-07 00:03:23.575 [INFO][5870] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" HandleID="k8s-pod-network.8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:03:23.589680 containerd[1551]: 2025-07-07 00:03:23.575 [INFO][5870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:23.589680 containerd[1551]: 2025-07-07 00:03:23.575 [INFO][5870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:03:23.589680 containerd[1551]: 2025-07-07 00:03:23.586 [WARNING][5870] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" HandleID="k8s-pod-network.8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:03:23.589680 containerd[1551]: 2025-07-07 00:03:23.586 [INFO][5870] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" HandleID="k8s-pod-network.8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Workload="localhost-k8s-calico--apiserver--7cb89db9fc--bnjpd-eth0" Jul 7 00:03:23.589680 containerd[1551]: 2025-07-07 00:03:23.586 [INFO][5870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:23.589680 containerd[1551]: 2025-07-07 00:03:23.588 [INFO][5863] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289" Jul 7 00:03:23.612865 containerd[1551]: time="2025-07-07T00:03:23.589697670Z" level=info msg="TearDown network for sandbox \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\" successfully" Jul 7 00:03:23.670279 containerd[1551]: time="2025-07-07T00:03:23.670232425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:03:23.670367 containerd[1551]: time="2025-07-07T00:03:23.670303966Z" level=info msg="RemovePodSandbox \"8171c755ee0bd44002b9c71817821c71c05b7e6bccf387eb19e1bb128a0a3289\" returns successfully" Jul 7 00:03:23.708245 containerd[1551]: time="2025-07-07T00:03:23.708172818Z" level=info msg="StopPodSandbox for \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\"" Jul 7 00:03:23.847588 kubelet[2751]: I0707 00:03:23.482924 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-26jsb" podStartSLOduration=31.715746854 podStartE2EDuration="52.209162019s" podCreationTimestamp="2025-07-07 00:02:31 +0000 UTC" firstStartedPulling="2025-07-07 00:03:00.345887764 +0000 UTC m=+45.919315774" lastFinishedPulling="2025-07-07 00:03:20.839302929 +0000 UTC m=+66.412730939" observedRunningTime="2025-07-07 00:03:23.116635167 +0000 UTC m=+68.690063181" watchObservedRunningTime="2025-07-07 00:03:23.209162019 +0000 UTC m=+68.782590031" Jul 7 00:03:23.942214 containerd[1551]: 2025-07-07 00:03:23.831 [WARNING][5885] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"14bb0f88-ca72-41a6-bf6b-278ec258254c", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd", Pod:"goldmane-768f4c5c69-m5kcn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2213ff95c59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:23.942214 containerd[1551]: 2025-07-07 00:03:23.831 [INFO][5885] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Jul 7 00:03:23.942214 containerd[1551]: 2025-07-07 00:03:23.831 [INFO][5885] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" iface="eth0" netns="" Jul 7 00:03:23.942214 containerd[1551]: 2025-07-07 00:03:23.831 [INFO][5885] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Jul 7 00:03:23.942214 containerd[1551]: 2025-07-07 00:03:23.831 [INFO][5885] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Jul 7 00:03:23.942214 containerd[1551]: 2025-07-07 00:03:23.934 [INFO][5892] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" HandleID="k8s-pod-network.8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Workload="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:03:23.942214 containerd[1551]: 2025-07-07 00:03:23.934 [INFO][5892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:23.942214 containerd[1551]: 2025-07-07 00:03:23.934 [INFO][5892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:03:23.942214 containerd[1551]: 2025-07-07 00:03:23.938 [WARNING][5892] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" HandleID="k8s-pod-network.8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Workload="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:03:23.942214 containerd[1551]: 2025-07-07 00:03:23.938 [INFO][5892] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" HandleID="k8s-pod-network.8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Workload="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:03:23.942214 containerd[1551]: 2025-07-07 00:03:23.939 [INFO][5892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:23.942214 containerd[1551]: 2025-07-07 00:03:23.940 [INFO][5885] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Jul 7 00:03:23.942943 containerd[1551]: time="2025-07-07T00:03:23.942927005Z" level=info msg="TearDown network for sandbox \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\" successfully" Jul 7 00:03:23.942989 containerd[1551]: time="2025-07-07T00:03:23.942980970Z" level=info msg="StopPodSandbox for \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\" returns successfully" Jul 7 00:03:24.076254 containerd[1551]: time="2025-07-07T00:03:24.076224252Z" level=info msg="RemovePodSandbox for \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\"" Jul 7 00:03:24.077388 containerd[1551]: time="2025-07-07T00:03:24.076686214Z" level=info msg="Forcibly stopping sandbox \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\"" Jul 7 00:03:24.241312 containerd[1551]: 2025-07-07 00:03:24.135 [WARNING][5906] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"14bb0f88-ca72-41a6-bf6b-278ec258254c", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"feb4492a9c1face28644c709c1b9e733d08fdcd2f72bf9eb214de27b9856cdfd", Pod:"goldmane-768f4c5c69-m5kcn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2213ff95c59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:24.241312 containerd[1551]: 2025-07-07 00:03:24.135 [INFO][5906] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Jul 7 00:03:24.241312 containerd[1551]: 2025-07-07 00:03:24.135 [INFO][5906] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" iface="eth0" netns="" Jul 7 00:03:24.241312 containerd[1551]: 2025-07-07 00:03:24.135 [INFO][5906] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Jul 7 00:03:24.241312 containerd[1551]: 2025-07-07 00:03:24.135 [INFO][5906] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Jul 7 00:03:24.241312 containerd[1551]: 2025-07-07 00:03:24.164 [INFO][5913] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" HandleID="k8s-pod-network.8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Workload="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:03:24.241312 containerd[1551]: 2025-07-07 00:03:24.164 [INFO][5913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:24.241312 containerd[1551]: 2025-07-07 00:03:24.164 [INFO][5913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:03:24.241312 containerd[1551]: 2025-07-07 00:03:24.210 [WARNING][5913] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" HandleID="k8s-pod-network.8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Workload="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:03:24.241312 containerd[1551]: 2025-07-07 00:03:24.210 [INFO][5913] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" HandleID="k8s-pod-network.8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Workload="localhost-k8s-goldmane--768f4c5c69--m5kcn-eth0" Jul 7 00:03:24.241312 containerd[1551]: 2025-07-07 00:03:24.238 [INFO][5913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:24.241312 containerd[1551]: 2025-07-07 00:03:24.239 [INFO][5906] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73" Jul 7 00:03:24.241312 containerd[1551]: time="2025-07-07T00:03:24.240945115Z" level=info msg="TearDown network for sandbox \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\" successfully" Jul 7 00:03:24.441174 kubelet[2751]: I0707 00:03:24.439016 2751 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 00:03:24.446322 kubelet[2751]: I0707 00:03:24.446306 2751 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 00:03:24.487226 containerd[1551]: time="2025-07-07T00:03:24.487176164Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:03:24.487379 containerd[1551]: time="2025-07-07T00:03:24.487231525Z" level=info msg="RemovePodSandbox \"8ca5859bda66255f4e4666ac3ab2717690b265a6a52daf9109aee4a9d0302f73\" returns successfully" Jul 7 00:03:24.487753 containerd[1551]: time="2025-07-07T00:03:24.487568912Z" level=info msg="StopPodSandbox for \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\"" Jul 7 00:03:24.571034 containerd[1551]: 2025-07-07 00:03:24.542 [WARNING][5927] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7938ba32-3622-4550-916a-e1e1fa111816", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c", Pod:"coredns-674b8bbfcf-ktt6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c9b2879a11", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:24.571034 containerd[1551]: 2025-07-07 00:03:24.542 [INFO][5927] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Jul 7 00:03:24.571034 containerd[1551]: 2025-07-07 00:03:24.542 [INFO][5927] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" iface="eth0" netns="" Jul 7 00:03:24.571034 containerd[1551]: 2025-07-07 00:03:24.542 [INFO][5927] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Jul 7 00:03:24.571034 containerd[1551]: 2025-07-07 00:03:24.542 [INFO][5927] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Jul 7 00:03:24.571034 containerd[1551]: 2025-07-07 00:03:24.562 [INFO][5934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" HandleID="k8s-pod-network.bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Workload="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:03:24.571034 containerd[1551]: 2025-07-07 00:03:24.563 [INFO][5934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:24.571034 containerd[1551]: 2025-07-07 00:03:24.563 [INFO][5934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:03:24.571034 containerd[1551]: 2025-07-07 00:03:24.567 [WARNING][5934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" HandleID="k8s-pod-network.bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Workload="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:03:24.571034 containerd[1551]: 2025-07-07 00:03:24.567 [INFO][5934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" HandleID="k8s-pod-network.bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Workload="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:03:24.571034 containerd[1551]: 2025-07-07 00:03:24.568 [INFO][5934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:24.571034 containerd[1551]: 2025-07-07 00:03:24.569 [INFO][5927] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Jul 7 00:03:24.571034 containerd[1551]: time="2025-07-07T00:03:24.570870538Z" level=info msg="TearDown network for sandbox \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\" successfully" Jul 7 00:03:24.571034 containerd[1551]: time="2025-07-07T00:03:24.570887351Z" level=info msg="StopPodSandbox for \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\" returns successfully" Jul 7 00:03:24.577952 containerd[1551]: time="2025-07-07T00:03:24.571834058Z" level=info msg="RemovePodSandbox for \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\"" Jul 7 00:03:24.577952 containerd[1551]: time="2025-07-07T00:03:24.571851736Z" level=info msg="Forcibly stopping sandbox \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\"" Jul 7 00:03:24.647234 containerd[1551]: 2025-07-07 00:03:24.617 [WARNING][5948] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7938ba32-3622-4550-916a-e1e1fa111816", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7067432b4c5be9b697f50e8479adcc4c3b86549683a35fd2943f75ea7ba710c", Pod:"coredns-674b8bbfcf-ktt6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c9b2879a11", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:24.647234 containerd[1551]: 2025-07-07 00:03:24.617 [INFO][5948] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Jul 7 00:03:24.647234 containerd[1551]: 2025-07-07 00:03:24.617 [INFO][5948] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" iface="eth0" netns="" Jul 7 00:03:24.647234 containerd[1551]: 2025-07-07 00:03:24.617 [INFO][5948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Jul 7 00:03:24.647234 containerd[1551]: 2025-07-07 00:03:24.617 [INFO][5948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Jul 7 00:03:24.647234 containerd[1551]: 2025-07-07 00:03:24.639 [INFO][5956] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" HandleID="k8s-pod-network.bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Workload="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:03:24.647234 containerd[1551]: 2025-07-07 00:03:24.639 [INFO][5956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:24.647234 containerd[1551]: 2025-07-07 00:03:24.639 [INFO][5956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:03:24.647234 containerd[1551]: 2025-07-07 00:03:24.643 [WARNING][5956] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" HandleID="k8s-pod-network.bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Workload="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:03:24.647234 containerd[1551]: 2025-07-07 00:03:24.643 [INFO][5956] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" HandleID="k8s-pod-network.bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Workload="localhost-k8s-coredns--674b8bbfcf--ktt6k-eth0" Jul 7 00:03:24.647234 containerd[1551]: 2025-07-07 00:03:24.644 [INFO][5956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:24.647234 containerd[1551]: 2025-07-07 00:03:24.646 [INFO][5948] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111" Jul 7 00:03:24.650709 containerd[1551]: time="2025-07-07T00:03:24.647259486Z" level=info msg="TearDown network for sandbox \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\" successfully" Jul 7 00:03:24.687419 containerd[1551]: time="2025-07-07T00:03:24.687388152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:03:24.687595 containerd[1551]: time="2025-07-07T00:03:24.687435495Z" level=info msg="RemovePodSandbox \"bafbcf7de9ede0a63cf9cae99b3389977d514347e5668b7d1b0705f933158111\" returns successfully" Jul 7 00:03:24.687879 containerd[1551]: time="2025-07-07T00:03:24.687867156Z" level=info msg="StopPodSandbox for \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\"" Jul 7 00:03:24.741151 containerd[1551]: 2025-07-07 00:03:24.714 [WARNING][5971] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b3c62c35-57e3-46c3-83b5-1109b949cad4", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b", Pod:"coredns-674b8bbfcf-6n2s9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb11e26d09e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:24.741151 containerd[1551]: 2025-07-07 00:03:24.714 [INFO][5971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Jul 7 00:03:24.741151 containerd[1551]: 2025-07-07 00:03:24.714 [INFO][5971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" iface="eth0" netns="" Jul 7 00:03:24.741151 containerd[1551]: 2025-07-07 00:03:24.714 [INFO][5971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Jul 7 00:03:24.741151 containerd[1551]: 2025-07-07 00:03:24.714 [INFO][5971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Jul 7 00:03:24.741151 containerd[1551]: 2025-07-07 00:03:24.731 [INFO][5979] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" HandleID="k8s-pod-network.3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Workload="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:03:24.741151 containerd[1551]: 2025-07-07 00:03:24.731 [INFO][5979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:24.741151 containerd[1551]: 2025-07-07 00:03:24.731 [INFO][5979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:03:24.741151 containerd[1551]: 2025-07-07 00:03:24.736 [WARNING][5979] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" HandleID="k8s-pod-network.3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Workload="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:03:24.741151 containerd[1551]: 2025-07-07 00:03:24.736 [INFO][5979] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" HandleID="k8s-pod-network.3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Workload="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:03:24.741151 containerd[1551]: 2025-07-07 00:03:24.737 [INFO][5979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:24.741151 containerd[1551]: 2025-07-07 00:03:24.739 [INFO][5971] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Jul 7 00:03:24.742397 containerd[1551]: time="2025-07-07T00:03:24.741172729Z" level=info msg="TearDown network for sandbox \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\" successfully" Jul 7 00:03:24.742397 containerd[1551]: time="2025-07-07T00:03:24.741189336Z" level=info msg="StopPodSandbox for \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\" returns successfully" Jul 7 00:03:24.742397 containerd[1551]: time="2025-07-07T00:03:24.741478346Z" level=info msg="RemovePodSandbox for \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\"" Jul 7 00:03:24.742397 containerd[1551]: time="2025-07-07T00:03:24.741495313Z" level=info msg="Forcibly stopping sandbox \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\"" Jul 7 00:03:24.840046 containerd[1551]: 2025-07-07 00:03:24.809 [WARNING][5993] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b3c62c35-57e3-46c3-83b5-1109b949cad4", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"412780556f8366db826b7d1d24ea7b425ec556919319294c59d63ea24ee9526b", Pod:"coredns-674b8bbfcf-6n2s9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb11e26d09e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:24.840046 containerd[1551]: 2025-07-07 00:03:24.810 [INFO][5993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Jul 7 00:03:24.840046 containerd[1551]: 2025-07-07 00:03:24.810 [INFO][5993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" iface="eth0" netns="" Jul 7 00:03:24.840046 containerd[1551]: 2025-07-07 00:03:24.810 [INFO][5993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Jul 7 00:03:24.840046 containerd[1551]: 2025-07-07 00:03:24.810 [INFO][5993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Jul 7 00:03:24.840046 containerd[1551]: 2025-07-07 00:03:24.831 [INFO][6001] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" HandleID="k8s-pod-network.3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Workload="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:03:24.840046 containerd[1551]: 2025-07-07 00:03:24.831 [INFO][6001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:24.840046 containerd[1551]: 2025-07-07 00:03:24.831 [INFO][6001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:03:24.840046 containerd[1551]: 2025-07-07 00:03:24.836 [WARNING][6001] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" HandleID="k8s-pod-network.3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Workload="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:03:24.840046 containerd[1551]: 2025-07-07 00:03:24.836 [INFO][6001] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" HandleID="k8s-pod-network.3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Workload="localhost-k8s-coredns--674b8bbfcf--6n2s9-eth0" Jul 7 00:03:24.840046 containerd[1551]: 2025-07-07 00:03:24.837 [INFO][6001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:24.840046 containerd[1551]: 2025-07-07 00:03:24.838 [INFO][5993] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7" Jul 7 00:03:24.840046 containerd[1551]: time="2025-07-07T00:03:24.840024148Z" level=info msg="TearDown network for sandbox \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\" successfully" Jul 7 00:03:24.843163 containerd[1551]: time="2025-07-07T00:03:24.843144227Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:03:24.843216 containerd[1551]: time="2025-07-07T00:03:24.843184945Z" level=info msg="RemovePodSandbox \"3a6a77b5129056b8489d2bf582529809a530dff1e9c4121600ba72d485d396e7\" returns successfully" Jul 7 00:03:24.846642 containerd[1551]: time="2025-07-07T00:03:24.846494534Z" level=info msg="StopPodSandbox for \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\"" Jul 7 00:03:24.900275 containerd[1551]: 2025-07-07 00:03:24.873 [WARNING][6015] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--26jsb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ea19b78-cbc5-4bff-999a-89047d422683", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55", Pod:"csi-node-driver-26jsb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali53f11bdaa29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:24.900275 containerd[1551]: 2025-07-07 00:03:24.873 [INFO][6015] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Jul 7 00:03:24.900275 containerd[1551]: 2025-07-07 00:03:24.873 [INFO][6015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" iface="eth0" netns="" Jul 7 00:03:24.900275 containerd[1551]: 2025-07-07 00:03:24.873 [INFO][6015] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Jul 7 00:03:24.900275 containerd[1551]: 2025-07-07 00:03:24.873 [INFO][6015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Jul 7 00:03:24.900275 containerd[1551]: 2025-07-07 00:03:24.892 [INFO][6022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" HandleID="k8s-pod-network.812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Workload="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:03:24.900275 containerd[1551]: 2025-07-07 00:03:24.892 [INFO][6022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:24.900275 containerd[1551]: 2025-07-07 00:03:24.892 [INFO][6022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:03:24.900275 containerd[1551]: 2025-07-07 00:03:24.896 [WARNING][6022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" HandleID="k8s-pod-network.812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Workload="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:03:24.900275 containerd[1551]: 2025-07-07 00:03:24.896 [INFO][6022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" HandleID="k8s-pod-network.812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Workload="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:03:24.900275 containerd[1551]: 2025-07-07 00:03:24.897 [INFO][6022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:24.900275 containerd[1551]: 2025-07-07 00:03:24.898 [INFO][6015] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Jul 7 00:03:24.903223 containerd[1551]: time="2025-07-07T00:03:24.900298718Z" level=info msg="TearDown network for sandbox \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\" successfully" Jul 7 00:03:24.903223 containerd[1551]: time="2025-07-07T00:03:24.900326527Z" level=info msg="StopPodSandbox for \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\" returns successfully" Jul 7 00:03:24.903223 containerd[1551]: time="2025-07-07T00:03:24.900655539Z" level=info msg="RemovePodSandbox for \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\"" Jul 7 00:03:24.903223 containerd[1551]: time="2025-07-07T00:03:24.900670960Z" level=info msg="Forcibly stopping sandbox \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\"" Jul 7 00:03:24.957458 containerd[1551]: 2025-07-07 00:03:24.925 [WARNING][6036] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--26jsb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ea19b78-cbc5-4bff-999a-89047d422683", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97baecdfb17c84101e6b3f79f8ad7b91239765ccbda1f00ff4d728127b7c3a55", Pod:"csi-node-driver-26jsb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali53f11bdaa29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:24.957458 containerd[1551]: 2025-07-07 00:03:24.925 [INFO][6036] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Jul 7 00:03:24.957458 containerd[1551]: 2025-07-07 00:03:24.925 [INFO][6036] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" iface="eth0" netns="" Jul 7 00:03:24.957458 containerd[1551]: 2025-07-07 00:03:24.925 [INFO][6036] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Jul 7 00:03:24.957458 containerd[1551]: 2025-07-07 00:03:24.925 [INFO][6036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Jul 7 00:03:24.957458 containerd[1551]: 2025-07-07 00:03:24.945 [INFO][6043] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" HandleID="k8s-pod-network.812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Workload="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:03:24.957458 containerd[1551]: 2025-07-07 00:03:24.945 [INFO][6043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:24.957458 containerd[1551]: 2025-07-07 00:03:24.945 [INFO][6043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:03:24.957458 containerd[1551]: 2025-07-07 00:03:24.952 [WARNING][6043] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" HandleID="k8s-pod-network.812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Workload="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:03:24.957458 containerd[1551]: 2025-07-07 00:03:24.952 [INFO][6043] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" HandleID="k8s-pod-network.812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Workload="localhost-k8s-csi--node--driver--26jsb-eth0" Jul 7 00:03:24.957458 containerd[1551]: 2025-07-07 00:03:24.953 [INFO][6043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:24.957458 containerd[1551]: 2025-07-07 00:03:24.955 [INFO][6036] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c" Jul 7 00:03:24.958554 containerd[1551]: time="2025-07-07T00:03:24.957863584Z" level=info msg="TearDown network for sandbox \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\" successfully" Jul 7 00:03:24.960297 containerd[1551]: time="2025-07-07T00:03:24.960268941Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:03:24.960364 containerd[1551]: time="2025-07-07T00:03:24.960316803Z" level=info msg="RemovePodSandbox \"812729c45ae5b254ad60970ee49a5d9d762cb6a4ce0b5df90bcbc79d0b4c460c\" returns successfully" Jul 7 00:03:24.960710 containerd[1551]: time="2025-07-07T00:03:24.960687942Z" level=info msg="StopPodSandbox for \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\"" Jul 7 00:03:25.018309 containerd[1551]: 2025-07-07 00:03:24.993 [WARNING][6057] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0", GenerateName:"calico-kube-controllers-6c697bf8b4-", Namespace:"calico-system", SelfLink:"", UID:"4d25b825-a10e-406a-bd66-7d8888f8d3a4", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c697bf8b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29", Pod:"calico-kube-controllers-6c697bf8b4-g6224", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ef5609b369", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:25.018309 containerd[1551]: 2025-07-07 00:03:24.993 [INFO][6057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Jul 7 00:03:25.018309 containerd[1551]: 2025-07-07 00:03:24.993 [INFO][6057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" iface="eth0" netns="" Jul 7 00:03:25.018309 containerd[1551]: 2025-07-07 00:03:24.993 [INFO][6057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Jul 7 00:03:25.018309 containerd[1551]: 2025-07-07 00:03:24.993 [INFO][6057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Jul 7 00:03:25.018309 containerd[1551]: 2025-07-07 00:03:25.009 [INFO][6064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" HandleID="k8s-pod-network.df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Workload="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:03:25.018309 containerd[1551]: 2025-07-07 00:03:25.010 [INFO][6064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:25.018309 containerd[1551]: 2025-07-07 00:03:25.010 [INFO][6064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:03:25.018309 containerd[1551]: 2025-07-07 00:03:25.014 [WARNING][6064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" HandleID="k8s-pod-network.df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Workload="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:03:25.018309 containerd[1551]: 2025-07-07 00:03:25.014 [INFO][6064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" HandleID="k8s-pod-network.df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Workload="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:03:25.018309 containerd[1551]: 2025-07-07 00:03:25.014 [INFO][6064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:25.018309 containerd[1551]: 2025-07-07 00:03:25.016 [INFO][6057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Jul 7 00:03:25.018309 containerd[1551]: time="2025-07-07T00:03:25.018232577Z" level=info msg="TearDown network for sandbox \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\" successfully" Jul 7 00:03:25.018309 containerd[1551]: time="2025-07-07T00:03:25.018247626Z" level=info msg="StopPodSandbox for \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\" returns successfully" Jul 7 00:03:25.019386 containerd[1551]: time="2025-07-07T00:03:25.018994344Z" level=info msg="RemovePodSandbox for \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\"" Jul 7 00:03:25.019386 containerd[1551]: time="2025-07-07T00:03:25.019012146Z" level=info msg="Forcibly stopping sandbox \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\"" Jul 7 00:03:25.081011 containerd[1551]: 2025-07-07 00:03:25.057 [WARNING][6078] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0", GenerateName:"calico-kube-controllers-6c697bf8b4-", Namespace:"calico-system", SelfLink:"", UID:"4d25b825-a10e-406a-bd66-7d8888f8d3a4", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c697bf8b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e441111bbb8d35e8494e03ddd6d1d8150101663d8719efe0852a531bf49af29", Pod:"calico-kube-controllers-6c697bf8b4-g6224", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ef5609b369", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:03:25.081011 containerd[1551]: 2025-07-07 00:03:25.057 [INFO][6078] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Jul 7 00:03:25.081011 containerd[1551]: 2025-07-07 00:03:25.057 [INFO][6078] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" iface="eth0" netns="" Jul 7 00:03:25.081011 containerd[1551]: 2025-07-07 00:03:25.057 [INFO][6078] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Jul 7 00:03:25.081011 containerd[1551]: 2025-07-07 00:03:25.057 [INFO][6078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Jul 7 00:03:25.081011 containerd[1551]: 2025-07-07 00:03:25.073 [INFO][6086] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" HandleID="k8s-pod-network.df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Workload="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:03:25.081011 containerd[1551]: 2025-07-07 00:03:25.074 [INFO][6086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:25.081011 containerd[1551]: 2025-07-07 00:03:25.074 [INFO][6086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:03:25.081011 containerd[1551]: 2025-07-07 00:03:25.077 [WARNING][6086] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" HandleID="k8s-pod-network.df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Workload="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:03:25.081011 containerd[1551]: 2025-07-07 00:03:25.077 [INFO][6086] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" HandleID="k8s-pod-network.df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Workload="localhost-k8s-calico--kube--controllers--6c697bf8b4--g6224-eth0" Jul 7 00:03:25.081011 containerd[1551]: 2025-07-07 00:03:25.078 [INFO][6086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:25.081011 containerd[1551]: 2025-07-07 00:03:25.079 [INFO][6078] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9" Jul 7 00:03:25.082056 containerd[1551]: time="2025-07-07T00:03:25.081175224Z" level=info msg="TearDown network for sandbox \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\" successfully" Jul 7 00:03:25.083547 containerd[1551]: time="2025-07-07T00:03:25.083533651Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:03:25.083671 containerd[1551]: time="2025-07-07T00:03:25.083612025Z" level=info msg="RemovePodSandbox \"df6bf90dfbb0c056e4db8fe68964fa6dd79d438a20d1bb4a1bc5ddd54f6b0ba9\" returns successfully" Jul 7 00:03:25.083941 containerd[1551]: time="2025-07-07T00:03:25.083923349Z" level=info msg="StopPodSandbox for \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\"" Jul 7 00:03:25.127206 containerd[1551]: 2025-07-07 00:03:25.104 [WARNING][6117] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" WorkloadEndpoint="localhost-k8s-whisker--7d6c4dd986--p2dsm-eth0" Jul 7 00:03:25.127206 containerd[1551]: 2025-07-07 00:03:25.104 [INFO][6117] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Jul 7 00:03:25.127206 containerd[1551]: 2025-07-07 00:03:25.104 [INFO][6117] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" iface="eth0" netns="" Jul 7 00:03:25.127206 containerd[1551]: 2025-07-07 00:03:25.104 [INFO][6117] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Jul 7 00:03:25.127206 containerd[1551]: 2025-07-07 00:03:25.104 [INFO][6117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Jul 7 00:03:25.127206 containerd[1551]: 2025-07-07 00:03:25.120 [INFO][6124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" HandleID="k8s-pod-network.d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Workload="localhost-k8s-whisker--7d6c4dd986--p2dsm-eth0" Jul 7 00:03:25.127206 containerd[1551]: 2025-07-07 00:03:25.120 [INFO][6124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:25.127206 containerd[1551]: 2025-07-07 00:03:25.120 [INFO][6124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:03:25.127206 containerd[1551]: 2025-07-07 00:03:25.123 [WARNING][6124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" HandleID="k8s-pod-network.d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Workload="localhost-k8s-whisker--7d6c4dd986--p2dsm-eth0" Jul 7 00:03:25.127206 containerd[1551]: 2025-07-07 00:03:25.124 [INFO][6124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" HandleID="k8s-pod-network.d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Workload="localhost-k8s-whisker--7d6c4dd986--p2dsm-eth0" Jul 7 00:03:25.127206 containerd[1551]: 2025-07-07 00:03:25.124 [INFO][6124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:25.127206 containerd[1551]: 2025-07-07 00:03:25.126 [INFO][6117] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Jul 7 00:03:25.127478 containerd[1551]: time="2025-07-07T00:03:25.127245783Z" level=info msg="TearDown network for sandbox \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\" successfully" Jul 7 00:03:25.127478 containerd[1551]: time="2025-07-07T00:03:25.127262332Z" level=info msg="StopPodSandbox for \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\" returns successfully" Jul 7 00:03:25.127586 containerd[1551]: time="2025-07-07T00:03:25.127571529Z" level=info msg="RemovePodSandbox for \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\"" Jul 7 00:03:25.127586 containerd[1551]: time="2025-07-07T00:03:25.127589732Z" level=info msg="Forcibly stopping sandbox \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\"" Jul 7 00:03:25.169194 containerd[1551]: 2025-07-07 00:03:25.148 [WARNING][6138] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" WorkloadEndpoint="localhost-k8s-whisker--7d6c4dd986--p2dsm-eth0" Jul 7 00:03:25.169194 containerd[1551]: 2025-07-07 00:03:25.148 [INFO][6138] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Jul 7 00:03:25.169194 containerd[1551]: 2025-07-07 00:03:25.148 [INFO][6138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" iface="eth0" netns="" Jul 7 00:03:25.169194 containerd[1551]: 2025-07-07 00:03:25.148 [INFO][6138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Jul 7 00:03:25.169194 containerd[1551]: 2025-07-07 00:03:25.148 [INFO][6138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Jul 7 00:03:25.169194 containerd[1551]: 2025-07-07 00:03:25.161 [INFO][6145] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" HandleID="k8s-pod-network.d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Workload="localhost-k8s-whisker--7d6c4dd986--p2dsm-eth0" Jul 7 00:03:25.169194 containerd[1551]: 2025-07-07 00:03:25.162 [INFO][6145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:03:25.169194 containerd[1551]: 2025-07-07 00:03:25.162 [INFO][6145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:03:25.169194 containerd[1551]: 2025-07-07 00:03:25.165 [WARNING][6145] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" HandleID="k8s-pod-network.d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Workload="localhost-k8s-whisker--7d6c4dd986--p2dsm-eth0" Jul 7 00:03:25.169194 containerd[1551]: 2025-07-07 00:03:25.165 [INFO][6145] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" HandleID="k8s-pod-network.d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Workload="localhost-k8s-whisker--7d6c4dd986--p2dsm-eth0" Jul 7 00:03:25.169194 containerd[1551]: 2025-07-07 00:03:25.166 [INFO][6145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:03:25.169194 containerd[1551]: 2025-07-07 00:03:25.167 [INFO][6138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680" Jul 7 00:03:25.169854 containerd[1551]: time="2025-07-07T00:03:25.169216863Z" level=info msg="TearDown network for sandbox \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\" successfully" Jul 7 00:03:25.171277 containerd[1551]: time="2025-07-07T00:03:25.171257792Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:03:25.171319 containerd[1551]: time="2025-07-07T00:03:25.171294171Z" level=info msg="RemovePodSandbox \"d3751a532a623b6bcc01faa10f06123f6ceb511ecd861b6462c3effad6f4d680\" returns successfully" Jul 7 00:03:32.759482 systemd[1]: Started sshd@7-139.178.70.105:22-139.178.68.195:34620.service - OpenSSH per-connection server daemon (139.178.68.195:34620). Jul 7 00:03:33.749148 sshd[6155]: Accepted publickey for core from 139.178.68.195 port 34620 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k Jul 7 00:03:33.790403 sshd[6155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:03:33.879947 systemd-logind[1521]: New session 10 of user core. Jul 7 00:03:33.885269 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:03:34.655313 sshd[6155]: pam_unix(sshd:session): session closed for user core Jul 7 00:03:34.669886 systemd[1]: sshd@7-139.178.70.105:22-139.178.68.195:34620.service: Deactivated successfully. Jul 7 00:03:34.670041 systemd-logind[1521]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:03:34.672648 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:03:34.678161 systemd-logind[1521]: Removed session 10. Jul 7 00:03:39.674417 systemd[1]: Started sshd@8-139.178.70.105:22-139.178.68.195:38076.service - OpenSSH per-connection server daemon (139.178.68.195:38076). Jul 7 00:03:39.794741 sshd[6173]: Accepted publickey for core from 139.178.68.195 port 38076 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k Jul 7 00:03:39.797114 sshd[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:03:39.800771 systemd-logind[1521]: New session 11 of user core. Jul 7 00:03:39.805238 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:03:39.962256 sshd[6173]: pam_unix(sshd:session): session closed for user core Jul 7 00:03:39.964273 systemd-logind[1521]: Session 11 logged out. Waiting for processes to exit. 
Jul 7 00:03:39.964449 systemd[1]: sshd@8-139.178.70.105:22-139.178.68.195:38076.service: Deactivated successfully.
Jul 7 00:03:39.965701 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 00:03:39.967498 systemd-logind[1521]: Removed session 11.
Jul 7 00:03:44.984384 systemd[1]: Started sshd@9-139.178.70.105:22-139.178.68.195:38090.service - OpenSSH per-connection server daemon (139.178.68.195:38090).
Jul 7 00:03:46.182173 sshd[6213]: Accepted publickey for core from 139.178.68.195 port 38090 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 7 00:03:46.226739 sshd[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:03:46.258781 systemd-logind[1521]: New session 12 of user core.
Jul 7 00:03:46.263227 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 00:03:47.907424 sshd[6213]: pam_unix(sshd:session): session closed for user core
Jul 7 00:03:47.913813 systemd[1]: sshd@9-139.178.70.105:22-139.178.68.195:38090.service: Deactivated successfully.
Jul 7 00:03:47.915617 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 00:03:47.917198 systemd-logind[1521]: Session 12 logged out. Waiting for processes to exit.
Jul 7 00:03:47.923370 systemd[1]: Started sshd@10-139.178.70.105:22-139.178.68.195:38094.service - OpenSSH per-connection server daemon (139.178.68.195:38094).
Jul 7 00:03:47.930053 systemd-logind[1521]: Removed session 12.
Jul 7 00:03:47.972441 sshd[6227]: Accepted publickey for core from 139.178.68.195 port 38094 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 7 00:03:47.973423 sshd[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:03:47.978397 systemd-logind[1521]: New session 13 of user core.
Jul 7 00:03:47.983456 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 00:03:48.239325 sshd[6227]: pam_unix(sshd:session): session closed for user core
Jul 7 00:03:48.247936 systemd[1]: sshd@10-139.178.70.105:22-139.178.68.195:38094.service: Deactivated successfully.
Jul 7 00:03:48.249166 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 00:03:48.250227 systemd-logind[1521]: Session 13 logged out. Waiting for processes to exit.
Jul 7 00:03:48.255334 systemd[1]: Started sshd@11-139.178.70.105:22-139.178.68.195:46040.service - OpenSSH per-connection server daemon (139.178.68.195:46040).
Jul 7 00:03:48.257792 systemd-logind[1521]: Removed session 13.
Jul 7 00:03:48.292121 sshd[6238]: Accepted publickey for core from 139.178.68.195 port 46040 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 7 00:03:48.293487 sshd[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:03:48.296710 systemd-logind[1521]: New session 14 of user core.
Jul 7 00:03:48.302267 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 00:03:48.435166 sshd[6238]: pam_unix(sshd:session): session closed for user core
Jul 7 00:03:48.438090 systemd[1]: sshd@11-139.178.70.105:22-139.178.68.195:46040.service: Deactivated successfully.
Jul 7 00:03:48.439676 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 00:03:48.440646 systemd-logind[1521]: Session 14 logged out. Waiting for processes to exit.
Jul 7 00:03:48.441714 systemd-logind[1521]: Removed session 14.
Jul 7 00:03:53.577352 systemd[1]: Started sshd@12-139.178.70.105:22-139.178.68.195:46054.service - OpenSSH per-connection server daemon (139.178.68.195:46054).
Jul 7 00:03:53.922269 sshd[6301]: Accepted publickey for core from 139.178.68.195 port 46054 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 7 00:03:53.933306 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:03:53.958798 systemd-logind[1521]: New session 15 of user core.
Jul 7 00:03:53.963966 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 00:03:55.291878 sshd[6301]: pam_unix(sshd:session): session closed for user core
Jul 7 00:03:55.316270 systemd[1]: sshd@12-139.178.70.105:22-139.178.68.195:46054.service: Deactivated successfully.
Jul 7 00:03:55.318242 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 00:03:55.320489 systemd-logind[1521]: Session 15 logged out. Waiting for processes to exit.
Jul 7 00:03:55.321031 systemd-logind[1521]: Removed session 15.
Jul 7 00:04:00.392342 systemd[1]: Started sshd@13-139.178.70.105:22-139.178.68.195:40356.service - OpenSSH per-connection server daemon (139.178.68.195:40356).
Jul 7 00:04:00.607424 sshd[6315]: Accepted publickey for core from 139.178.68.195 port 40356 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 7 00:04:00.609020 sshd[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:04:00.613369 systemd-logind[1521]: New session 16 of user core.
Jul 7 00:04:00.620143 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 00:04:01.420427 sshd[6315]: pam_unix(sshd:session): session closed for user core
Jul 7 00:04:01.424046 systemd[1]: sshd@13-139.178.70.105:22-139.178.68.195:40356.service: Deactivated successfully.
Jul 7 00:04:01.436876 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 00:04:01.438247 systemd-logind[1521]: Session 16 logged out. Waiting for processes to exit.
Jul 7 00:04:01.439109 systemd-logind[1521]: Removed session 16.
Jul 7 00:04:06.440340 systemd[1]: Started sshd@14-139.178.70.105:22-139.178.68.195:40358.service - OpenSSH per-connection server daemon (139.178.68.195:40358).
Jul 7 00:04:06.724137 sshd[6329]: Accepted publickey for core from 139.178.68.195 port 40358 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 7 00:04:06.727151 sshd[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:04:06.732910 systemd-logind[1521]: New session 17 of user core.
Jul 7 00:04:06.737699 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 00:04:08.253218 sshd[6329]: pam_unix(sshd:session): session closed for user core
Jul 7 00:04:08.259907 systemd[1]: sshd@14-139.178.70.105:22-139.178.68.195:40358.service: Deactivated successfully.
Jul 7 00:04:08.262032 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 00:04:08.263677 systemd-logind[1521]: Session 17 logged out. Waiting for processes to exit.
Jul 7 00:04:08.270800 systemd[1]: Started sshd@15-139.178.70.105:22-139.178.68.195:46702.service - OpenSSH per-connection server daemon (139.178.68.195:46702).
Jul 7 00:04:08.272887 systemd-logind[1521]: Removed session 17.
Jul 7 00:04:08.337240 sshd[6342]: Accepted publickey for core from 139.178.68.195 port 46702 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 7 00:04:08.338249 sshd[6342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:04:08.341556 systemd-logind[1521]: New session 18 of user core.
Jul 7 00:04:08.348225 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 00:04:08.760748 sshd[6342]: pam_unix(sshd:session): session closed for user core
Jul 7 00:04:08.766344 systemd[1]: sshd@15-139.178.70.105:22-139.178.68.195:46702.service: Deactivated successfully.
Jul 7 00:04:08.768100 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 00:04:08.771873 systemd-logind[1521]: Session 18 logged out. Waiting for processes to exit.
Jul 7 00:04:08.779847 systemd[1]: Started sshd@16-139.178.70.105:22-139.178.68.195:46710.service - OpenSSH per-connection server daemon (139.178.68.195:46710).
Jul 7 00:04:08.782080 systemd-logind[1521]: Removed session 18.
Jul 7 00:04:08.843725 sshd[6353]: Accepted publickey for core from 139.178.68.195 port 46710 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 7 00:04:08.844788 sshd[6353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:04:08.847552 systemd-logind[1521]: New session 19 of user core.
Jul 7 00:04:08.853461 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 00:04:10.001930 sshd[6353]: pam_unix(sshd:session): session closed for user core
Jul 7 00:04:10.006246 systemd[1]: Started sshd@17-139.178.70.105:22-139.178.68.195:46724.service - OpenSSH per-connection server daemon (139.178.68.195:46724).
Jul 7 00:04:10.012710 systemd[1]: sshd@16-139.178.70.105:22-139.178.68.195:46710.service: Deactivated successfully.
Jul 7 00:04:10.013802 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 00:04:10.014836 systemd-logind[1521]: Session 19 logged out. Waiting for processes to exit.
Jul 7 00:04:10.016930 systemd-logind[1521]: Removed session 19.
Jul 7 00:04:10.132527 sshd[6369]: Accepted publickey for core from 139.178.68.195 port 46724 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 7 00:04:10.142474 sshd[6369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:04:10.151212 systemd-logind[1521]: New session 20 of user core.
Jul 7 00:04:10.156282 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 00:04:13.126446 sshd[6369]: pam_unix(sshd:session): session closed for user core
Jul 7 00:04:13.195722 systemd[1]: sshd@17-139.178.70.105:22-139.178.68.195:46724.service: Deactivated successfully.
Jul 7 00:04:13.199100 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 00:04:13.202402 systemd-logind[1521]: Session 20 logged out. Waiting for processes to exit.
Jul 7 00:04:13.221367 systemd[1]: Started sshd@18-139.178.70.105:22-139.178.68.195:46730.service - OpenSSH per-connection server daemon (139.178.68.195:46730).
Jul 7 00:04:13.224691 systemd-logind[1521]: Removed session 20.
Jul 7 00:04:13.403760 sshd[6426]: Accepted publickey for core from 139.178.68.195 port 46730 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 7 00:04:13.411740 sshd[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:04:13.424806 systemd-logind[1521]: New session 21 of user core.
Jul 7 00:04:13.427601 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 00:04:15.336525 sshd[6426]: pam_unix(sshd:session): session closed for user core
Jul 7 00:04:15.348429 systemd[1]: sshd@18-139.178.70.105:22-139.178.68.195:46730.service: Deactivated successfully.
Jul 7 00:04:15.350075 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 00:04:15.352672 systemd-logind[1521]: Session 21 logged out. Waiting for processes to exit.
Jul 7 00:04:15.354510 systemd-logind[1521]: Removed session 21.
Jul 7 00:04:20.355152 systemd[1]: run-containerd-runc-k8s.io-5705b7b2a6a88c64e8d153b17788ac4198a761e76802593edfa38cbca088f0f4-runc.IQpbde.mount: Deactivated successfully.
Jul 7 00:04:20.404396 systemd[1]: Started sshd@19-139.178.70.105:22-139.178.68.195:49576.service - OpenSSH per-connection server daemon (139.178.68.195:49576).
Jul 7 00:04:20.571608 sshd[6483]: Accepted publickey for core from 139.178.68.195 port 49576 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 7 00:04:20.574465 sshd[6483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:04:20.585874 systemd-logind[1521]: New session 22 of user core.
Jul 7 00:04:20.590439 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 00:04:21.654297 sshd[6483]: pam_unix(sshd:session): session closed for user core
Jul 7 00:04:21.662529 systemd[1]: sshd@19-139.178.70.105:22-139.178.68.195:49576.service: Deactivated successfully.
Jul 7 00:04:21.664399 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 00:04:21.665556 systemd-logind[1521]: Session 22 logged out. Waiting for processes to exit.
Jul 7 00:04:21.666579 systemd-logind[1521]: Removed session 22.