Jul 11 00:10:41.751355 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 22:46:23 -00 2025 Jul 11 00:10:41.751373 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1 Jul 11 00:10:41.751379 kernel: Disabled fast string operations Jul 11 00:10:41.751383 kernel: BIOS-provided physical RAM map: Jul 11 00:10:41.751387 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jul 11 00:10:41.751391 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jul 11 00:10:41.751397 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jul 11 00:10:41.751402 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jul 11 00:10:41.751406 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jul 11 00:10:41.751410 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jul 11 00:10:41.751414 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jul 11 00:10:41.751418 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jul 11 00:10:41.751422 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jul 11 00:10:41.751427 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 11 00:10:41.751433 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jul 11 00:10:41.751438 kernel: NX (Execute Disable) protection: active Jul 11 00:10:41.751443 kernel: APIC: Static calls initialized Jul 11 00:10:41.751448 kernel: SMBIOS 2.7 present. Jul 11 00:10:41.751453 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jul 11 00:10:41.751457 kernel: vmware: hypercall mode: 0x00 Jul 11 00:10:41.751462 kernel: Hypervisor detected: VMware Jul 11 00:10:41.751467 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jul 11 00:10:41.751473 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jul 11 00:10:41.751477 kernel: vmware: using clock offset of 2777504175 ns Jul 11 00:10:41.751483 kernel: tsc: Detected 3408.000 MHz processor Jul 11 00:10:41.751488 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 11 00:10:41.751493 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 11 00:10:41.751498 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jul 11 00:10:41.751503 kernel: total RAM covered: 3072M Jul 11 00:10:41.751508 kernel: Found optimal setting for mtrr clean up Jul 11 00:10:41.751515 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jul 11 00:10:41.751521 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jul 11 00:10:41.751526 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 11 00:10:41.751531 kernel: Using GB pages for direct mapping Jul 11 00:10:41.751536 kernel: ACPI: Early table checksum verification disabled Jul 11 00:10:41.751541 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jul 11 00:10:41.751546 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jul 11 00:10:41.751551 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jul 11 00:10:41.751556 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jul 11 00:10:41.751561 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 11 00:10:41.751568 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 11 00:10:41.751573 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jul 11 00:10:41.751578 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Jul 11 00:10:41.751584 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jul 11 00:10:41.751589 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jul 11 00:10:41.751595 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jul 11 00:10:41.751600 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jul 11 00:10:41.751605 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jul 11 00:10:41.751611 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jul 11 00:10:41.751616 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 11 00:10:41.751621 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 11 00:10:41.751626 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jul 11 00:10:41.751631 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jul 11 00:10:41.751636 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jul 11 00:10:41.751641 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jul 11 00:10:41.751648 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jul 11 00:10:41.751653 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jul 11 00:10:41.751658 kernel: system APIC only can use physical flat Jul 11 00:10:41.751663 kernel: APIC: Switched APIC routing to: physical flat Jul 11 00:10:41.751668 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 11 00:10:41.751673 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jul 11 00:10:41.751678 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jul 11 00:10:41.751683 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jul 11 00:10:41.751688 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jul 11 00:10:41.751694 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jul 11 00:10:41.751699 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jul 11 00:10:41.751704 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jul 11 00:10:41.751709 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jul 11 00:10:41.751714 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jul 11 00:10:41.751719 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jul 11 00:10:41.751724 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jul 11 00:10:41.751728 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jul 11 00:10:41.751733 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jul 11 00:10:41.751738 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jul 11 00:10:41.751743 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jul 11 00:10:41.751749 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jul 11 00:10:41.751755 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jul 11 00:10:41.751760 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jul 11 00:10:41.751765 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jul 11 00:10:41.751770 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jul 11 00:10:41.751775 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jul 11 00:10:41.751780 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jul 11 00:10:41.751785 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jul 11 00:10:41.751791 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jul 11 00:10:41.751799 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jul 11 00:10:41.751809 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jul 11 00:10:41.751814 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jul 11 00:10:41.751819 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jul 11 00:10:41.751824 kernel: SRAT: PXM 0 -> APIC 0x3a -> 
Node 0 Jul 11 00:10:41.751829 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jul 11 00:10:41.751836 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jul 11 00:10:41.751841 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jul 11 00:10:41.751846 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jul 11 00:10:41.751851 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jul 11 00:10:41.751856 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jul 11 00:10:41.751862 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jul 11 00:10:41.751867 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jul 11 00:10:41.751873 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jul 11 00:10:41.751878 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jul 11 00:10:41.751883 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jul 11 00:10:41.751888 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jul 11 00:10:41.751893 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jul 11 00:10:41.751898 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jul 11 00:10:41.751903 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jul 11 00:10:41.751908 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jul 11 00:10:41.751913 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jul 11 00:10:41.751919 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jul 11 00:10:41.751924 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jul 11 00:10:41.751929 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jul 11 00:10:41.751934 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jul 11 00:10:41.751939 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jul 11 00:10:41.751944 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jul 11 00:10:41.751949 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Jul 11 00:10:41.751954 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jul 11 00:10:41.751959 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jul 11 00:10:41.751965 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jul 11 00:10:41.751970 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jul 11 00:10:41.751975 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jul 11 00:10:41.751984 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jul 11 00:10:41.751990 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jul 11 00:10:41.751995 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jul 11 00:10:41.752001 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jul 11 00:10:41.752006 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jul 11 00:10:41.752011 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jul 11 00:10:41.752018 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jul 11 00:10:41.752023 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jul 11 00:10:41.752028 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jul 11 00:10:41.752034 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jul 11 00:10:41.752039 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jul 11 00:10:41.752044 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jul 11 00:10:41.752050 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jul 11 00:10:41.752055 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jul 11 00:10:41.752061 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jul 11 00:10:41.752066 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jul 11 00:10:41.752073 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jul 11 00:10:41.752078 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jul 11 00:10:41.752084 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jul 11 00:10:41.752090 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jul 11 00:10:41.752095 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jul 11 00:10:41.752101 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jul 11 00:10:41.752106 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jul 11 00:10:41.752112 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jul 11 00:10:41.752118 kernel: SRAT: PXM 0 -> 
APIC 0xa6 -> Node 0 Jul 11 00:10:41.752124 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jul 11 00:10:41.752131 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jul 11 00:10:41.752136 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jul 11 00:10:41.752142 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jul 11 00:10:41.752147 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jul 11 00:10:41.752152 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jul 11 00:10:41.752158 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jul 11 00:10:41.752163 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jul 11 00:10:41.752168 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jul 11 00:10:41.752174 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jul 11 00:10:41.752179 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jul 11 00:10:41.752184 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jul 11 00:10:41.752191 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jul 11 00:10:41.752197 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jul 11 00:10:41.752202 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jul 11 00:10:41.752207 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jul 11 00:10:41.752213 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jul 11 00:10:41.752218 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jul 11 00:10:41.752223 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jul 11 00:10:41.752229 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jul 11 00:10:41.752234 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jul 11 00:10:41.752239 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jul 11 00:10:41.754410 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jul 11 00:10:41.754423 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Jul 11 00:10:41.754429 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jul 11 00:10:41.754434 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jul 11 00:10:41.754440 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jul 11 00:10:41.754445 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jul 11 00:10:41.754450 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jul 11 00:10:41.754456 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jul 11 00:10:41.754461 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jul 11 00:10:41.754466 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jul 11 00:10:41.754476 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jul 11 00:10:41.754481 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jul 11 00:10:41.754486 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jul 11 00:10:41.754492 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jul 11 00:10:41.754497 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jul 11 00:10:41.754503 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jul 11 00:10:41.754508 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jul 11 00:10:41.754513 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jul 11 00:10:41.754519 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jul 11 00:10:41.754524 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jul 11 00:10:41.754531 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jul 11 00:10:41.754536 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jul 11 00:10:41.754542 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 11 00:10:41.754547 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 11 00:10:41.754553 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jul 11 00:10:41.754559 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jul 11 00:10:41.754564 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jul 11 00:10:41.754570 kernel: Zone ranges: Jul 11 00:10:41.754576 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 11 00:10:41.754583 kernel: 
DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jul 11 00:10:41.754589 kernel: Normal empty Jul 11 00:10:41.754594 kernel: Movable zone start for each node Jul 11 00:10:41.754600 kernel: Early memory node ranges Jul 11 00:10:41.754605 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jul 11 00:10:41.754610 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jul 11 00:10:41.754616 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jul 11 00:10:41.754622 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jul 11 00:10:41.754631 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 11 00:10:41.754637 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jul 11 00:10:41.754657 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jul 11 00:10:41.754664 kernel: ACPI: PM-Timer IO Port: 0x1008 Jul 11 00:10:41.754670 kernel: system APIC only can use physical flat Jul 11 00:10:41.754675 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jul 11 00:10:41.754681 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 11 00:10:41.754687 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 11 00:10:41.754692 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 11 00:10:41.754698 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 11 00:10:41.754703 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 11 00:10:41.754708 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jul 11 00:10:41.754716 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jul 11 00:10:41.754722 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 11 00:10:41.754728 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jul 11 00:10:41.754733 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 11 00:10:41.754739 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 11 00:10:41.754744 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 11 00:10:41.754749 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 11 00:10:41.754755 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 11 00:10:41.754761 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 11 00:10:41.754767 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 11 00:10:41.754779 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jul 11 00:10:41.754785 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jul 11 00:10:41.754790 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jul 11 00:10:41.754796 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jul 11 00:10:41.754801 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jul 11 00:10:41.754806 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jul 11 00:10:41.754812 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jul 11 00:10:41.754817 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jul 11 00:10:41.754823 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jul 11 00:10:41.754830 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jul 11 00:10:41.754836 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jul 11 00:10:41.754841 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jul 11 00:10:41.754846 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jul 11 00:10:41.754852 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jul 11 00:10:41.754857 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jul 11 00:10:41.754863 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jul 11 00:10:41.754868 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jul 11 00:10:41.754873 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jul 11 00:10:41.754880 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jul 11 00:10:41.754885 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jul 11 00:10:41.754891 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jul 11 00:10:41.754896 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jul 11 00:10:41.754905 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jul 11 00:10:41.754911 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jul 11 00:10:41.754916 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jul 11 00:10:41.754922 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jul 11 00:10:41.754927 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jul 11 00:10:41.754936 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jul 11 00:10:41.754943 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jul 11 00:10:41.754949 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jul 11 00:10:41.754954 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jul 11 00:10:41.754959 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jul 11 00:10:41.754965 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jul 11 00:10:41.754970 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jul 11 00:10:41.754976 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jul 11 00:10:41.754981 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jul 11 00:10:41.754987 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jul 11 00:10:41.754992 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jul 11 00:10:41.754998 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jul 11 00:10:41.755004 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jul 11 00:10:41.755009 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jul 11 00:10:41.755015 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jul 11 00:10:41.755020 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jul 11 00:10:41.755025 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jul 11 00:10:41.755031 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jul 11 00:10:41.755036 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jul 11 00:10:41.755042 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jul 11 00:10:41.755048 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jul 11 00:10:41.755054 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jul 11 00:10:41.755059 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jul 11 00:10:41.755064 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jul 11 00:10:41.755070 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jul 11 00:10:41.755075 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jul 11 00:10:41.755080 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jul 11 00:10:41.755086 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jul 11 00:10:41.755091 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jul 11 00:10:41.755097 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jul 11 00:10:41.755103 
kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jul 11 00:10:41.755108 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jul 11 00:10:41.755114 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jul 11 00:10:41.755119 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jul 11 00:10:41.755125 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jul 11 00:10:41.755130 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jul 11 00:10:41.755136 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jul 11 00:10:41.755141 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jul 11 00:10:41.755147 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jul 11 00:10:41.755152 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jul 11 00:10:41.755159 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jul 11 00:10:41.755164 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jul 11 00:10:41.755170 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jul 11 00:10:41.755175 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jul 11 00:10:41.755181 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jul 11 00:10:41.755186 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jul 11 00:10:41.755191 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jul 11 00:10:41.755197 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jul 11 00:10:41.755202 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jul 11 00:10:41.755209 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jul 11 00:10:41.755214 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jul 11 00:10:41.755220 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jul 11 00:10:41.755226 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jul 11 00:10:41.755231 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jul 11 00:10:41.755236 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jul 11 00:10:41.755242 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jul 11 00:10:41.756317 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jul 11 00:10:41.756327 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jul 11 00:10:41.756333 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jul 11 00:10:41.756341 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jul 11 00:10:41.756347 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jul 11 00:10:41.756352 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jul 11 00:10:41.756358 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jul 11 00:10:41.756363 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jul 11 00:10:41.756368 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jul 11 00:10:41.756374 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jul 11 00:10:41.756379 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jul 11 00:10:41.756385 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jul 11 00:10:41.756391 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jul 11 00:10:41.756397 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jul 11 00:10:41.756403 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jul 11 00:10:41.756408 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jul 11 00:10:41.756414 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jul 11 
00:10:41.756419 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jul 11 00:10:41.756425 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jul 11 00:10:41.756431 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jul 11 00:10:41.756436 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jul 11 00:10:41.756442 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jul 11 00:10:41.756448 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jul 11 00:10:41.756454 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jul 11 00:10:41.756459 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jul 11 00:10:41.756464 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jul 11 00:10:41.756470 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jul 11 00:10:41.756475 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jul 11 00:10:41.756480 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jul 11 00:10:41.756486 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jul 11 00:10:41.756492 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 11 00:10:41.756497 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jul 11 00:10:41.756504 kernel: TSC deadline timer available Jul 11 00:10:41.756510 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jul 11 00:10:41.756515 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jul 11 00:10:41.756521 kernel: Booting paravirtualized kernel on VMware hypervisor Jul 11 00:10:41.756526 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 11 00:10:41.756532 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jul 11 00:10:41.756538 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Jul 11 00:10:41.756543 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Jul 11 00:10:41.756549 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jul 11 00:10:41.756567 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jul 11 00:10:41.756574 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jul 11 00:10:41.756580 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jul 11 00:10:41.756591 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jul 11 00:10:41.756606 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jul 11 00:10:41.756613 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jul 11 00:10:41.756619 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jul 11 00:10:41.756625 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jul 11 00:10:41.756631 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jul 11 00:10:41.756637 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jul 11 00:10:41.756643 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jul 11 00:10:41.756649 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jul 11 00:10:41.756655 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jul 11 00:10:41.756660 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jul 11 00:10:41.756666 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jul 11 00:10:41.756673 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 
flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1 Jul 11 00:10:41.756679 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 11 00:10:41.756686 kernel: random: crng init done Jul 11 00:10:41.756692 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jul 11 00:10:41.756698 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jul 11 00:10:41.756704 kernel: printk: log_buf_len min size: 262144 bytes Jul 11 00:10:41.756710 kernel: printk: log_buf_len: 1048576 bytes Jul 11 00:10:41.756715 kernel: printk: early log buf free: 239648(91%) Jul 11 00:10:41.756721 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 11 00:10:41.756727 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 11 00:10:41.756733 kernel: Fallback order for Node 0: 0 Jul 11 00:10:41.756740 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jul 11 00:10:41.756745 kernel: Policy zone: DMA32 Jul 11 00:10:41.756751 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 11 00:10:41.756758 kernel: Memory: 1936348K/2096628K available (12288K kernel code, 2295K rwdata, 22744K rodata, 42872K init, 2320K bss, 160020K reserved, 0K cma-reserved) Jul 11 00:10:41.756765 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jul 11 00:10:41.756772 kernel: ftrace: allocating 37966 entries in 149 pages Jul 11 00:10:41.756778 kernel: ftrace: allocated 149 pages with 4 groups Jul 11 00:10:41.756784 kernel: Dynamic Preempt: voluntary Jul 11 00:10:41.756789 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 11 00:10:41.756796 kernel: rcu: RCU event tracing is enabled. Jul 11 00:10:41.756802 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jul 11 00:10:41.756808 kernel: Trampoline variant of Tasks RCU enabled. Jul 11 00:10:41.756814 kernel: Rude variant of Tasks RCU enabled. Jul 11 00:10:41.756819 kernel: Tracing variant of Tasks RCU enabled. Jul 11 00:10:41.756825 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 11 00:10:41.756832 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jul 11 00:10:41.756838 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jul 11 00:10:41.756844 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jul 11 00:10:41.756850 kernel: Console: colour VGA+ 80x25 Jul 11 00:10:41.756856 kernel: printk: console [tty0] enabled Jul 11 00:10:41.756862 kernel: printk: console [ttyS0] enabled Jul 11 00:10:41.756867 kernel: ACPI: Core revision 20230628 Jul 11 00:10:41.756873 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jul 11 00:10:41.756880 kernel: APIC: Switch to symmetric I/O mode setup Jul 11 00:10:41.756887 kernel: x2apic enabled Jul 11 00:10:41.756893 kernel: APIC: Switched APIC routing to: physical x2apic Jul 11 00:10:41.756899 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 11 00:10:41.756905 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 11 00:10:41.756911 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jul 11 00:10:41.756918 kernel: Disabled fast string operations Jul 11 00:10:41.756924 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 11 00:10:41.756930 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 11 00:10:41.756935 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 11 00:10:41.756943 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 11 00:10:41.756949 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 11 00:10:41.756954 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 11 00:10:41.756960 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 11 00:10:41.756966 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 11 00:10:41.756972 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 11 00:10:41.756978 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 11 00:10:41.756984 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 11 00:10:41.756990 kernel: SRBDS: Unknown: Dependent on hypervisor status Jul 11 00:10:41.756997 kernel: GDS: Unknown: Dependent on hypervisor status Jul 11 00:10:41.757002 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 11 00:10:41.757008 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 11 00:10:41.757014 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 11 00:10:41.757020 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 11 00:10:41.757026 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 11 00:10:41.757032 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 11 00:10:41.757037 kernel: Freeing SMP alternatives memory: 32K Jul 11 00:10:41.757043 kernel: pid_max: default: 131072 minimum: 1024 Jul 11 00:10:41.757050 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 11 00:10:41.757056 kernel: landlock: Up and running. Jul 11 00:10:41.757062 kernel: SELinux: Initializing. Jul 11 00:10:41.757068 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 11 00:10:41.757073 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 11 00:10:41.757079 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 11 00:10:41.757085 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 11 00:10:41.757091 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 11 00:10:41.757098 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 11 00:10:41.757104 kernel: Performance Events: Skylake events, core PMU driver. 
Jul 11 00:10:41.757110 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jul 11 00:10:41.757116 kernel: core: CPUID marked event: 'instructions' unavailable Jul 11 00:10:41.757122 kernel: core: CPUID marked event: 'bus cycles' unavailable Jul 11 00:10:41.757127 kernel: core: CPUID marked event: 'cache references' unavailable Jul 11 00:10:41.757133 kernel: core: CPUID marked event: 'cache misses' unavailable Jul 11 00:10:41.757138 kernel: core: CPUID marked event: 'branch instructions' unavailable Jul 11 00:10:41.757145 kernel: core: CPUID marked event: 'branch misses' unavailable Jul 11 00:10:41.757151 kernel: ... version: 1 Jul 11 00:10:41.757158 kernel: ... bit width: 48 Jul 11 00:10:41.757163 kernel: ... generic registers: 4 Jul 11 00:10:41.757169 kernel: ... value mask: 0000ffffffffffff Jul 11 00:10:41.757175 kernel: ... max period: 000000007fffffff Jul 11 00:10:41.757181 kernel: ... fixed-purpose events: 0 Jul 11 00:10:41.757187 kernel: ... event mask: 000000000000000f Jul 11 00:10:41.757192 kernel: signal: max sigframe size: 1776 Jul 11 00:10:41.757198 kernel: rcu: Hierarchical SRCU implementation. Jul 11 00:10:41.757205 kernel: rcu: Max phase no-delay instances is 400. Jul 11 00:10:41.757211 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 11 00:10:41.757217 kernel: smp: Bringing up secondary CPUs ... Jul 11 00:10:41.757223 kernel: smpboot: x86: Booting SMP configuration: Jul 11 00:10:41.757229 kernel: .... node #0, CPUs: #1 Jul 11 00:10:41.757235 kernel: Disabled fast string operations Jul 11 00:10:41.757240 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jul 11 00:10:41.758262 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 11 00:10:41.758270 kernel: smp: Brought up 1 node, 2 CPUs Jul 11 00:10:41.758277 kernel: smpboot: Max logical packages: 128 Jul 11 00:10:41.758285 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 11 00:10:41.758293 kernel: devtmpfs: initialized Jul 11 00:10:41.758299 kernel: x86/mm: Memory block size: 128MB Jul 11 00:10:41.758305 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 11 00:10:41.758311 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 11 00:10:41.758317 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 11 00:10:41.758323 kernel: pinctrl core: initialized pinctrl subsystem Jul 11 00:10:41.758329 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 11 00:10:41.758335 kernel: audit: initializing netlink subsys (disabled) Jul 11 00:10:41.758342 kernel: audit: type=2000 audit(1752192639.087:1): state=initialized audit_enabled=0 res=1 Jul 11 00:10:41.758348 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 11 00:10:41.758354 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 11 00:10:41.758360 kernel: cpuidle: using governor menu Jul 11 00:10:41.758366 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 11 00:10:41.758372 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 11 00:10:41.758378 kernel: dca service started, version 1.12.1 Jul 11 00:10:41.758384 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jul 11 00:10:41.758390 kernel: PCI: Using configuration type 1 for base access Jul 11 00:10:41.758397 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 11 00:10:41.758403 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 11 00:10:41.758409 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 11 00:10:41.758415 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 11 00:10:41.758420 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 11 00:10:41.758427 kernel: ACPI: Added _OSI(Module Device) Jul 11 00:10:41.758432 kernel: ACPI: Added _OSI(Processor Device) Jul 11 00:10:41.758439 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 11 00:10:41.758444 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 11 00:10:41.758451 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 11 00:10:41.758457 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 11 00:10:41.758463 kernel: ACPI: Interpreter enabled Jul 11 00:10:41.758469 kernel: ACPI: PM: (supports S0 S1 S5) Jul 11 00:10:41.758475 kernel: ACPI: Using IOAPIC for interrupt routing Jul 11 00:10:41.758481 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 11 00:10:41.758487 kernel: PCI: Using E820 reservations for host bridge windows Jul 11 00:10:41.758493 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 11 00:10:41.758499 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 11 00:10:41.758594 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 11 00:10:41.758653 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 11 00:10:41.758705 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 11 00:10:41.758713 kernel: PCI host bridge to bus 0000:00 Jul 11 00:10:41.758766 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 11 00:10:41.758813 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jul 11 00:10:41.758862 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 11 00:10:41.758908 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 11 00:10:41.758964 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 11 00:10:41.759011 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 11 00:10:41.759073 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jul 11 00:10:41.759133 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jul 11 00:10:41.759194 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jul 11 00:10:41.760019 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jul 11 00:10:41.760086 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jul 11 00:10:41.760143 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 11 00:10:41.760197 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 11 00:10:41.760289 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 11 00:10:41.760349 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 11 00:10:41.760410 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jul 11 00:10:41.760463 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 11 00:10:41.760515 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 11 00:10:41.760572 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jul 11 00:10:41.760625 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jul 11 
00:10:41.760676 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jul 11 00:10:41.760734 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jul 11 00:10:41.760787 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jul 11 00:10:41.760838 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jul 11 00:10:41.760889 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jul 11 00:10:41.760940 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jul 11 00:10:41.760991 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 11 00:10:41.761047 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jul 11 00:10:41.761111 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.761163 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.761220 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.763961 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764030 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764088 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764149 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764203 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764328 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764384 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764440 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764493 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764550 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764605 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764661 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764714 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764770 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764824 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764883 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764941 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764998 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765050 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765108 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765161 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765219 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765283 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765339 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765392 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765448 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765501 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765561 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765614 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765670 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765722 kernel: pci 0000:00:17.0: 
PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765779 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765832 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765889 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765963 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.766023 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.766077 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.766133 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.766187 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.766243 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.768606 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.768666 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.768720 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.768777 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.768830 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.768886 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.768943 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.770329 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.770402 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.770463 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.770516 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.770573 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.770629 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.770685 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.770737 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.770793 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.770844 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.770899 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.770954 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.771008 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.771059 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.771117 kernel: pci_bus 0000:01: extended config space not accessible Jul 11 00:10:41.771171 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 11 00:10:41.771223 kernel: pci_bus 0000:02: extended config space not accessible Jul 11 00:10:41.771233 kernel: acpiphp: Slot [32] registered Jul 11 00:10:41.771241 kernel: acpiphp: Slot [33] registered Jul 11 00:10:41.771294 kernel: acpiphp: Slot [34] registered Jul 11 00:10:41.771302 kernel: acpiphp: Slot [35] registered Jul 11 00:10:41.771307 kernel: acpiphp: Slot [36] registered Jul 11 00:10:41.771313 kernel: acpiphp: Slot [37] registered Jul 11 00:10:41.771319 kernel: acpiphp: Slot [38] registered Jul 11 00:10:41.771325 kernel: acpiphp: Slot [39] registered Jul 11 00:10:41.771331 kernel: acpiphp: Slot [40] registered Jul 11 00:10:41.771337 kernel: acpiphp: Slot [41] registered Jul 11 00:10:41.771345 kernel: acpiphp: Slot [42] registered Jul 11 00:10:41.771351 kernel: acpiphp: Slot [43] registered Jul 11 
00:10:41.771356 kernel: acpiphp: Slot [44] registered Jul 11 00:10:41.771362 kernel: acpiphp: Slot [45] registered Jul 11 00:10:41.771368 kernel: acpiphp: Slot [46] registered Jul 11 00:10:41.771374 kernel: acpiphp: Slot [47] registered Jul 11 00:10:41.771379 kernel: acpiphp: Slot [48] registered Jul 11 00:10:41.771385 kernel: acpiphp: Slot [49] registered Jul 11 00:10:41.771391 kernel: acpiphp: Slot [50] registered Jul 11 00:10:41.771397 kernel: acpiphp: Slot [51] registered Jul 11 00:10:41.771404 kernel: acpiphp: Slot [52] registered Jul 11 00:10:41.771410 kernel: acpiphp: Slot [53] registered Jul 11 00:10:41.771416 kernel: acpiphp: Slot [54] registered Jul 11 00:10:41.771421 kernel: acpiphp: Slot [55] registered Jul 11 00:10:41.771427 kernel: acpiphp: Slot [56] registered Jul 11 00:10:41.771433 kernel: acpiphp: Slot [57] registered Jul 11 00:10:41.771439 kernel: acpiphp: Slot [58] registered Jul 11 00:10:41.771445 kernel: acpiphp: Slot [59] registered Jul 11 00:10:41.771451 kernel: acpiphp: Slot [60] registered Jul 11 00:10:41.771457 kernel: acpiphp: Slot [61] registered Jul 11 00:10:41.771463 kernel: acpiphp: Slot [62] registered Jul 11 00:10:41.771469 kernel: acpiphp: Slot [63] registered Jul 11 00:10:41.771528 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 11 00:10:41.771580 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 11 00:10:41.771631 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 11 00:10:41.771682 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 11 00:10:41.771732 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 11 00:10:41.771785 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 11 00:10:41.771836 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 11 00:10:41.771887 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 11 00:10:41.771939 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 11 00:10:41.771997 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 11 00:10:41.772053 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 11 00:10:41.772106 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 11 00:10:41.772161 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 11 00:10:41.772214 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.772275 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 11 00:10:41.772329 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 11 00:10:41.772381 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 11 00:10:41.772433 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 11 00:10:41.772486 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 11 00:10:41.772538 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 11 00:10:41.772592 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 11 00:10:41.772643 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 11 00:10:41.772697 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 11 00:10:41.772750 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 11 00:10:41.772801 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 11 00:10:41.772852 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 11 00:10:41.772906 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 11 00:10:41.772965 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 11 00:10:41.773016 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 11 00:10:41.773069 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 11 00:10:41.773121 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 11 00:10:41.773173 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 11 00:10:41.773228 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 11 00:10:41.773330 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 11 00:10:41.773383 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 11 00:10:41.773436 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 11 00:10:41.773486 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 11 00:10:41.773536 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 11 00:10:41.773588 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 11 00:10:41.773639 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 11 00:10:41.773692 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 11 00:10:41.773750 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 11 00:10:41.773803 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 11 00:10:41.773855 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 11 00:10:41.773907 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 11 00:10:41.773961 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 11 00:10:41.774014 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 11 00:10:41.774069 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 11 00:10:41.774123 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 11 00:10:41.774175 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 11 00:10:41.774227 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 11 00:10:41.774292 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 11 00:10:41.774347 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 11 00:10:41.774401 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 11 00:10:41.774452 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 11 00:10:41.774508 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 11 00:10:41.774558 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 11 00:10:41.774613 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 11 00:10:41.774664 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 11 00:10:41.774715 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 11 00:10:41.774765 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 11 00:10:41.774819 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 11 00:10:41.774870 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 11 00:10:41.774924 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 11 00:10:41.774976 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 11 00:10:41.775028 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 11 00:10:41.775080 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 11 00:10:41.775133 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 11 00:10:41.775184 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 11 00:10:41.775236 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 11 00:10:41.775335 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 11 00:10:41.775391 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 11 00:10:41.775443 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 11 00:10:41.775496 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 11 00:10:41.775547 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 11 00:10:41.775598 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 11 00:10:41.775650 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 11 00:10:41.775701 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 11 00:10:41.775753 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 11 00:10:41.775807 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 11 00:10:41.775861 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 11 00:10:41.775912 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 11 00:10:41.775971 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 11 00:10:41.776023 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 11 00:10:41.776075 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 11 00:10:41.776126 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 11 00:10:41.776179 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 11 00:10:41.776231 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 11 00:10:41.776296 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 11 00:10:41.776349 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 11 00:10:41.776400 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 11 00:10:41.776454 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 11 00:10:41.776505 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 11 00:10:41.776556 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 11 00:10:41.776612 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 11 00:10:41.776663 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 11 00:10:41.776715 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 11 00:10:41.776768 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 11 00:10:41.776819 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 11 00:10:41.776871 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 11 00:10:41.776923 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 11 00:10:41.776974 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 11 00:10:41.777029 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 11 00:10:41.777082 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 11 00:10:41.777134 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 11 00:10:41.777185 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 11 00:10:41.777237 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 11 00:10:41.777304 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 11 00:10:41.777357 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 11 00:10:41.777409 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 11 00:10:41.777464 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 11 00:10:41.777517 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 11 00:10:41.777569 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 11 00:10:41.777620 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 11 00:10:41.777672 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 11 00:10:41.777724 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 11 00:10:41.777775 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 11 00:10:41.777830 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 11 00:10:41.777881 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 11 00:10:41.777932 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 11 00:10:41.777985 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 11 00:10:41.778036 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 11 00:10:41.778087 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 11 00:10:41.778140 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 11 00:10:41.778192 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 11 00:10:41.778251 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 11 00:10:41.778305 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 11 00:10:41.778356 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 11 00:10:41.778408 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 11 00:10:41.778417 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 11 00:10:41.778423 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Jul 11 00:10:41.778429 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 11 00:10:41.778435 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 11 00:10:41.778441 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 11 00:10:41.778449 kernel: iommu: Default domain type: Translated Jul 11 00:10:41.778455 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 11 00:10:41.778461 kernel: PCI: Using ACPI for IRQ routing Jul 11 00:10:41.778467 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 11 00:10:41.778473 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 11 00:10:41.778479 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 11 00:10:41.778531 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 11 00:10:41.778583 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 11 00:10:41.778634 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 11 00:10:41.778645 kernel: vgaarb: loaded Jul 11 00:10:41.778652 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 11 00:10:41.778658 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 11 00:10:41.778664 kernel: clocksource: Switched to clocksource tsc-early Jul 11 00:10:41.778670 kernel: VFS: Disk quotas dquot_6.6.0 Jul 11 00:10:41.778676 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 11 00:10:41.778682 kernel: pnp: PnP ACPI init Jul 11 00:10:41.778736 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 11 00:10:41.778788 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 11 00:10:41.778835 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 11 00:10:41.778886 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 11 00:10:41.778950 kernel: pnp 00:06: [dma 2] Jul 11 00:10:41.779003 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 11 00:10:41.779051 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 11 00:10:41.779098 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 11 00:10:41.779109 kernel: pnp: PnP ACPI: found 8 devices Jul 11 00:10:41.779115 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 11 00:10:41.779121 kernel: NET: Registered PF_INET protocol family Jul 11 00:10:41.779127 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 11 00:10:41.779133 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 11 00:10:41.779139 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 11 00:10:41.779145 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 11 00:10:41.779151 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 11 00:10:41.779158 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 11 00:10:41.779164 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 11 00:10:41.779170 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 11 00:10:41.779176 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 11 00:10:41.779182 kernel: NET: Registered PF_XDP protocol family Jul 11 00:10:41.779235 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 11 00:10:41.779403 kernel: pci 
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 11 00:10:41.779456 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 11 00:10:41.779511 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 11 00:10:41.779563 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 11 00:10:41.779614 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 11 00:10:41.779665 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 11 00:10:41.779717 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 11 00:10:41.779768 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 11 00:10:41.779822 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 11 00:10:41.779874 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 11 00:10:41.779926 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 11 00:10:41.779977 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 11 00:10:41.780028 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 11 00:10:41.780081 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 11 00:10:41.780132 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 11 00:10:41.780183 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 11 00:10:41.780235 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 11 00:10:41.780294 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 11 00:10:41.780345 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 11 00:10:41.780400 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 11 00:10:41.780451 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 11 00:10:41.780503 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 11 00:10:41.780554 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 11 00:10:41.780605 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 11 00:10:41.780657 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.780708 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.780762 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.780813 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.780865 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.780916 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.780968 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781018 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781070 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781121 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781175 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] Jul 11 00:10:41.781226 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781291 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781343 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781394 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781445 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781496 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781548 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781603 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781654 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781706 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781757 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781808 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781859 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781909 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781965 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782019 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782070 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782121 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782172 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782223 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782285 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782337 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782387 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782441 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782492 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782543 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782594 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782645 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782695 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782747 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782797 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782852 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782903 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782953 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783004 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783055 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783106 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783156 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783207 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783269 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] Jul 11 00:10:41.783323 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783379 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783429 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783480 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783531 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783582 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783633 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783684 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783735 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783786 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783845 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783896 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783946 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783998 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784049 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784100 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784151 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784201 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784260 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784313 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784367 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784419 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784470 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784521 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784573 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784624 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784675 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784725 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784776 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784831 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784883 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784947 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784999 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.785052 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 11 00:10:41.785105 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 11 00:10:41.785157 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 11 00:10:41.785208 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 11 00:10:41.785365 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 11 00:10:41.785426 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 11 00:10:41.785480 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 
11 00:10:41.785531 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 11 00:10:41.785583 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 11 00:10:41.785635 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 11 00:10:41.785687 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 11 00:10:41.785739 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 11 00:10:41.785790 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 11 00:10:41.785841 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 11 00:10:41.785897 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 11 00:10:41.785960 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 11 00:10:41.786013 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 11 00:10:41.786064 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 11 00:10:41.786115 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 11 00:10:41.786166 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 11 00:10:41.786216 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 11 00:10:41.786324 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 11 00:10:41.786377 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 11 00:10:41.786431 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 11 00:10:41.786485 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 11 00:10:41.786536 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 11 00:10:41.786587 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 11 00:10:41.786637 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 11 00:10:41.786689 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 11 00:10:41.786743 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 11 00:10:41.786794 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 11 00:10:41.786844 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 11 00:10:41.786896 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 11 00:10:41.786952 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 11 00:10:41.787005 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 11 00:10:41.787056 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 11 00:10:41.787107 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 11 00:10:41.787159 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 11 00:10:41.787214 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 11 00:10:41.787272 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 11 00:10:41.787324 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 11 00:10:41.787377 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 11 00:10:41.787430 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 11 00:10:41.787482 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 11 00:10:41.787533 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 11 00:10:41.787585 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 11 00:10:41.787636 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 11 00:10:41.787690 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] Jul 11 00:10:41.787741 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 11 00:10:41.787791 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 11 00:10:41.787842 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 11 00:10:41.787893 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 11 00:10:41.787944 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 11 00:10:41.787995 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 11 00:10:41.788045 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 11 00:10:41.788097 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 11 00:10:41.788147 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 11 00:10:41.788201 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 11 00:10:41.789293 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 11 00:10:41.789361 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 11 00:10:41.789418 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 11 00:10:41.789473 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 11 00:10:41.789526 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 11 00:10:41.789577 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 11 00:10:41.789629 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 11 00:10:41.789682 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 11 00:10:41.789738 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 11 00:10:41.789790 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 11 00:10:41.789840 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 11 00:10:41.789893 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 11 00:10:41.789945 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 11 00:10:41.789996 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 11 00:10:41.790047 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 11 00:10:41.790100 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 11 00:10:41.790152 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 11 00:10:41.790203 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 11 00:10:41.790275 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 11 00:10:41.790328 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 11 00:10:41.790379 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 11 00:10:41.790431 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 11 00:10:41.790482 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 11 00:10:41.790534 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 11 00:10:41.790587 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 11 00:10:41.790639 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 11 00:10:41.790690 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 11 00:10:41.790746 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 11 00:10:41.790797 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 11 00:10:41.790848 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Jul 11 00:10:41.790902 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 11 00:10:41.790954 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 11 00:10:41.791005 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 11 00:10:41.791056 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 11 00:10:41.791109 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 11 00:10:41.791161 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 11 00:10:41.791212 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 11 00:10:41.791792 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 11 00:10:41.791854 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 11 00:10:41.791908 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 11 00:10:41.792314 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 11 00:10:41.792375 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 11 00:10:41.792430 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 11 00:10:41.792483 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 11 00:10:41.792537 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 11 00:10:41.792589 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 11 00:10:41.792644 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 11 00:10:41.792699 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 11 00:10:41.792751 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 11 00:10:41.792809 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 11 00:10:41.792862 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 11 00:10:41.792919 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 11 00:10:41.792975 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 11 00:10:41.793028 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 11 00:10:41.793081 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 11 00:10:41.793153 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 11 00:10:41.793210 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 11 00:10:41.793622 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 11 00:10:41.793677 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 11 00:10:41.793724 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 11 00:10:41.793769 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 11 00:10:41.793820 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 11 00:10:41.793867 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 11 00:10:41.793916 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 11 00:10:41.793963 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 11 00:10:41.794008 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 11 00:10:41.794055 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 11 00:10:41.794102 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 11 00:10:41.794163 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 11 00:10:41.794215 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] Jul 11 00:10:41.794277 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 11 00:10:41.794329 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 11 00:10:41.794381 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 11 00:10:41.794429 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 11 00:10:41.794475 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 11 00:10:41.794526 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 11 00:10:41.794573 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 11 00:10:41.794623 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 11 00:10:41.794674 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 11 00:10:41.794722 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 11 00:10:41.794773 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 11 00:10:41.794822 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 11 00:10:41.794874 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 11 00:10:41.794925 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 11 00:10:41.794989 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 11 00:10:41.795038 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 11 00:10:41.795089 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 11 00:10:41.795137 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 11 00:10:41.795199 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 11 00:10:41.795256 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 11 00:10:41.795306 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 11 00:10:41.795357 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 11 00:10:41.795405 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 11 00:10:41.795452 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 11 00:10:41.795504 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 11 00:10:41.795553 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 11 00:10:41.795605 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 11 00:10:41.795657 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 11 00:10:41.795704 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 11 00:10:41.795756 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 11 00:10:41.795804 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 11 00:10:41.795859 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 11 00:10:41.795910 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 11 00:10:41.795976 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 11 00:10:41.796025 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 11 00:10:41.796076 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 11 00:10:41.796125 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 11 00:10:41.796176 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 11 00:10:41.796224 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 11 00:10:41.796770 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 11 00:10:41.796830 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 11 00:10:41.796881 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 11 00:10:41.796929 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 11 00:10:41.796981 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jul 11 00:10:41.797034 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 11 00:10:41.797086 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 11 00:10:41.797139 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 11 00:10:41.797187 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 11 00:10:41.797239 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 11 00:10:41.797303 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 11 00:10:41.797356 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 11 00:10:41.797404 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 11 00:10:41.797459 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 11 00:10:41.797507 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 11 00:10:41.797562 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 11 00:10:41.797610 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 11 00:10:41.797661 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 11 00:10:41.797710 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 11 00:10:41.797760 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 11 00:10:41.797811 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 11 00:10:41.797859 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 11 00:10:41.797907 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 11 00:10:41.797958 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 11 00:10:41.798006 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 11 00:10:41.798059 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 11 00:10:41.798108 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 11 00:10:41.798160 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 11 00:10:41.798209 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 11 00:10:41.798303 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 11 00:10:41.798352 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 11 00:10:41.798405 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 11 00:10:41.798453 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 11 00:10:41.798506 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 11 00:10:41.798769 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 11 00:10:41.798831 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 11 00:10:41.798841 kernel: PCI: CLS 32 bytes, default 64 Jul 11 00:10:41.798849 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 11 00:10:41.798858 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 11 
00:10:41.798864 kernel: clocksource: Switched to clocksource tsc Jul 11 00:10:41.798871 kernel: Initialise system trusted keyrings Jul 11 00:10:41.798877 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 11 00:10:41.798883 kernel: Key type asymmetric registered Jul 11 00:10:41.798889 kernel: Asymmetric key parser 'x509' registered Jul 11 00:10:41.798896 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 11 00:10:41.798902 kernel: io scheduler mq-deadline registered Jul 11 00:10:41.798908 kernel: io scheduler kyber registered Jul 11 00:10:41.798916 kernel: io scheduler bfq registered Jul 11 00:10:41.798972 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 11 00:10:41.799025 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799080 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 11 00:10:41.799132 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799185 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 11 00:10:41.799240 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799411 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 11 00:10:41.799484 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799538 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 11 00:10:41.799589 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799642 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 11 00:10:41.799692 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799749 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 11 00:10:41.799799 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799851 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 11 00:10:41.799903 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799987 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 11 00:10:41.800037 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.800091 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 11 00:10:41.800142 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.800194 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 11 00:10:41.800399 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.800485 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 11 00:10:41.800759 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.800850 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 11 00:10:41.800907 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.800967 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 11 00:10:41.801020 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.801075 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 11 00:10:41.801132 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.801187 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 11 00:10:41.801240 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.801515 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 11 00:10:41.801571 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.801626 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 11 00:10:41.801679 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802064 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 11 00:10:41.802125 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802181 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 11 00:10:41.802234 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802303 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 11 00:10:41.802359 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802417 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 11 00:10:41.802471 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802526 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 11 00:10:41.802578 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802633 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 11 00:10:41.802688 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802742 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 11 00:10:41.802795 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802848 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 11 00:10:41.802900 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802954 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 11 00:10:41.803010 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.803063 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 11 00:10:41.803116 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.803170 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 11 00:10:41.803223 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.803337 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 11 00:10:41.803393 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.803445 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 11 00:10:41.803498 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.803550 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 11 00:10:41.803602 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.803613 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 11 00:10:41.803620 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 11 00:10:41.803627 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 11 00:10:41.803633 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 11 00:10:41.803640 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 11 00:10:41.803646 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 11 00:10:41.803699 kernel: rtc_cmos 00:01: registered as rtc0 Jul 11 00:10:41.803748 kernel: rtc_cmos 00:01: setting system clock to 2025-07-11T00:10:41 UTC (1752192641) Jul 11 00:10:41.803798 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 11 00:10:41.803807 kernel: intel_pstate: CPU model not supported Jul 11 00:10:41.803813 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 11 00:10:41.803820 kernel: NET: Registered PF_INET6 protocol family Jul 11 00:10:41.803826 kernel: Segment Routing with IPv6 Jul 11 00:10:41.803832 kernel: In-situ OAM (IOAM) with IPv6 Jul 11 00:10:41.803838 kernel: NET: Registered PF_PACKET protocol family Jul 11 00:10:41.803845 kernel: Key type dns_resolver registered Jul 11 00:10:41.803851 kernel: IPI shorthand broadcast: enabled Jul 11 00:10:41.803859 kernel: sched_clock: Marking stable (915393455, 226337408)->(1204330014, -62599151) Jul 11 00:10:41.803866 kernel: registered taskstats version 1 Jul 11 00:10:41.803872 kernel: Loading compiled-in X.509 certificates Jul 11 00:10:41.803878 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 5956f0842928c96096c398e9db55919cd236a39f' Jul 11 00:10:41.803884 kernel: Key type .fscrypt registered Jul 11 00:10:41.803890 kernel: Key type fscrypt-provisioning registered Jul 11 00:10:41.803897 
kernel: ima: No TPM chip found, activating TPM-bypass! Jul 11 00:10:41.803903 kernel: ima: Allocated hash algorithm: sha1 Jul 11 00:10:41.803910 kernel: ima: No architecture policies found Jul 11 00:10:41.803916 kernel: clk: Disabling unused clocks Jul 11 00:10:41.803927 kernel: Freeing unused kernel image (initmem) memory: 42872K Jul 11 00:10:41.803933 kernel: Write protecting the kernel read-only data: 36864k Jul 11 00:10:41.803940 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Jul 11 00:10:41.803946 kernel: Run /init as init process Jul 11 00:10:41.803952 kernel: with arguments: Jul 11 00:10:41.803959 kernel: /init Jul 11 00:10:41.803965 kernel: with environment: Jul 11 00:10:41.803971 kernel: HOME=/ Jul 11 00:10:41.803978 kernel: TERM=linux Jul 11 00:10:41.803985 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 11 00:10:41.803992 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 11 00:10:41.804000 systemd[1]: Detected virtualization vmware. Jul 11 00:10:41.804007 systemd[1]: Detected architecture x86-64. Jul 11 00:10:41.804014 systemd[1]: Running in initrd. Jul 11 00:10:41.804020 systemd[1]: No hostname configured, using default hostname. Jul 11 00:10:41.804028 systemd[1]: Hostname set to . Jul 11 00:10:41.804035 systemd[1]: Initializing machine ID from random generator. Jul 11 00:10:41.804041 systemd[1]: Queued start job for default target initrd.target. Jul 11 00:10:41.804048 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:10:41.804054 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:10:41.804061 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 11 00:10:41.804068 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:10:41.804075 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 11 00:10:41.804082 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 11 00:10:41.804090 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 11 00:10:41.804097 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 11 00:10:41.804103 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:10:41.804110 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:10:41.804116 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:10:41.804123 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:10:41.804131 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:10:41.804138 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:10:41.804144 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:10:41.804151 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:10:41.804157 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jul 11 00:10:41.804164 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 11 00:10:41.804170 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:10:41.804177 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:10:41.804187 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:10:41.804201 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:10:41.804210 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 11 00:10:41.804220 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:10:41.804230 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 11 00:10:41.804241 systemd[1]: Starting systemd-fsck-usr.service... Jul 11 00:10:41.804263 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:10:41.804270 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:10:41.804276 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:10:41.804285 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 11 00:10:41.804305 systemd-journald[215]: Collecting audit messages is disabled. Jul 11 00:10:41.804323 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:10:41.804329 systemd[1]: Finished systemd-fsck-usr.service. Jul 11 00:10:41.804338 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 11 00:10:41.804345 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 00:10:41.804352 kernel: Bridge firewalling registered Jul 11 00:10:41.804358 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:10:41.804365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:10:41.804373 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:10:41.804380 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:10:41.804387 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:10:41.804394 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:10:41.804401 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:10:41.804408 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:10:41.804415 systemd-journald[215]: Journal started Jul 11 00:10:41.804431 systemd-journald[215]: Runtime Journal (/run/log/journal/8be19981d7ad44bb9fa3689d66c4d0c7) is 4.8M, max 38.6M, 33.8M free. Jul 11 00:10:41.744289 systemd-modules-load[216]: Inserted module 'overlay' Jul 11 00:10:41.769055 systemd-modules-load[216]: Inserted module 'br_netfilter' Jul 11 00:10:41.806462 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:10:41.806732 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:10:41.811346 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 11 00:10:41.813324 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jul 11 00:10:41.818209 dracut-cmdline[245]: dracut-dracut-053 Jul 11 00:10:41.819654 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:10:41.820049 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1 Jul 11 00:10:41.826369 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:10:41.843678 systemd-resolved[261]: Positive Trust Anchors: Jul 11 00:10:41.843689 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:10:41.843711 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:10:41.846301 systemd-resolved[261]: Defaulting to hostname 'linux'. Jul 11 00:10:41.847028 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:10:41.847515 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:10:41.871278 kernel: SCSI subsystem initialized Jul 11 00:10:41.878262 kernel: Loading iSCSI transport class v2.0-870. Jul 11 00:10:41.886265 kernel: iscsi: registered transport (tcp) Jul 11 00:10:41.900613 kernel: iscsi: registered transport (qla4xxx) Jul 11 00:10:41.900666 kernel: QLogic iSCSI HBA Driver Jul 11 00:10:41.921660 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 11 00:10:41.930425 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 11 00:10:41.947051 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 11 00:10:41.947115 kernel: device-mapper: uevent: version 1.0.3 Jul 11 00:10:41.947125 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 11 00:10:41.979272 kernel: raid6: avx2x4 gen() 48754 MB/s Jul 11 00:10:41.996267 kernel: raid6: avx2x2 gen() 51514 MB/s Jul 11 00:10:42.013497 kernel: raid6: avx2x1 gen() 43524 MB/s Jul 11 00:10:42.013554 kernel: raid6: using algorithm avx2x2 gen() 51514 MB/s Jul 11 00:10:42.031517 kernel: raid6: .... xor() 30565 MB/s, rmw enabled Jul 11 00:10:42.031584 kernel: raid6: using avx2x2 recovery algorithm Jul 11 00:10:42.045264 kernel: xor: automatically using best checksumming function avx Jul 11 00:10:42.146262 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 11 00:10:42.151293 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:10:42.156330 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:10:42.163609 systemd-udevd[433]: Using default interface naming scheme 'v255'. 
Jul 11 00:10:42.166113 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:10:42.171351 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 11 00:10:42.178018 dracut-pre-trigger[436]: rd.md=0: removing MD RAID activation Jul 11 00:10:42.193573 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:10:42.201354 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:10:42.274523 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:10:42.279906 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 11 00:10:42.291144 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 11 00:10:42.292297 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 00:10:42.292669 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:10:42.293016 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:10:42.298040 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 11 00:10:42.306426 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:10:42.347308 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 11 00:10:42.353079 kernel: vmw_pvscsi: using 64bit dma Jul 11 00:10:42.353115 kernel: vmw_pvscsi: max_id: 16 Jul 11 00:10:42.353124 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 11 00:10:42.357282 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jul 11 00:10:42.361590 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 11 00:10:42.361734 kernel: cryptd: max_cpu_qlen set to 1000 Jul 11 00:10:42.365969 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 11 00:10:42.366110 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 11 00:10:42.366120 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 11 00:10:42.366128 kernel: vmw_pvscsi: using MSI-X Jul 11 00:10:42.370302 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 11 00:10:42.372814 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 11 00:10:42.372946 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 11 00:10:42.382831 kernel: AVX2 version of gcm_enc/dec engaged. Jul 11 00:10:42.382871 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 11 00:10:42.381499 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:10:42.381574 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:10:42.383105 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:10:42.383265 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:10:42.383484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:10:42.383936 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:10:42.386257 kernel: libata version 3.00 loaded. Jul 11 00:10:42.388283 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 11 00:10:42.388392 kernel: AES CTR mode by8 optimization enabled Jul 11 00:10:42.388858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 11 00:10:42.389346 kernel: scsi host1: ata_piix Jul 11 00:10:42.390256 kernel: scsi host2: ata_piix Jul 11 00:10:42.392060 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 11 00:10:42.392077 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 11 00:10:42.402606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:10:42.411477 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:10:42.424174 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:10:42.561265 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 11 00:10:42.565259 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 11 00:10:42.579667 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 11 00:10:42.579840 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 11 00:10:42.579910 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 11 00:10:42.579992 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 11 00:10:42.580057 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 11 00:10:42.582027 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 11 00:10:42.582133 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 11 00:10:42.584745 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 11 00:10:42.584762 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 11 00:10:42.594330 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 11 00:10:42.623311 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (491) Jul 11 00:10:42.623560 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jul 11 00:10:42.629260 kernel: BTRFS: device fsid 54fb9359-b495-4b0c-b313-b0e2955e4a38 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (492) Jul 11 00:10:42.629770 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jul 11 00:10:42.632711 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 11 00:10:42.634965 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jul 11 00:10:42.635107 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jul 11 00:10:42.639343 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 11 00:10:42.665263 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 11 00:10:42.671272 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 11 00:10:43.673299 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 11 00:10:43.673909 disk-uuid[588]: The operation has completed successfully. Jul 11 00:10:43.710927 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 11 00:10:43.710987 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 11 00:10:43.722361 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 11 00:10:43.724405 sh[605]: Success Jul 11 00:10:43.733261 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 11 00:10:43.774984 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 11 00:10:43.780035 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jul 11 00:10:43.780368 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 11 00:10:43.794636 kernel: BTRFS info (device dm-0): first mount of filesystem 54fb9359-b495-4b0c-b313-b0e2955e4a38 Jul 11 00:10:43.794664 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:10:43.794673 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 11 00:10:43.796501 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 11 00:10:43.796514 kernel: BTRFS info (device dm-0): using free space tree Jul 11 00:10:43.804259 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 11 00:10:43.804872 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 11 00:10:43.814312 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jul 11 00:10:43.815395 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 11 00:10:43.832370 kernel: BTRFS info (device sda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:10:43.832402 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:10:43.834261 kernel: BTRFS info (device sda6): using free space tree Jul 11 00:10:43.838383 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 11 00:10:43.843500 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 11 00:10:43.845296 kernel: BTRFS info (device sda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:10:43.849630 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 11 00:10:43.856357 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 11 00:10:43.880051 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 11 00:10:43.885684 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 11 00:10:43.927204 ignition[665]: Ignition 2.19.0 Jul 11 00:10:43.927211 ignition[665]: Stage: fetch-offline Jul 11 00:10:43.927230 ignition[665]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:10:43.927239 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 11 00:10:43.929549 ignition[665]: parsed url from cmdline: "" Jul 11 00:10:43.929553 ignition[665]: no config URL provided Jul 11 00:10:43.929557 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Jul 11 00:10:43.929564 ignition[665]: no config at "/usr/lib/ignition/user.ign" Jul 11 00:10:43.929914 ignition[665]: config successfully fetched Jul 11 00:10:43.930117 ignition[665]: parsing config with SHA512: 7ca530a1a01ad213fb2da54a7ae179db590ebd24d662bc5374b0178f5a6bcec3ef5c0b93e61524971fb3c25b96f4d3d1bd28a7fb12d504d1df0d8d677ac8b301 Jul 11 00:10:43.932589 unknown[665]: fetched base config from "system" Jul 11 00:10:43.932712 unknown[665]: fetched user config from "vmware" Jul 11 00:10:43.933059 ignition[665]: fetch-offline: fetch-offline passed Jul 11 00:10:43.933221 ignition[665]: Ignition finished successfully Jul 11 00:10:43.933901 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:10:43.951970 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:10:43.955335 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 11 00:10:43.967229 systemd-networkd[798]: lo: Link UP Jul 11 00:10:43.967235 systemd-networkd[798]: lo: Gained carrier Jul 11 00:10:43.968040 systemd-networkd[798]: Enumeration completed Jul 11 00:10:43.968201 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 00:10:43.968311 systemd-networkd[798]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 11 00:10:43.968433 systemd[1]: Reached target network.target - Network. Jul 11 00:10:43.968529 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 11 00:10:43.972206 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 11 00:10:43.972332 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 11 00:10:43.972085 systemd-networkd[798]: ens192: Link UP Jul 11 00:10:43.972087 systemd-networkd[798]: ens192: Gained carrier Jul 11 00:10:43.977066 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 11 00:10:43.984445 ignition[800]: Ignition 2.19.0 Jul 11 00:10:43.984451 ignition[800]: Stage: kargs Jul 11 00:10:43.984551 ignition[800]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:10:43.984557 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 11 00:10:43.985069 ignition[800]: kargs: kargs passed Jul 11 00:10:43.985092 ignition[800]: Ignition finished successfully Jul 11 00:10:43.986120 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 11 00:10:43.990414 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 11 00:10:43.997697 ignition[807]: Ignition 2.19.0 Jul 11 00:10:43.997703 ignition[807]: Stage: disks Jul 11 00:10:43.997802 ignition[807]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:10:43.997808 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 11 00:10:43.998379 ignition[807]: disks: disks passed Jul 11 00:10:43.998402 ignition[807]: Ignition finished successfully Jul 11 00:10:43.999084 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 11 00:10:43.999520 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 11 00:10:43.999802 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 11 00:10:44.000056 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:10:44.000290 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:10:44.000511 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:10:44.004446 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 11 00:10:44.015440 systemd-fsck[816]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 11 00:10:44.016411 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 11 00:10:44.020304 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 11 00:10:44.074991 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 11 00:10:44.075258 kernel: EXT4-fs (sda9): mounted filesystem 66ba5133-8c5a-461b-b2c1-a823c72af79b r/w with ordered data mode. Quota mode: none. Jul 11 00:10:44.075335 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 11 00:10:44.084349 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 00:10:44.085627 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jul 11 00:10:44.085888 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 11 00:10:44.085951 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 11 00:10:44.085966 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 00:10:44.088962 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 11 00:10:44.089579 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 11 00:10:44.092261 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (824) Jul 11 00:10:44.094597 kernel: BTRFS info (device sda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:10:44.094615 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:10:44.094623 kernel: BTRFS info (device sda6): using free space tree Jul 11 00:10:44.099338 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 11 00:10:44.099940 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 11 00:10:44.119017 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory Jul 11 00:10:44.121890 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory Jul 11 00:10:44.124021 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory Jul 11 00:10:44.126110 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory Jul 11 00:10:44.176878 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 11 00:10:44.184374 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 11 00:10:44.186731 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 11 00:10:44.190257 kernel: BTRFS info (device sda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:10:44.200990 ignition[937]: INFO : Ignition 2.19.0 Jul 11 00:10:44.201417 ignition[937]: INFO : Stage: mount Jul 11 00:10:44.201632 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:10:44.201781 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 11 00:10:44.202646 ignition[937]: INFO : mount: mount passed Jul 11 00:10:44.202646 ignition[937]: INFO : Ignition finished successfully Jul 11 00:10:44.203526 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 11 00:10:44.207358 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 11 00:10:44.207548 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 11 00:10:44.310967 systemd-resolved[261]: Detected conflict on linux IN A 139.178.70.105 Jul 11 00:10:44.310977 systemd-resolved[261]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Jul 11 00:10:44.793189 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 11 00:10:44.799385 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 11 00:10:44.840275 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949) Jul 11 00:10:44.842274 kernel: BTRFS info (device sda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:10:44.842309 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:10:44.844326 kernel: BTRFS info (device sda6): using free space tree Jul 11 00:10:44.848262 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 11 00:10:44.849302 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 11 00:10:44.869091 ignition[966]: INFO : Ignition 2.19.0 Jul 11 00:10:44.869091 ignition[966]: INFO : Stage: files Jul 11 00:10:44.869091 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:10:44.869091 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 11 00:10:44.869091 ignition[966]: DEBUG : files: compiled without relabeling support, skipping Jul 11 00:10:44.869876 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 11 00:10:44.870023 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 11 00:10:44.872348 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 11 00:10:44.872561 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 11 00:10:44.872891 unknown[966]: wrote ssh authorized keys file for user: core Jul 11 00:10:44.873104 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 11 00:10:44.874743 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 11 00:10:44.874920 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 11 00:10:44.921983 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 11 00:10:45.056794 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 11 00:10:45.056794 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 11 00:10:45.057218 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 11 00:10:45.057218 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:10:45.057218 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:10:45.057218 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:10:45.057218 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:10:45.057218 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:10:45.057218 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:10:45.058325 ignition[966]: INFO : files: createFilesystemsFiles: 
createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:10:45.058325 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:10:45.058325 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:10:45.058325 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:10:45.058325 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:10:45.058325 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 11 00:10:45.591195 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 11 00:10:45.842339 systemd-networkd[798]: ens192: Gained IPv6LL Jul 11 00:10:46.718764 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:10:46.719203 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 11 00:10:46.719203 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 11 00:10:46.720879 ignition[966]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 00:10:46.758577 ignition[966]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:10:46.761268 ignition[966]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:10:46.761268 ignition[966]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 00:10:46.761268 ignition[966]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 11 00:10:46.761268 
ignition[966]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 11 00:10:46.761268 ignition[966]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:10:46.761268 ignition[966]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:10:46.761268 ignition[966]: INFO : files: files passed Jul 11 00:10:46.761268 ignition[966]: INFO : Ignition finished successfully Jul 11 00:10:46.762384 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 11 00:10:46.767372 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 11 00:10:46.769024 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 11 00:10:46.774824 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 11 00:10:46.774882 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 11 00:10:46.782336 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:10:46.782336 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:10:46.783112 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:10:46.784359 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:10:46.785013 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 11 00:10:46.788394 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 11 00:10:46.812058 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 11 00:10:46.812147 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 11 00:10:46.812479 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 11 00:10:46.812615 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 11 00:10:46.812838 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 11 00:10:46.813534 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 11 00:10:46.824403 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:10:46.829458 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 11 00:10:46.837570 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:10:46.837818 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:10:46.838154 systemd[1]: Stopped target timers.target - Timer Units. Jul 11 00:10:46.838436 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 11 00:10:46.838565 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:10:46.838986 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 11 00:10:46.839197 systemd[1]: Stopped target basic.target - Basic System. Jul 11 00:10:46.839398 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 11 00:10:46.839591 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 00:10:46.840039 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jul 11 00:10:46.840267 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 11 00:10:46.840502 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 00:10:46.840729 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 11 00:10:46.840944 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 11 00:10:46.841145 systemd[1]: Stopped target swap.target - Swaps. Jul 11 00:10:46.841321 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 11 00:10:46.841434 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:10:46.841796 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:10:46.842003 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:10:46.842179 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 11 00:10:46.842258 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:10:46.842526 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:10:46.842621 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 11 00:10:46.842886 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:10:46.842990 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:10:46.843361 systemd[1]: Stopped target paths.target - Path Units. Jul 11 00:10:46.843524 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:10:46.847311 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:10:46.847703 systemd[1]: Stopped target slices.target - Slice Units. Jul 11 00:10:46.847963 systemd[1]: Stopped target sockets.target - Socket Units. Jul 11 00:10:46.848151 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:10:46.848241 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:10:46.848505 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 00:10:46.848580 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:10:46.848852 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 00:10:46.848954 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:10:46.849221 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:10:46.849328 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 11 00:10:46.853425 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 11 00:10:46.855495 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 11 00:10:46.855689 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:10:46.855835 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:10:46.856170 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:10:46.856355 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:10:46.861536 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:10:46.861633 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 11 00:10:46.865275 ignition[1020]: INFO : Ignition 2.19.0 Jul 11 00:10:46.865275 ignition[1020]: INFO : Stage: umount Jul 11 00:10:46.865275 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:10:46.865275 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 11 00:10:46.866425 ignition[1020]: INFO : umount: umount passed Jul 11 00:10:46.866610 ignition[1020]: INFO : Ignition finished successfully Jul 11 00:10:46.867380 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:10:46.867609 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 11 00:10:46.868194 systemd[1]: Stopped target network.target - Network. Jul 11 00:10:46.868568 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 00:10:46.868750 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 11 00:10:46.868881 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:10:46.868907 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 11 00:10:46.869021 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:10:46.869044 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 11 00:10:46.869153 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 11 00:10:46.869174 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 11 00:10:46.870214 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 11 00:10:46.870387 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 11 00:10:46.874134 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:10:46.874219 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 11 00:10:46.874812 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:10:46.874841 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:10:46.878616 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 11 00:10:46.878723 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:10:46.878759 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:10:46.878911 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jul 11 00:10:46.878935 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 11 00:10:46.879110 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:10:46.879889 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:10:46.883731 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 00:10:46.883973 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 11 00:10:46.885748 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:10:46.886085 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:10:46.886414 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:10:46.886441 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 11 00:10:46.886860 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 11 00:10:46.886885 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:10:46.887546 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 11 00:10:46.887625 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:10:46.888539 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:10:46.888577 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 11 00:10:46.888711 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 00:10:46.888730 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:10:46.888841 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:10:46.888867 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:10:46.889035 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:10:46.889056 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 11 00:10:46.889197 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:10:46.889221 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:10:46.893428 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 11 00:10:46.893780 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 11 00:10:46.893820 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:10:46.893962 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 11 00:10:46.893990 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:10:46.894110 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 00:10:46.894133 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:10:46.894257 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:10:46.894280 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:10:46.894581 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 00:10:46.894664 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 11 00:10:46.897423 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:10:46.897650 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 11 00:10:46.944199 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 11 00:10:46.944279 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 11 00:10:46.944792 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 11 00:10:46.944913 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:10:46.944948 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 11 00:10:46.947430 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 11 00:10:46.952785 systemd[1]: Switching root. 
Jul 11 00:10:46.990455 systemd-journald[215]: Journal stopped
6816.00 BogoMIPS (lpj=3408000) Jul 11 00:10:41.756918 kernel: Disabled fast string operations Jul 11 00:10:41.756924 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 11 00:10:41.756930 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 11 00:10:41.756935 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 11 00:10:41.756943 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 11 00:10:41.756949 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 11 00:10:41.756954 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 11 00:10:41.756960 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 11 00:10:41.756966 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 11 00:10:41.756972 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 11 00:10:41.756978 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 11 00:10:41.756984 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 11 00:10:41.756990 kernel: SRBDS: Unknown: Dependent on hypervisor status Jul 11 00:10:41.756997 kernel: GDS: Unknown: Dependent on hypervisor status Jul 11 00:10:41.757002 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 11 00:10:41.757008 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 11 00:10:41.757014 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 11 00:10:41.757020 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 11 00:10:41.757026 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 11 00:10:41.757032 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 11 00:10:41.757037 kernel: Freeing SMP alternatives memory: 32K Jul 11 00:10:41.757043 kernel: pid_max: default: 131072 minimum: 1024 Jul 11 00:10:41.757050 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 11 00:10:41.757056 kernel: landlock: Up and running. Jul 11 00:10:41.757062 kernel: SELinux: Initializing. Jul 11 00:10:41.757068 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 11 00:10:41.757073 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 11 00:10:41.757079 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 11 00:10:41.757085 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 11 00:10:41.757091 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 11 00:10:41.757098 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 11 00:10:41.757104 kernel: Performance Events: Skylake events, core PMU driver. 
Jul 11 00:10:41.757110 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jul 11 00:10:41.757116 kernel: core: CPUID marked event: 'instructions' unavailable Jul 11 00:10:41.757122 kernel: core: CPUID marked event: 'bus cycles' unavailable Jul 11 00:10:41.757127 kernel: core: CPUID marked event: 'cache references' unavailable Jul 11 00:10:41.757133 kernel: core: CPUID marked event: 'cache misses' unavailable Jul 11 00:10:41.757138 kernel: core: CPUID marked event: 'branch instructions' unavailable Jul 11 00:10:41.757145 kernel: core: CPUID marked event: 'branch misses' unavailable Jul 11 00:10:41.757151 kernel: ... version: 1 Jul 11 00:10:41.757158 kernel: ... bit width: 48 Jul 11 00:10:41.757163 kernel: ... generic registers: 4 Jul 11 00:10:41.757169 kernel: ... value mask: 0000ffffffffffff Jul 11 00:10:41.757175 kernel: ... max period: 000000007fffffff Jul 11 00:10:41.757181 kernel: ... fixed-purpose events: 0 Jul 11 00:10:41.757187 kernel: ... event mask: 000000000000000f Jul 11 00:10:41.757192 kernel: signal: max sigframe size: 1776 Jul 11 00:10:41.757198 kernel: rcu: Hierarchical SRCU implementation. Jul 11 00:10:41.757205 kernel: rcu: Max phase no-delay instances is 400. Jul 11 00:10:41.757211 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 11 00:10:41.757217 kernel: smp: Bringing up secondary CPUs ... Jul 11 00:10:41.757223 kernel: smpboot: x86: Booting SMP configuration: Jul 11 00:10:41.757229 kernel: .... node #0, CPUs: #1 Jul 11 00:10:41.757235 kernel: Disabled fast string operations Jul 11 00:10:41.757240 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jul 11 00:10:41.758262 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 11 00:10:41.758270 kernel: smp: Brought up 1 node, 2 CPUs Jul 11 00:10:41.758277 kernel: smpboot: Max logical packages: 128 Jul 11 00:10:41.758285 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 11 00:10:41.758293 kernel: devtmpfs: initialized Jul 11 00:10:41.758299 kernel: x86/mm: Memory block size: 128MB Jul 11 00:10:41.758305 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 11 00:10:41.758311 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 11 00:10:41.758317 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 11 00:10:41.758323 kernel: pinctrl core: initialized pinctrl subsystem Jul 11 00:10:41.758329 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 11 00:10:41.758335 kernel: audit: initializing netlink subsys (disabled) Jul 11 00:10:41.758342 kernel: audit: type=2000 audit(1752192639.087:1): state=initialized audit_enabled=0 res=1 Jul 11 00:10:41.758348 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 11 00:10:41.758354 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 11 00:10:41.758360 kernel: cpuidle: using governor menu Jul 11 00:10:41.758366 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 11 00:10:41.758372 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 11 00:10:41.758378 kernel: dca service started, version 1.12.1 Jul 11 00:10:41.758384 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jul 11 00:10:41.758390 kernel: PCI: Using configuration type 1 for base access Jul 11 00:10:41.758397 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 11 00:10:41.758403 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 11 00:10:41.758409 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 11 00:10:41.758415 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 11 00:10:41.758420 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 11 00:10:41.758427 kernel: ACPI: Added _OSI(Module Device) Jul 11 00:10:41.758432 kernel: ACPI: Added _OSI(Processor Device) Jul 11 00:10:41.758439 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 11 00:10:41.758444 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 11 00:10:41.758451 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 11 00:10:41.758457 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 11 00:10:41.758463 kernel: ACPI: Interpreter enabled Jul 11 00:10:41.758469 kernel: ACPI: PM: (supports S0 S1 S5) Jul 11 00:10:41.758475 kernel: ACPI: Using IOAPIC for interrupt routing Jul 11 00:10:41.758481 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 11 00:10:41.758487 kernel: PCI: Using E820 reservations for host bridge windows Jul 11 00:10:41.758493 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 11 00:10:41.758499 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 11 00:10:41.758594 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 11 00:10:41.758653 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 11 00:10:41.758705 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 11 00:10:41.758713 kernel: PCI host bridge to bus 0000:00 Jul 11 00:10:41.758766 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 11 00:10:41.758813 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jul 11 00:10:41.758862 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 11 00:10:41.758908 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 11 00:10:41.758964 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 11 00:10:41.759011 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 11 00:10:41.759073 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jul 11 00:10:41.759133 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jul 11 00:10:41.759194 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jul 11 00:10:41.760019 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jul 11 00:10:41.760086 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jul 11 00:10:41.760143 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 11 00:10:41.760197 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 11 00:10:41.760289 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 11 00:10:41.760349 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 11 00:10:41.760410 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jul 11 00:10:41.760463 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 11 00:10:41.760515 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 11 00:10:41.760572 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jul 11 00:10:41.760625 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jul 11 
00:10:41.760676 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jul 11 00:10:41.760734 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jul 11 00:10:41.760787 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jul 11 00:10:41.760838 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jul 11 00:10:41.760889 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jul 11 00:10:41.760940 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jul 11 00:10:41.760991 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 11 00:10:41.761047 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jul 11 00:10:41.761111 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.761163 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.761220 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.763961 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764030 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764088 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764149 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764203 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764328 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764384 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764440 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764493 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764550 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764605 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764661 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764714 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764770 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764824 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764883 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.764941 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.764998 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765050 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765108 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765161 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765219 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765283 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765339 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765392 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765448 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765501 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765561 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765614 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765670 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765722 kernel: pci 0000:00:17.0: 
PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765779 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765832 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.765889 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.765963 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.766023 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.766077 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.766133 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.766187 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.766243 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.768606 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.768666 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.768720 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.768777 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.768830 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.768886 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.768943 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.770329 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.770402 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.770463 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.770516 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.770573 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.770629 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.770685 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.770737 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.770793 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.770844 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.770899 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.770954 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.771008 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jul 11 00:10:41.771059 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.771117 kernel: pci_bus 0000:01: extended config space not accessible Jul 11 00:10:41.771171 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 11 00:10:41.771223 kernel: pci_bus 0000:02: extended config space not accessible Jul 11 00:10:41.771233 kernel: acpiphp: Slot [32] registered Jul 11 00:10:41.771241 kernel: acpiphp: Slot [33] registered Jul 11 00:10:41.771294 kernel: acpiphp: Slot [34] registered Jul 11 00:10:41.771302 kernel: acpiphp: Slot [35] registered Jul 11 00:10:41.771307 kernel: acpiphp: Slot [36] registered Jul 11 00:10:41.771313 kernel: acpiphp: Slot [37] registered Jul 11 00:10:41.771319 kernel: acpiphp: Slot [38] registered Jul 11 00:10:41.771325 kernel: acpiphp: Slot [39] registered Jul 11 00:10:41.771331 kernel: acpiphp: Slot [40] registered Jul 11 00:10:41.771337 kernel: acpiphp: Slot [41] registered Jul 11 00:10:41.771345 kernel: acpiphp: Slot [42] registered Jul 11 00:10:41.771351 kernel: acpiphp: Slot [43] registered Jul 11 
00:10:41.771356 kernel: acpiphp: Slot [44] registered Jul 11 00:10:41.771362 kernel: acpiphp: Slot [45] registered Jul 11 00:10:41.771368 kernel: acpiphp: Slot [46] registered Jul 11 00:10:41.771374 kernel: acpiphp: Slot [47] registered Jul 11 00:10:41.771379 kernel: acpiphp: Slot [48] registered Jul 11 00:10:41.771385 kernel: acpiphp: Slot [49] registered Jul 11 00:10:41.771391 kernel: acpiphp: Slot [50] registered Jul 11 00:10:41.771397 kernel: acpiphp: Slot [51] registered Jul 11 00:10:41.771404 kernel: acpiphp: Slot [52] registered Jul 11 00:10:41.771410 kernel: acpiphp: Slot [53] registered Jul 11 00:10:41.771416 kernel: acpiphp: Slot [54] registered Jul 11 00:10:41.771421 kernel: acpiphp: Slot [55] registered Jul 11 00:10:41.771427 kernel: acpiphp: Slot [56] registered Jul 11 00:10:41.771433 kernel: acpiphp: Slot [57] registered Jul 11 00:10:41.771439 kernel: acpiphp: Slot [58] registered Jul 11 00:10:41.771445 kernel: acpiphp: Slot [59] registered Jul 11 00:10:41.771451 kernel: acpiphp: Slot [60] registered Jul 11 00:10:41.771457 kernel: acpiphp: Slot [61] registered Jul 11 00:10:41.771463 kernel: acpiphp: Slot [62] registered Jul 11 00:10:41.771469 kernel: acpiphp: Slot [63] registered Jul 11 00:10:41.771528 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 11 00:10:41.771580 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 11 00:10:41.771631 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 11 00:10:41.771682 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 11 00:10:41.771732 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 11 00:10:41.771785 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 11 00:10:41.771836 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 11 00:10:41.771887 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 11 00:10:41.771939 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 11 00:10:41.771997 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 11 00:10:41.772053 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 11 00:10:41.772106 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 11 00:10:41.772161 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 11 00:10:41.772214 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 11 00:10:41.772275 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 11 00:10:41.772329 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 11 00:10:41.772381 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 11 00:10:41.772433 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 11 00:10:41.772486 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 11 00:10:41.772538 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 11 00:10:41.772592 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 11 00:10:41.772643 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 11 00:10:41.772697 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 11 00:10:41.772750 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 11 00:10:41.772801 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 11 00:10:41.772852 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 11 00:10:41.772906 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 11 00:10:41.772965 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 11 00:10:41.773016 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 11 00:10:41.773069 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 11 00:10:41.773121 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 11 00:10:41.773173 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 11 00:10:41.773228 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 11 00:10:41.773330 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 11 00:10:41.773383 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 11 00:10:41.773436 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 11 00:10:41.773486 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 11 00:10:41.773536 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 11 00:10:41.773588 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 11 00:10:41.773639 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 11 00:10:41.773692 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 11 00:10:41.773750 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 11 00:10:41.773803 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 11 00:10:41.773855 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 11 00:10:41.773907 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 11 00:10:41.773961 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 11 00:10:41.774014 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 11 00:10:41.774069 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 11 00:10:41.774123 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 11 00:10:41.774175 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 11 00:10:41.774227 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 11 00:10:41.774292 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 11 00:10:41.774347 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 11 00:10:41.774401 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 11 00:10:41.774452 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 11 00:10:41.774508 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 11 00:10:41.774558 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 11 00:10:41.774613 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 11 00:10:41.774664 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 11 00:10:41.774715 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 11 00:10:41.774765 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 11 00:10:41.774819 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 11 00:10:41.774870 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 11 00:10:41.774924 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 11 00:10:41.774976 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 11 00:10:41.775028 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 11 00:10:41.775080 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 11 00:10:41.775133 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 11 00:10:41.775184 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 11 00:10:41.775236 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 11 00:10:41.775335 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 11 00:10:41.775391 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 11 00:10:41.775443 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 11 00:10:41.775496 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 11 00:10:41.775547 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 11 00:10:41.775598 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 11 00:10:41.775650 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 11 00:10:41.775701 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 11 00:10:41.775753 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 11 00:10:41.775807 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 11 00:10:41.775861 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 11 00:10:41.775912 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 11 00:10:41.775971 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 11 00:10:41.776023 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 11 00:10:41.776075 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 11 00:10:41.776126 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 11 00:10:41.776179 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 11 00:10:41.776231 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 11 00:10:41.776296 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 11 00:10:41.776349 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 11 00:10:41.776400 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 11 00:10:41.776454 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 11 00:10:41.776505 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 11 00:10:41.776556 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 11 00:10:41.776612 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 11 00:10:41.776663 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 11 00:10:41.776715 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 11 00:10:41.776768 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 11 00:10:41.776819 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 11 00:10:41.776871 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 11 00:10:41.776923 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 11 00:10:41.776974 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 11 00:10:41.777029 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 11 00:10:41.777082 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 11 00:10:41.777134 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 11 00:10:41.777185 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 11 00:10:41.777237 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 11 00:10:41.777304 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 11 00:10:41.777357 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 11 00:10:41.777409 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 11 00:10:41.777464 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 11 00:10:41.777517 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 11 00:10:41.777569 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 11 00:10:41.777620 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 11 00:10:41.777672 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 11 00:10:41.777724 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 11 00:10:41.777775 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 11 00:10:41.777830 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 11 00:10:41.777881 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 11 00:10:41.777932 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 11 00:10:41.777985 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 11 00:10:41.778036 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 11 00:10:41.778087 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 11 00:10:41.778140 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 11 00:10:41.778192 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 11 00:10:41.778251 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 11 00:10:41.778305 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 11 00:10:41.778356 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 11 00:10:41.778408 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 11 00:10:41.778417 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 11 00:10:41.778423 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Jul 11 00:10:41.778429 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 11 00:10:41.778435 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 11 00:10:41.778441 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 11 00:10:41.778449 kernel: iommu: Default domain type: Translated Jul 11 00:10:41.778455 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 11 00:10:41.778461 kernel: PCI: Using ACPI for IRQ routing Jul 11 00:10:41.778467 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 11 00:10:41.778473 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 11 00:10:41.778479 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 11 00:10:41.778531 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 11 00:10:41.778583 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 11 00:10:41.778634 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 11 00:10:41.778645 kernel: vgaarb: loaded Jul 11 00:10:41.778652 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 11 00:10:41.778658 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 11 00:10:41.778664 kernel: clocksource: Switched to clocksource tsc-early Jul 11 00:10:41.778670 kernel: VFS: Disk quotas dquot_6.6.0 Jul 11 00:10:41.778676 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 11 00:10:41.778682 kernel: pnp: PnP ACPI init Jul 11 00:10:41.778736 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 11 00:10:41.778788 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 11 00:10:41.778835 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 11 00:10:41.778886 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 11 00:10:41.778950 kernel: pnp 00:06: [dma 2] Jul 11 00:10:41.779003 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 11 00:10:41.779051 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 11 00:10:41.779098 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 11 00:10:41.779109 kernel: pnp: PnP ACPI: found 8 devices Jul 11 00:10:41.779115 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 11 00:10:41.779121 kernel: NET: Registered PF_INET protocol family Jul 11 00:10:41.779127 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 11 00:10:41.779133 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 11 00:10:41.779139 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 11 00:10:41.779145 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 11 00:10:41.779151 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 11 00:10:41.779158 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 11 00:10:41.779164 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 11 00:10:41.779170 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 11 00:10:41.779176 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 11 00:10:41.779182 kernel: NET: Registered PF_XDP protocol family Jul 11 00:10:41.779235 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 11 00:10:41.779403 kernel: pci 
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 11 00:10:41.779456 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 11 00:10:41.779511 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 11 00:10:41.779563 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 11 00:10:41.779614 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 11 00:10:41.779665 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 11 00:10:41.779717 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 11 00:10:41.779768 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 11 00:10:41.779822 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 11 00:10:41.779874 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 11 00:10:41.779926 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 11 00:10:41.779977 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 11 00:10:41.780028 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 11 00:10:41.780081 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 11 00:10:41.780132 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 11 00:10:41.780183 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 11 00:10:41.780235 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 11 00:10:41.780294 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 11 00:10:41.780345 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 11 00:10:41.780400 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 11 00:10:41.780451 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 11 00:10:41.780503 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 11 00:10:41.780554 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 11 00:10:41.780605 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 11 00:10:41.780657 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.780708 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.780762 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.780813 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.780865 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.780916 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.780968 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781018 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781070 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781121 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781175 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] Jul 11 00:10:41.781226 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781291 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781343 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781394 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781445 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781496 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781548 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781603 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781654 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781706 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781757 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781808 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781859 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.781909 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.781965 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782019 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782070 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782121 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782172 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782223 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782285 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782337 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782387 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782441 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782492 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782543 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782594 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782645 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782695 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782747 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782797 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782852 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.782903 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.782953 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783004 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783055 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783106 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783156 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783207 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783269 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] Jul 11 00:10:41.783323 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783379 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783429 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783480 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783531 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783582 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783633 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783684 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783735 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783786 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783845 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783896 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.783946 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.783998 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784049 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784100 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784151 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784201 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784260 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784313 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784367 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784419 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784470 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784521 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784573 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784624 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784675 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784725 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784776 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784831 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784883 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.784947 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 11 00:10:41.784999 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 11 00:10:41.785052 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 11 00:10:41.785105 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 11 00:10:41.785157 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 11 00:10:41.785208 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 11 00:10:41.785365 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 11 00:10:41.785426 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 11 00:10:41.785480 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 
11 00:10:41.785531 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 11 00:10:41.785583 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 11 00:10:41.785635 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 11 00:10:41.785687 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 11 00:10:41.785739 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 11 00:10:41.785790 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 11 00:10:41.785841 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 11 00:10:41.785897 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 11 00:10:41.785960 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 11 00:10:41.786013 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 11 00:10:41.786064 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 11 00:10:41.786115 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 11 00:10:41.786166 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 11 00:10:41.786216 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 11 00:10:41.786324 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 11 00:10:41.786377 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 11 00:10:41.786431 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 11 00:10:41.786485 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 11 00:10:41.786536 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 11 00:10:41.786587 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 11 00:10:41.786637 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 11 00:10:41.786689 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 11 00:10:41.786743 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 11 00:10:41.786794 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 11 00:10:41.786844 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 11 00:10:41.786896 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 11 00:10:41.786952 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 11 00:10:41.787005 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 11 00:10:41.787056 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 11 00:10:41.787107 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 11 00:10:41.787159 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 11 00:10:41.787214 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 11 00:10:41.787272 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 11 00:10:41.787324 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 11 00:10:41.787377 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 11 00:10:41.787430 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 11 00:10:41.787482 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 11 00:10:41.787533 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 11 00:10:41.787585 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 11 00:10:41.787636 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 11 00:10:41.787690 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] Jul 11 00:10:41.787741 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 11 00:10:41.787791 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 11 00:10:41.787842 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 11 00:10:41.787893 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 11 00:10:41.787944 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 11 00:10:41.787995 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 11 00:10:41.788045 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 11 00:10:41.788097 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 11 00:10:41.788147 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 11 00:10:41.788201 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 11 00:10:41.789293 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 11 00:10:41.789361 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 11 00:10:41.789418 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 11 00:10:41.789473 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 11 00:10:41.789526 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 11 00:10:41.789577 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 11 00:10:41.789629 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 11 00:10:41.789682 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 11 00:10:41.789738 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 11 00:10:41.789790 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 11 00:10:41.789840 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 11 00:10:41.789893 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 11 00:10:41.789945 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 11 00:10:41.789996 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 11 00:10:41.790047 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 11 00:10:41.790100 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 11 00:10:41.790152 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 11 00:10:41.790203 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 11 00:10:41.790275 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 11 00:10:41.790328 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 11 00:10:41.790379 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 11 00:10:41.790431 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 11 00:10:41.790482 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 11 00:10:41.790534 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 11 00:10:41.790587 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 11 00:10:41.790639 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 11 00:10:41.790690 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 11 00:10:41.790746 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 11 00:10:41.790797 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 11 00:10:41.790848 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Jul 11 00:10:41.790902 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 11 00:10:41.790954 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 11 00:10:41.791005 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 11 00:10:41.791056 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 11 00:10:41.791109 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 11 00:10:41.791161 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 11 00:10:41.791212 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 11 00:10:41.791792 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 11 00:10:41.791854 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 11 00:10:41.791908 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 11 00:10:41.792314 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 11 00:10:41.792375 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 11 00:10:41.792430 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 11 00:10:41.792483 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 11 00:10:41.792537 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 11 00:10:41.792589 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 11 00:10:41.792644 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 11 00:10:41.792699 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 11 00:10:41.792751 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 11 00:10:41.792809 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 11 00:10:41.792862 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 11 00:10:41.792919 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 11 00:10:41.792975 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 11 00:10:41.793028 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 11 00:10:41.793081 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 11 00:10:41.793153 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 11 00:10:41.793210 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 11 00:10:41.793622 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 11 00:10:41.793677 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 11 00:10:41.793724 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 11 00:10:41.793769 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 11 00:10:41.793820 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 11 00:10:41.793867 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 11 00:10:41.793916 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 11 00:10:41.793963 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 11 00:10:41.794008 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 11 00:10:41.794055 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 11 00:10:41.794102 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 11 00:10:41.794163 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 11 00:10:41.794215 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] Jul 11 00:10:41.794277 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 11 00:10:41.794329 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 11 00:10:41.794381 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 11 00:10:41.794429 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 11 00:10:41.794475 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 11 00:10:41.794526 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 11 00:10:41.794573 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 11 00:10:41.794623 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 11 00:10:41.794674 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 11 00:10:41.794722 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 11 00:10:41.794773 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 11 00:10:41.794822 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 11 00:10:41.794874 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 11 00:10:41.794925 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 11 00:10:41.794989 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 11 00:10:41.795038 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 11 00:10:41.795089 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 11 00:10:41.795137 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 11 00:10:41.795199 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 11 00:10:41.795256 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 11 00:10:41.795306 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 11 00:10:41.795357 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 11 00:10:41.795405 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 11 00:10:41.795452 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 11 00:10:41.795504 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 11 00:10:41.795553 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 11 00:10:41.795605 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 11 00:10:41.795657 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 11 00:10:41.795704 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 11 00:10:41.795756 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 11 00:10:41.795804 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 11 00:10:41.795859 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 11 00:10:41.795910 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 11 00:10:41.795976 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 11 00:10:41.796025 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 11 00:10:41.796076 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 11 00:10:41.796125 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 11 00:10:41.796176 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 11 00:10:41.796224 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 11 00:10:41.796770 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 11 00:10:41.796830 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 11 00:10:41.796881 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 11 00:10:41.796929 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 11 00:10:41.796981 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jul 11 00:10:41.797034 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 11 00:10:41.797086 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 11 00:10:41.797139 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 11 00:10:41.797187 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 11 00:10:41.797239 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 11 00:10:41.797303 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 11 00:10:41.797356 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 11 00:10:41.797404 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 11 00:10:41.797459 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 11 00:10:41.797507 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 11 00:10:41.797562 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 11 00:10:41.797610 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 11 00:10:41.797661 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 11 00:10:41.797710 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 11 00:10:41.797760 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 11 00:10:41.797811 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 11 00:10:41.797859 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 11 00:10:41.797907 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 11 00:10:41.797958 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 11 00:10:41.798006 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 11 00:10:41.798059 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 11 00:10:41.798108 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 11 00:10:41.798160 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 11 00:10:41.798209 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 11 00:10:41.798303 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 11 00:10:41.798352 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 11 00:10:41.798405 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 11 00:10:41.798453 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 11 00:10:41.798506 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 11 00:10:41.798769 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 11 00:10:41.798831 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 11 00:10:41.798841 kernel: PCI: CLS 32 bytes, default 64 Jul 11 00:10:41.798849 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 11 00:10:41.798858 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 11 
00:10:41.798864 kernel: clocksource: Switched to clocksource tsc Jul 11 00:10:41.798871 kernel: Initialise system trusted keyrings Jul 11 00:10:41.798877 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 11 00:10:41.798883 kernel: Key type asymmetric registered Jul 11 00:10:41.798889 kernel: Asymmetric key parser 'x509' registered Jul 11 00:10:41.798896 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 11 00:10:41.798902 kernel: io scheduler mq-deadline registered Jul 11 00:10:41.798908 kernel: io scheduler kyber registered Jul 11 00:10:41.798916 kernel: io scheduler bfq registered Jul 11 00:10:41.798972 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 11 00:10:41.799025 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799080 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 11 00:10:41.799132 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799185 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 11 00:10:41.799240 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799411 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 11 00:10:41.799484 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799538 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 11 00:10:41.799589 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799642 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 11 00:10:41.799692 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799749 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 11 00:10:41.799799 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799851 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 11 00:10:41.799903 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.799987 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 11 00:10:41.800037 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.800091 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 11 00:10:41.800142 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.800194 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 11 00:10:41.800399 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.800485 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 11 00:10:41.800759 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.800850 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 11 00:10:41.800907 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.800967 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 11 00:10:41.801020 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.801075 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 11 00:10:41.801132 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.801187 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 11 00:10:41.801240 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.801515 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 11 00:10:41.801571 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.801626 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 11 00:10:41.801679 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802064 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 11 00:10:41.802125 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802181 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 11 00:10:41.802234 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802303 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 11 00:10:41.802359 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802417 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 11 00:10:41.802471 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802526 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 11 00:10:41.802578 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802633 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 11 00:10:41.802688 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802742 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 11 00:10:41.802795 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802848 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 11 00:10:41.802900 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.802954 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 11 00:10:41.803010 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.803063 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 11 00:10:41.803116 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.803170 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 11 00:10:41.803223 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.803337 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 11 00:10:41.803393 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.803445 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 11 00:10:41.803498 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.803550 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 11 00:10:41.803602 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 11 00:10:41.803613 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 11 00:10:41.803620 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 11 00:10:41.803627 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 11 00:10:41.803633 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 11 00:10:41.803640 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 11 00:10:41.803646 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 11 00:10:41.803699 kernel: rtc_cmos 00:01: registered as rtc0 Jul 11 00:10:41.803748 kernel: rtc_cmos 00:01: setting system clock to 2025-07-11T00:10:41 UTC (1752192641) Jul 11 00:10:41.803798 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 11 00:10:41.803807 kernel: intel_pstate: CPU model not supported Jul 11 00:10:41.803813 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 11 00:10:41.803820 kernel: NET: Registered PF_INET6 protocol family Jul 11 00:10:41.803826 kernel: Segment Routing with IPv6 Jul 11 00:10:41.803832 kernel: In-situ OAM (IOAM) with IPv6 Jul 11 00:10:41.803838 kernel: NET: Registered PF_PACKET protocol family Jul 11 00:10:41.803845 kernel: Key type dns_resolver registered Jul 11 00:10:41.803851 kernel: IPI shorthand broadcast: enabled Jul 11 00:10:41.803859 kernel: sched_clock: Marking stable (915393455, 226337408)->(1204330014, -62599151) Jul 11 00:10:41.803866 kernel: registered taskstats version 1 Jul 11 00:10:41.803872 kernel: Loading compiled-in X.509 certificates Jul 11 00:10:41.803878 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 5956f0842928c96096c398e9db55919cd236a39f' Jul 11 00:10:41.803884 kernel: Key type .fscrypt registered Jul 11 00:10:41.803890 kernel: Key type fscrypt-provisioning registered Jul 11 00:10:41.803897 
kernel: ima: No TPM chip found, activating TPM-bypass! Jul 11 00:10:41.803903 kernel: ima: Allocated hash algorithm: sha1 Jul 11 00:10:41.803910 kernel: ima: No architecture policies found Jul 11 00:10:41.803916 kernel: clk: Disabling unused clocks Jul 11 00:10:41.803927 kernel: Freeing unused kernel image (initmem) memory: 42872K Jul 11 00:10:41.803933 kernel: Write protecting the kernel read-only data: 36864k Jul 11 00:10:41.803940 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Jul 11 00:10:41.803946 kernel: Run /init as init process Jul 11 00:10:41.803952 kernel: with arguments: Jul 11 00:10:41.803959 kernel: /init Jul 11 00:10:41.803965 kernel: with environment: Jul 11 00:10:41.803971 kernel: HOME=/ Jul 11 00:10:41.803978 kernel: TERM=linux Jul 11 00:10:41.803985 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 11 00:10:41.803992 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 11 00:10:41.804000 systemd[1]: Detected virtualization vmware. Jul 11 00:10:41.804007 systemd[1]: Detected architecture x86-64. Jul 11 00:10:41.804014 systemd[1]: Running in initrd. Jul 11 00:10:41.804020 systemd[1]: No hostname configured, using default hostname. Jul 11 00:10:41.804028 systemd[1]: Hostname set to . Jul 11 00:10:41.804035 systemd[1]: Initializing machine ID from random generator. Jul 11 00:10:41.804041 systemd[1]: Queued start job for default target initrd.target. Jul 11 00:10:41.804048 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:10:41.804054 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:10:41.804061 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 11 00:10:41.804068 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:10:41.804075 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 11 00:10:41.804082 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 11 00:10:41.804090 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 11 00:10:41.804097 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 11 00:10:41.804103 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:10:41.804110 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:10:41.804116 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:10:41.804123 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:10:41.804131 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:10:41.804138 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:10:41.804144 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:10:41.804151 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:10:41.804157 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jul 11 00:10:41.804164 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 11 00:10:41.804170 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:10:41.804177 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:10:41.804187 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:10:41.804201 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:10:41.804210 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 11 00:10:41.804220 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:10:41.804230 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 11 00:10:41.804241 systemd[1]: Starting systemd-fsck-usr.service... Jul 11 00:10:41.804263 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:10:41.804270 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:10:41.804276 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:10:41.804285 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 11 00:10:41.804305 systemd-journald[215]: Collecting audit messages is disabled. Jul 11 00:10:41.804323 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:10:41.804329 systemd[1]: Finished systemd-fsck-usr.service. Jul 11 00:10:41.804338 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 11 00:10:41.804345 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 00:10:41.804352 kernel: Bridge firewalling registered Jul 11 00:10:41.804358 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:10:41.804365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:10:41.804373 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:10:41.804380 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:10:41.804387 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:10:41.804394 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:10:41.804401 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:10:41.804408 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:10:41.804415 systemd-journald[215]: Journal started Jul 11 00:10:41.804431 systemd-journald[215]: Runtime Journal (/run/log/journal/8be19981d7ad44bb9fa3689d66c4d0c7) is 4.8M, max 38.6M, 33.8M free. Jul 11 00:10:41.744289 systemd-modules-load[216]: Inserted module 'overlay' Jul 11 00:10:41.769055 systemd-modules-load[216]: Inserted module 'br_netfilter' Jul 11 00:10:41.806462 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:10:41.806732 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:10:41.811346 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 11 00:10:41.813324 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jul 11 00:10:41.818209 dracut-cmdline[245]: dracut-dracut-053 Jul 11 00:10:41.819654 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:10:41.820049 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1 Jul 11 00:10:41.826369 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:10:41.843678 systemd-resolved[261]: Positive Trust Anchors: Jul 11 00:10:41.843689 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:10:41.843711 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:10:41.846301 systemd-resolved[261]: Defaulting to hostname 'linux'. Jul 11 00:10:41.847028 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:10:41.847515 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:10:41.871278 kernel: SCSI subsystem initialized Jul 11 00:10:41.878262 kernel: Loading iSCSI transport class v2.0-870. Jul 11 00:10:41.886265 kernel: iscsi: registered transport (tcp) Jul 11 00:10:41.900613 kernel: iscsi: registered transport (qla4xxx) Jul 11 00:10:41.900666 kernel: QLogic iSCSI HBA Driver Jul 11 00:10:41.921660 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 11 00:10:41.930425 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 11 00:10:41.947051 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 11 00:10:41.947115 kernel: device-mapper: uevent: version 1.0.3 Jul 11 00:10:41.947125 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 11 00:10:41.979272 kernel: raid6: avx2x4 gen() 48754 MB/s Jul 11 00:10:41.996267 kernel: raid6: avx2x2 gen() 51514 MB/s Jul 11 00:10:42.013497 kernel: raid6: avx2x1 gen() 43524 MB/s Jul 11 00:10:42.013554 kernel: raid6: using algorithm avx2x2 gen() 51514 MB/s Jul 11 00:10:42.031517 kernel: raid6: .... xor() 30565 MB/s, rmw enabled Jul 11 00:10:42.031584 kernel: raid6: using avx2x2 recovery algorithm Jul 11 00:10:42.045264 kernel: xor: automatically using best checksumming function avx Jul 11 00:10:42.146262 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 11 00:10:42.151293 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:10:42.156330 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:10:42.163609 systemd-udevd[433]: Using default interface naming scheme 'v255'. 
Jul 11 00:10:42.166113 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:10:42.171351 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 11 00:10:42.178018 dracut-pre-trigger[436]: rd.md=0: removing MD RAID activation Jul 11 00:10:42.193573 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:10:42.201354 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:10:42.274523 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:10:42.279906 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 11 00:10:42.291144 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 11 00:10:42.292297 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 00:10:42.292669 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:10:42.293016 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:10:42.298040 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 11 00:10:42.306426 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:10:42.347308 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 11 00:10:42.353079 kernel: vmw_pvscsi: using 64bit dma Jul 11 00:10:42.353115 kernel: vmw_pvscsi: max_id: 16 Jul 11 00:10:42.353124 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 11 00:10:42.357282 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jul 11 00:10:42.361590 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 11 00:10:42.361734 kernel: cryptd: max_cpu_qlen set to 1000 Jul 11 00:10:42.365969 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 11 00:10:42.366110 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 11 00:10:42.366120 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 11 00:10:42.366128 kernel: vmw_pvscsi: using MSI-X Jul 11 00:10:42.370302 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 11 00:10:42.372814 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 11 00:10:42.372946 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 11 00:10:42.382831 kernel: AVX2 version of gcm_enc/dec engaged. Jul 11 00:10:42.382871 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 11 00:10:42.381499 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:10:42.381574 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:10:42.383105 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:10:42.383265 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:10:42.383484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:10:42.383936 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:10:42.386257 kernel: libata version 3.00 loaded. Jul 11 00:10:42.388283 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 11 00:10:42.388392 kernel: AES CTR mode by8 optimization enabled Jul 11 00:10:42.388858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 11 00:10:42.389346 kernel: scsi host1: ata_piix Jul 11 00:10:42.390256 kernel: scsi host2: ata_piix Jul 11 00:10:42.392060 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 11 00:10:42.392077 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 11 00:10:42.402606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:10:42.411477 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:10:42.424174 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:10:42.561265 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 11 00:10:42.565259 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 11 00:10:42.579667 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 11 00:10:42.579840 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 11 00:10:42.579910 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 11 00:10:42.579992 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 11 00:10:42.580057 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 11 00:10:42.582027 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 11 00:10:42.582133 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 11 00:10:42.584745 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 11 00:10:42.584762 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 11 00:10:42.594330 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 11 00:10:42.623311 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (491) Jul 11 00:10:42.623560 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jul 11 00:10:42.629260 kernel: BTRFS: device fsid 54fb9359-b495-4b0c-b313-b0e2955e4a38 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (492) Jul 11 00:10:42.629770 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jul 11 00:10:42.632711 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 11 00:10:42.634965 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jul 11 00:10:42.635107 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jul 11 00:10:42.639343 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 11 00:10:42.665263 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 11 00:10:42.671272 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 11 00:10:43.673299 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 11 00:10:43.673909 disk-uuid[588]: The operation has completed successfully. Jul 11 00:10:43.710927 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 11 00:10:43.710987 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 11 00:10:43.722361 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 11 00:10:43.724405 sh[605]: Success Jul 11 00:10:43.733261 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 11 00:10:43.774984 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 11 00:10:43.780035 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jul 11 00:10:43.780368 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 11 00:10:43.794636 kernel: BTRFS info (device dm-0): first mount of filesystem 54fb9359-b495-4b0c-b313-b0e2955e4a38 Jul 11 00:10:43.794664 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:10:43.794673 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 11 00:10:43.796501 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 11 00:10:43.796514 kernel: BTRFS info (device dm-0): using free space tree Jul 11 00:10:43.804259 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 11 00:10:43.804872 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 11 00:10:43.814312 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jul 11 00:10:43.815395 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 11 00:10:43.832370 kernel: BTRFS info (device sda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:10:43.832402 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:10:43.834261 kernel: BTRFS info (device sda6): using free space tree Jul 11 00:10:43.838383 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 11 00:10:43.843500 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 11 00:10:43.845296 kernel: BTRFS info (device sda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:10:43.849630 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 11 00:10:43.856357 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 11 00:10:43.880051 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 11 00:10:43.885684 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 11 00:10:43.927204 ignition[665]: Ignition 2.19.0 Jul 11 00:10:43.927211 ignition[665]: Stage: fetch-offline Jul 11 00:10:43.927230 ignition[665]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:10:43.927239 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 11 00:10:43.929549 ignition[665]: parsed url from cmdline: "" Jul 11 00:10:43.929553 ignition[665]: no config URL provided Jul 11 00:10:43.929557 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Jul 11 00:10:43.929564 ignition[665]: no config at "/usr/lib/ignition/user.ign" Jul 11 00:10:43.929914 ignition[665]: config successfully fetched Jul 11 00:10:43.930117 ignition[665]: parsing config with SHA512: 7ca530a1a01ad213fb2da54a7ae179db590ebd24d662bc5374b0178f5a6bcec3ef5c0b93e61524971fb3c25b96f4d3d1bd28a7fb12d504d1df0d8d677ac8b301 Jul 11 00:10:43.932589 unknown[665]: fetched base config from "system" Jul 11 00:10:43.932712 unknown[665]: fetched user config from "vmware" Jul 11 00:10:43.933059 ignition[665]: fetch-offline: fetch-offline passed Jul 11 00:10:43.933221 ignition[665]: Ignition finished successfully Jul 11 00:10:43.933901 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:10:43.951970 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:10:43.955335 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 11 00:10:43.967229 systemd-networkd[798]: lo: Link UP Jul 11 00:10:43.967235 systemd-networkd[798]: lo: Gained carrier Jul 11 00:10:43.968040 systemd-networkd[798]: Enumeration completed Jul 11 00:10:43.968201 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 00:10:43.968311 systemd-networkd[798]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 11 00:10:43.968433 systemd[1]: Reached target network.target - Network. Jul 11 00:10:43.968529 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 11 00:10:43.972206 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 11 00:10:43.972332 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 11 00:10:43.972085 systemd-networkd[798]: ens192: Link UP Jul 11 00:10:43.972087 systemd-networkd[798]: ens192: Gained carrier Jul 11 00:10:43.977066 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 11 00:10:43.984445 ignition[800]: Ignition 2.19.0 Jul 11 00:10:43.984451 ignition[800]: Stage: kargs Jul 11 00:10:43.984551 ignition[800]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:10:43.984557 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 11 00:10:43.985069 ignition[800]: kargs: kargs passed Jul 11 00:10:43.985092 ignition[800]: Ignition finished successfully Jul 11 00:10:43.986120 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 11 00:10:43.990414 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 11 00:10:43.997697 ignition[807]: Ignition 2.19.0 Jul 11 00:10:43.997703 ignition[807]: Stage: disks Jul 11 00:10:43.997802 ignition[807]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:10:43.997808 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 11 00:10:43.998379 ignition[807]: disks: disks passed Jul 11 00:10:43.998402 ignition[807]: Ignition finished successfully Jul 11 00:10:43.999084 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 11 00:10:43.999520 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 11 00:10:43.999802 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 11 00:10:44.000056 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:10:44.000290 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:10:44.000511 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:10:44.004446 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 11 00:10:44.015440 systemd-fsck[816]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 11 00:10:44.016411 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 11 00:10:44.020304 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 11 00:10:44.074991 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 11 00:10:44.075258 kernel: EXT4-fs (sda9): mounted filesystem 66ba5133-8c5a-461b-b2c1-a823c72af79b r/w with ordered data mode. Quota mode: none. Jul 11 00:10:44.075335 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 11 00:10:44.084349 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 00:10:44.085627 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jul 11 00:10:44.085888 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 11 00:10:44.085951 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 11 00:10:44.085966 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 00:10:44.088962 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 11 00:10:44.089579 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 11 00:10:44.092261 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (824) Jul 11 00:10:44.094597 kernel: BTRFS info (device sda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:10:44.094615 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:10:44.094623 kernel: BTRFS info (device sda6): using free space tree Jul 11 00:10:44.099338 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 11 00:10:44.099940 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 11 00:10:44.119017 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory Jul 11 00:10:44.121890 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory Jul 11 00:10:44.124021 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory Jul 11 00:10:44.126110 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory Jul 11 00:10:44.176878 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 11 00:10:44.184374 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 11 00:10:44.186731 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 11 00:10:44.190257 kernel: BTRFS info (device sda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:10:44.200990 ignition[937]: INFO : Ignition 2.19.0 Jul 11 00:10:44.201417 ignition[937]: INFO : Stage: mount Jul 11 00:10:44.201632 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:10:44.201781 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 11 00:10:44.202646 ignition[937]: INFO : mount: mount passed Jul 11 00:10:44.202646 ignition[937]: INFO : Ignition finished successfully Jul 11 00:10:44.203526 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 11 00:10:44.207358 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 11 00:10:44.207548 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 11 00:10:44.310967 systemd-resolved[261]: Detected conflict on linux IN A 139.178.70.105 Jul 11 00:10:44.310977 systemd-resolved[261]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Jul 11 00:10:44.793189 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 11 00:10:44.799385 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 11 00:10:44.840275 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949) Jul 11 00:10:44.842274 kernel: BTRFS info (device sda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:10:44.842309 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:10:44.844326 kernel: BTRFS info (device sda6): using free space tree Jul 11 00:10:44.848262 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 11 00:10:44.849302 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 11 00:10:44.869091 ignition[966]: INFO : Ignition 2.19.0 Jul 11 00:10:44.869091 ignition[966]: INFO : Stage: files Jul 11 00:10:44.869091 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:10:44.869091 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 11 00:10:44.869091 ignition[966]: DEBUG : files: compiled without relabeling support, skipping Jul 11 00:10:44.869876 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 11 00:10:44.870023 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 11 00:10:44.872348 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 11 00:10:44.872561 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 11 00:10:44.872891 unknown[966]: wrote ssh authorized keys file for user: core Jul 11 00:10:44.873104 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 11 00:10:44.874743 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 11 00:10:44.874920 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 11 00:10:44.921983 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 11 00:10:45.056794 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 11 00:10:45.056794 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 11 00:10:45.057218 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 11 00:10:45.057218 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:10:45.057218 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:10:45.057218 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:10:45.057218 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:10:45.057218 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:10:45.057218 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:10:45.058325 ignition[966]: INFO : files: createFilesystemsFiles: 
createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:10:45.058325 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:10:45.058325 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:10:45.058325 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:10:45.058325 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:10:45.058325 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 11 00:10:45.591195 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 11 00:10:45.842339 systemd-networkd[798]: ens192: Gained IPv6LL Jul 11 00:10:46.718764 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:10:46.719203 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 11 00:10:46.719203 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:10:46.719203 ignition[966]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 11 00:10:46.720879 ignition[966]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 00:10:46.758577 ignition[966]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:10:46.761268 ignition[966]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:10:46.761268 ignition[966]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 00:10:46.761268 ignition[966]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 11 00:10:46.761268 
ignition[966]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 11 00:10:46.761268 ignition[966]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:10:46.761268 ignition[966]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:10:46.761268 ignition[966]: INFO : files: files passed Jul 11 00:10:46.761268 ignition[966]: INFO : Ignition finished successfully Jul 11 00:10:46.762384 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 11 00:10:46.767372 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 11 00:10:46.769024 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 11 00:10:46.774824 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 11 00:10:46.774882 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 11 00:10:46.782336 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:10:46.782336 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:10:46.783112 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:10:46.784359 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:10:46.785013 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 11 00:10:46.788394 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 11 00:10:46.812058 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 11 00:10:46.812147 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 11 00:10:46.812479 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 11 00:10:46.812615 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 11 00:10:46.812838 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 11 00:10:46.813534 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 11 00:10:46.824403 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:10:46.829458 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 11 00:10:46.837570 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:10:46.837818 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:10:46.838154 systemd[1]: Stopped target timers.target - Timer Units. Jul 11 00:10:46.838436 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 11 00:10:46.838565 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:10:46.838986 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 11 00:10:46.839197 systemd[1]: Stopped target basic.target - Basic System. Jul 11 00:10:46.839398 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 11 00:10:46.839591 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 00:10:46.840039 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jul 11 00:10:46.840267 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 11 00:10:46.840502 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 00:10:46.840729 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 11 00:10:46.840944 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 11 00:10:46.841145 systemd[1]: Stopped target swap.target - Swaps. Jul 11 00:10:46.841321 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 11 00:10:46.841434 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:10:46.841796 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:10:46.842003 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:10:46.842179 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 11 00:10:46.842258 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:10:46.842526 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:10:46.842621 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 11 00:10:46.842886 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:10:46.842990 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:10:46.843361 systemd[1]: Stopped target paths.target - Path Units. Jul 11 00:10:46.843524 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:10:46.847311 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:10:46.847703 systemd[1]: Stopped target slices.target - Slice Units. Jul 11 00:10:46.847963 systemd[1]: Stopped target sockets.target - Socket Units. Jul 11 00:10:46.848151 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:10:46.848241 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:10:46.848505 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 00:10:46.848580 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:10:46.848852 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 00:10:46.848954 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:10:46.849221 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:10:46.849328 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 11 00:10:46.853425 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 11 00:10:46.855495 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 11 00:10:46.855689 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:10:46.855835 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:10:46.856170 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:10:46.856355 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:10:46.861536 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:10:46.861633 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 11 00:10:46.865275 ignition[1020]: INFO : Ignition 2.19.0 Jul 11 00:10:46.865275 ignition[1020]: INFO : Stage: umount Jul 11 00:10:46.865275 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:10:46.865275 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 11 00:10:46.866425 ignition[1020]: INFO : umount: umount passed Jul 11 00:10:46.866610 ignition[1020]: INFO : Ignition finished successfully Jul 11 00:10:46.867380 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:10:46.867609 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 11 00:10:46.868194 systemd[1]: Stopped target network.target - Network. Jul 11 00:10:46.868568 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 00:10:46.868750 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 11 00:10:46.868881 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:10:46.868907 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 11 00:10:46.869021 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:10:46.869044 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 11 00:10:46.869153 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 11 00:10:46.869174 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 11 00:10:46.870214 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 11 00:10:46.870387 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 11 00:10:46.874134 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:10:46.874219 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 11 00:10:46.874812 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:10:46.874841 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:10:46.878616 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 11 00:10:46.878723 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:10:46.878759 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:10:46.878911 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jul 11 00:10:46.878935 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 11 00:10:46.879110 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:10:46.879889 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:10:46.883731 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 00:10:46.883973 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 11 00:10:46.885748 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:10:46.886085 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:10:46.886414 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:10:46.886441 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 11 00:10:46.886860 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 11 00:10:46.886885 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:10:46.887546 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 11 00:10:46.887625 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:10:46.888539 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:10:46.888577 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 11 00:10:46.888711 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 00:10:46.888730 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:10:46.888841 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:10:46.888867 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:10:46.889035 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:10:46.889056 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 11 00:10:46.889197 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:10:46.889221 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:10:46.893428 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 11 00:10:46.893780 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 11 00:10:46.893820 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:10:46.893962 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 11 00:10:46.893990 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:10:46.894110 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 00:10:46.894133 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:10:46.894257 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:10:46.894280 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:10:46.894581 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 00:10:46.894664 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 11 00:10:46.897423 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:10:46.897650 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 11 00:10:46.944199 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 11 00:10:46.944279 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 11 00:10:46.944792 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 11 00:10:46.944913 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:10:46.944948 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 11 00:10:46.947430 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 11 00:10:46.952785 systemd[1]: Switching root. Jul 11 00:10:46.990455 systemd-journald[215]: Journal stopped Jul 11 00:10:48.747136 systemd-journald[215]: Received SIGTERM from PID 1 (systemd). 
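At this point the initramfs hands control to the real root: journald in the initrd stops, PID 1 switches root into /sysroot, and the journal comes back about two seconds later under the freshly loaded SELinux policy. The pivot is normally driven by systemd's initrd-switch-root.service, which looks roughly like the sketch below (recalled from the stock upstream unit, not read from this image, so treat the exact directives as approximate).

    [Unit]
    Description=Switch Root
    DefaultDependencies=no
    AllowIsolate=yes

    [Service]
    Type=oneshot
    # Ask PID 1 to pivot into the prepared root without blocking this unit.
    ExecStart=systemctl --no-block switch-root /sysroot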
Jul 11 00:10:48.747169 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 00:10:48.747178 kernel: SELinux: policy capability open_perms=1 Jul 11 00:10:48.747184 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 00:10:48.747190 kernel: SELinux: policy capability always_check_network=0 Jul 11 00:10:48.747196 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 00:10:48.747204 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 00:10:48.747211 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 00:10:48.747217 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 00:10:48.747223 systemd[1]: Successfully loaded SELinux policy in 38.360ms. Jul 11 00:10:48.747231 kernel: audit: type=1403 audit(1752192647.841:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 11 00:10:48.747238 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.845ms. Jul 11 00:10:48.747381 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 11 00:10:48.747403 systemd[1]: Detected virtualization vmware. Jul 11 00:10:48.747411 systemd[1]: Detected architecture x86-64. Jul 11 00:10:48.747418 systemd[1]: Detected first boot. Jul 11 00:10:48.747426 systemd[1]: Initializing machine ID from random generator. Jul 11 00:10:48.747434 zram_generator::config[1064]: No configuration found. Jul 11 00:10:48.747442 systemd[1]: Populated /etc with preset unit settings. Jul 11 00:10:48.747452 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 11 00:10:48.747460 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Jul 11 00:10:48.747467 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 11 00:10:48.747474 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 11 00:10:48.747481 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 11 00:10:48.747490 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 11 00:10:48.747621 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 11 00:10:48.747632 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 11 00:10:48.747640 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 11 00:10:48.747647 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 11 00:10:48.747654 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 11 00:10:48.747664 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 11 00:10:48.747673 systemd[1]: Created slice user.slice - User and Session Slice. Jul 11 00:10:48.747681 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:10:48.747688 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
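The two "Ignoring unknown escape sequences" messages above come from /etc/systemd/system/coreos-metadata.service line 11: the ExecStart shell snippet embeds \K and \d for grep -Po, and systemd's unit parser tries to interpret those backslash sequences before the shell ever runs. Doubling the backslashes (\\K, \\d) in the unit file is one common fix; another is to move the pipeline into a helper script so the unit contains no backslashes at all, as in the hypothetical sketch below (the script path and the OUTPUT value are assumptions, the pipeline itself is the one quoted in the journal).

    #!/bin/sh
    # /usr/local/bin/custom-metadata (illustrative): write the metadata file
    # that coreos-metadata.service currently builds inline in its ExecStart.
    OUTPUT=/run/metadata/flatcar   # assumed; the real unit supplies ${OUTPUT}
    {
      echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep 'inet 10.' | grep -Po 'inet \K[\d.]+')"
      echo "COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v 'inet 10.' | grep -Po 'inet \K[\d.]+')"
    } > "${OUTPUT}"

The unit's ExecStart then shrinks to ExecStart=/usr/local/bin/custom-metadata and the parser warning disappears.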
Jul 11 00:10:48.747695 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 11 00:10:48.747702 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 11 00:10:48.747710 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 11 00:10:48.747717 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:10:48.747815 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 11 00:10:48.747828 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:10:48.747836 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 11 00:10:48.747846 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 11 00:10:48.747853 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 11 00:10:48.747860 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 11 00:10:48.747868 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:10:48.747875 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:10:48.747882 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:10:48.747892 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:10:48.747900 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 11 00:10:48.747907 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 11 00:10:48.747915 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:10:48.747924 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:10:48.747936 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:10:48.747943 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 11 00:10:48.747951 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 11 00:10:48.748025 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 11 00:10:48.748038 systemd[1]: Mounting media.mount - External Media Directory... Jul 11 00:10:48.748046 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:10:48.748054 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 11 00:10:48.748061 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 11 00:10:48.748071 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 11 00:10:48.748079 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 11 00:10:48.748087 systemd[1]: Reached target machines.target - Containers. Jul 11 00:10:48.748094 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 11 00:10:48.748102 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Jul 11 00:10:48.748109 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:10:48.748117 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 11 00:10:48.748124 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 11 00:10:48.748133 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:10:48.748142 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:10:48.748149 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 11 00:10:48.748156 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:10:48.748164 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 00:10:48.748172 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 11 00:10:48.748179 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 11 00:10:48.748688 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 11 00:10:48.748705 systemd[1]: Stopped systemd-fsck-usr.service. Jul 11 00:10:48.748717 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:10:48.748725 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:10:48.748732 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 00:10:48.748740 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 11 00:10:48.748748 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:10:48.748755 systemd[1]: verity-setup.service: Deactivated successfully. Jul 11 00:10:48.748763 systemd[1]: Stopped verity-setup.service. Jul 11 00:10:48.748770 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:10:48.748803 systemd-journald[1151]: Collecting audit messages is disabled. Jul 11 00:10:48.748822 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 11 00:10:48.748829 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 11 00:10:48.748837 systemd[1]: Mounted media.mount - External Media Directory. Jul 11 00:10:48.748846 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 11 00:10:48.748854 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 11 00:10:48.748862 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 11 00:10:48.748870 systemd-journald[1151]: Journal started Jul 11 00:10:48.748886 systemd-journald[1151]: Runtime Journal (/run/log/journal/d9d67bc29f144a81a31cea1ed29db368) is 4.8M, max 38.6M, 33.8M free. Jul 11 00:10:48.568897 systemd[1]: Queued start job for default target multi-user.target. Jul 11 00:10:48.587895 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 11 00:10:48.588269 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 11 00:10:48.749554 jq[1131]: true Jul 11 00:10:48.753365 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:10:48.754907 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:10:48.755221 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 00:10:48.755345 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 11 00:10:48.755609 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:10:48.755705 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jul 11 00:10:48.755980 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:10:48.756072 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:10:48.756774 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:10:48.757055 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 00:10:48.757674 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 11 00:10:48.767663 jq[1159]: true Jul 11 00:10:48.775695 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 00:10:48.779551 kernel: fuse: init (API version 7.39) Jul 11 00:10:48.785068 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 11 00:10:48.785234 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 11 00:10:48.785315 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:10:48.787010 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 11 00:10:48.790813 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 11 00:10:48.805190 kernel: loop: module loaded Jul 11 00:10:48.803406 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 11 00:10:48.803648 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:10:48.805733 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 11 00:10:48.811504 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 11 00:10:48.811693 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:10:48.814621 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 11 00:10:48.816409 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:10:48.820446 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 11 00:10:48.824433 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 00:10:48.825551 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 00:10:48.826287 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 11 00:10:48.826579 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:10:48.827066 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:10:48.827430 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 11 00:10:48.828281 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 11 00:10:48.850135 kernel: ACPI: bus type drm_connector registered Jul 11 00:10:48.839456 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 11 00:10:48.839626 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:10:48.839845 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:10:48.839969 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
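The modprobe@configfs, modprobe@dm_mod, modprobe@efi_pstore, modprobe@fuse, modprobe@loop and modprobe@drm jobs finishing above are instances of systemd's modprobe@.service template, which does nothing more than load the kernel module named by the instance suffix. Approximately (paraphrasing the stock template, not this image's copy):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    # The leading "-" lets the unit succeed even if modprobe fails;
    # -a loads all listed modules, -b honours blacklists, -q stays quiet
    # when the module does not exist.
    ExecStart=-/usr/sbin/modprobe -abq %i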
Jul 11 00:10:48.843763 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 11 00:10:48.860509 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 11 00:10:48.870165 systemd-journald[1151]: Time spent on flushing to /var/log/journal/d9d67bc29f144a81a31cea1ed29db368 is 39.459ms for 1836 entries. Jul 11 00:10:48.870165 systemd-journald[1151]: System Journal (/var/log/journal/d9d67bc29f144a81a31cea1ed29db368) is 8.0M, max 584.8M, 576.8M free. Jul 11 00:10:48.992106 systemd-journald[1151]: Received client request to flush runtime journal. Jul 11 00:10:48.992156 kernel: loop0: detected capacity change from 0 to 2976 Jul 11 00:10:48.894152 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:10:48.919494 ignition[1178]: Ignition 2.19.0 Jul 11 00:10:48.951353 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Jul 11 00:10:48.919748 ignition[1178]: deleting config from guestinfo properties Jul 11 00:10:48.953303 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 11 00:10:48.947782 ignition[1178]: Successfully deleted config Jul 11 00:10:48.954891 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 11 00:10:48.963415 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 11 00:10:48.977832 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:10:48.978585 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Jul 11 00:10:48.978594 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Jul 11 00:10:48.988844 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 11 00:10:48.994658 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:10:48.996378 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 11 00:10:48.997119 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 11 00:10:49.009396 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 11 00:10:49.027040 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 00:10:49.028033 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 11 00:10:49.042296 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 00:10:49.069793 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 11 00:10:49.072375 kernel: loop1: detected capacity change from 0 to 142488 Jul 11 00:10:49.078286 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:10:49.099403 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Jul 11 00:10:49.099422 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Jul 11 00:10:49.106492 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
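The ignition[1178] entries above ("deleting config from guestinfo properties") are the delete-config stage: once the config has been applied, Ignition removes it from the VM's guestinfo so the provisioning data does not linger in the hypervisor metadata. On VMware the config typically arrives through the documented guestinfo keys, for example (values are placeholders):

    guestinfo.ignition.config.data          = "<base64-encoded config.ign>"
    guestinfo.ignition.config.data.encoding = "base64"

With govc this can be set along the lines of govc vm.change -vm <name> -e "guestinfo.ignition.config.data=$(base64 -w0 config.ign)" -e "guestinfo.ignition.config.data.encoding=base64" (illustrative invocation).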
Jul 11 00:10:49.139352 kernel: loop2: detected capacity change from 0 to 140768 Jul 11 00:10:49.210275 kernel: loop3: detected capacity change from 0 to 224512 Jul 11 00:10:49.283293 kernel: loop4: detected capacity change from 0 to 2976 Jul 11 00:10:49.328388 kernel: loop5: detected capacity change from 0 to 142488 Jul 11 00:10:49.362300 kernel: loop6: detected capacity change from 0 to 140768 Jul 11 00:10:49.492485 kernel: loop7: detected capacity change from 0 to 224512 Jul 11 00:10:49.525594 (sd-merge)[1237]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Jul 11 00:10:49.525953 (sd-merge)[1237]: Merged extensions into '/usr'. Jul 11 00:10:49.535598 systemd[1]: Reloading requested from client PID 1197 ('systemd-sysext') (unit systemd-sysext.service)... Jul 11 00:10:49.536365 systemd[1]: Reloading... Jul 11 00:10:49.579268 zram_generator::config[1259]: No configuration found. Jul 11 00:10:49.683918 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 11 00:10:49.700105 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:10:49.724715 ldconfig[1189]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 00:10:49.728185 systemd[1]: Reloading finished in 191 ms. Jul 11 00:10:49.750722 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 11 00:10:49.751041 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 11 00:10:49.751291 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 11 00:10:49.756347 systemd[1]: Starting ensure-sysext.service... Jul 11 00:10:49.758334 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:10:49.761464 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:10:49.764300 systemd[1]: Reloading requested from client PID 1320 ('systemctl') (unit ensure-sysext.service)... Jul 11 00:10:49.764310 systemd[1]: Reloading... Jul 11 00:10:49.780904 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 11 00:10:49.781112 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 11 00:10:49.781629 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 11 00:10:49.781800 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Jul 11 00:10:49.781841 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Jul 11 00:10:49.783586 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Jul 11 00:10:49.784830 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:10:49.784837 systemd-tmpfiles[1321]: Skipping /boot Jul 11 00:10:49.790966 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:10:49.790973 systemd-tmpfiles[1321]: Skipping /boot Jul 11 00:10:49.816315 zram_generator::config[1344]: No configuration found. 
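The (sd-merge) lines above show systemd-sysext overlaying four extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-vmware) onto /usr; the kubernetes image is the one Ignition downloaded and linked into /etc/extensions during the files stage. A few illustrative commands for inspecting that state on a running host:

    systemd-sysext status     # lists merged extension images and the hierarchies they cover
    systemd-sysext refresh    # unmerge and re-merge after adding or removing a .raw image
    ls /etc/extensions /opt/extensions/kubernetes    # where this host keeps its images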
Jul 11 00:10:49.920258 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 11 00:10:49.918539 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 11 00:10:49.926363 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1361) Jul 11 00:10:49.930261 kernel: ACPI: button: Power Button [PWRF] Jul 11 00:10:49.937496 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:10:49.973732 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 11 00:10:49.974019 systemd[1]: Reloading finished in 209 ms. Jul 11 00:10:49.985726 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:10:49.989485 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:10:50.000785 systemd[1]: Finished ensure-sysext.service. Jul 11 00:10:50.007608 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jul 11 00:10:50.007504 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 11 00:10:50.010306 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:10:50.015337 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:10:50.017063 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 11 00:10:50.019346 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:10:50.020347 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:10:50.021423 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:10:50.023661 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:10:50.023839 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:10:50.026351 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 11 00:10:50.028384 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 11 00:10:50.031018 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 00:10:50.038368 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:10:50.045375 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 11 00:10:50.047330 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 11 00:10:50.048237 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:10:50.048575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:10:50.048679 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:10:50.054280 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jul 11 00:10:50.055420 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
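The \x2d sequences in unit names such as dev-disk-by\x2dlabel-OEM.device and systemd-fsck@dev-disk-by\x2dlabel-OEM.service above are systemd's path escaping: "/" becomes "-", so a literal "-" inside a path component has to be encoded as \x2d. The mapping can be reproduced with systemd-escape:

    systemd-escape --path --suffix=device /dev/disk/by-label/OEM
    # prints: dev-disk-by\x2dlabel-OEM.device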
Jul 11 00:10:50.062270 kernel: Guest personality initialized and is active Jul 11 00:10:50.064269 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 11 00:10:50.064338 kernel: Initialized host personality Jul 11 00:10:50.074295 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jul 11 00:10:50.103076 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:10:50.103287 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:10:50.105625 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:10:50.105741 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:10:50.110083 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:10:50.110495 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:10:50.111054 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 11 00:10:50.112023 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 11 00:10:50.113041 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:10:50.113099 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:10:50.121342 kernel: mousedev: PS/2 mouse device common for all mice Jul 11 00:10:50.123972 (udev-worker)[1359]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 11 00:10:50.125609 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 11 00:10:50.126283 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 11 00:10:50.136492 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 11 00:10:50.136632 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:10:50.141500 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:10:50.142410 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 11 00:10:50.147358 augenrules[1476]: No rules Jul 11 00:10:50.147337 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:10:50.155419 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 11 00:10:50.155825 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 11 00:10:50.161089 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 11 00:10:50.183634 lvm[1491]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:10:50.197652 systemd-networkd[1443]: lo: Link UP Jul 11 00:10:50.205297 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 11 00:10:50.205458 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 11 00:10:50.200273 systemd-networkd[1443]: lo: Gained carrier Jul 11 00:10:50.201056 systemd-networkd[1443]: Enumeration completed Jul 11 00:10:50.201126 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 11 00:10:50.201284 systemd-networkd[1443]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jul 11 00:10:50.206880 systemd-networkd[1443]: ens192: Link UP Jul 11 00:10:50.207219 systemd-networkd[1443]: ens192: Gained carrier Jul 11 00:10:50.207425 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 11 00:10:50.211276 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 11 00:10:50.211742 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:10:50.218427 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 11 00:10:50.223525 lvm[1497]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:10:50.225845 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 11 00:10:50.226051 systemd[1]: Reached target time-set.target - System Time Set. Jul 11 00:10:50.228005 systemd-resolved[1444]: Positive Trust Anchors: Jul 11 00:10:50.228013 systemd-resolved[1444]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:10:50.228036 systemd-resolved[1444]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:10:50.231053 systemd-resolved[1444]: Defaulting to hostname 'linux'. Jul 11 00:10:50.232109 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:10:50.232321 systemd[1]: Reached target network.target - Network. Jul 11 00:10:50.232413 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:10:50.243157 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:10:50.243519 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:10:50.243743 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 11 00:10:50.243919 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 11 00:10:50.244202 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 11 00:10:50.244413 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 11 00:10:50.244570 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 11 00:10:50.244721 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 00:10:50.244743 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:10:50.244861 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:10:50.245844 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 11 00:10:50.247661 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 11 00:10:50.251746 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
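Above, ens192 is configured from the 00-vmware.network file written during the Ignition files stage and gains carrier, systemd-resolved loads its DNSSEC trust anchors and defaults to the hostname 'linux', and systemd-timesyncd is started; its first synchronization is what produces the roughly 86-second clock step (00:10:50 to 00:12:16) visible a few entries later. Illustrative commands for checking the same state interactively:

    networkctl status ens192        # link state, DHCP lease, and the .network file in use
    resolvectl status               # per-link DNS servers and DNSSEC trust-anchor state
    timedatectl timesync-status     # the NTP server and offset behind the clock jump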
Jul 11 00:10:50.252606 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 11 00:10:50.252862 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 11 00:10:50.253403 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:10:50.253554 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:10:50.253730 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:10:50.253755 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:10:50.254777 systemd[1]: Starting containerd.service - containerd container runtime... Jul 11 00:10:50.258205 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 11 00:10:50.259328 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 11 00:10:50.261363 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 11 00:10:50.261468 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 11 00:10:50.263336 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 11 00:10:50.266007 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 11 00:10:50.270572 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 11 00:10:50.272848 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 11 00:10:50.276448 jq[1507]: false Jul 11 00:10:50.282477 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 11 00:10:50.284211 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:10:50.284676 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 00:10:50.286421 systemd[1]: Starting update-engine.service - Update Engine... Jul 11 00:10:50.287315 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 11 00:10:50.292296 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jul 11 00:10:50.293449 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:10:50.293562 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 11 00:10:50.296458 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 00:10:50.297436 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 11 00:10:50.306113 dbus-daemon[1506]: [system] SELinux support is enabled Jul 11 00:10:50.306235 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 11 00:12:16.247272 systemd-resolved[1444]: Clock change detected. Flushing caches. Jul 11 00:12:16.247301 systemd-timesyncd[1448]: Contacted time server 23.142.248.8:123 (0.flatcar.pool.ntp.org). Jul 11 00:12:16.247325 systemd-timesyncd[1448]: Initial clock synchronization to Fri 2025-07-11 00:12:16.247244 UTC. 
Jul 11 00:12:16.257798 update_engine[1517]: I20250711 00:12:16.256440 1517 main.cc:92] Flatcar Update Engine starting Jul 11 00:12:16.257798 update_engine[1517]: I20250711 00:12:16.257444 1517 update_check_scheduler.cc:74] Next update check in 4m49s Jul 11 00:12:16.258390 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:12:16.258414 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 00:12:16.259074 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:12:16.259087 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 11 00:12:16.260850 systemd[1]: Started update-engine.service - Update Engine. Jul 11 00:12:16.264463 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 11 00:12:16.269716 jq[1518]: true Jul 11 00:12:16.270340 extend-filesystems[1508]: Found loop4 Jul 11 00:12:16.270618 extend-filesystems[1508]: Found loop5 Jul 11 00:12:16.270618 extend-filesystems[1508]: Found loop6 Jul 11 00:12:16.270618 extend-filesystems[1508]: Found loop7 Jul 11 00:12:16.270618 extend-filesystems[1508]: Found sda Jul 11 00:12:16.270618 extend-filesystems[1508]: Found sda1 Jul 11 00:12:16.270618 extend-filesystems[1508]: Found sda2 Jul 11 00:12:16.270618 extend-filesystems[1508]: Found sda3 Jul 11 00:12:16.270618 extend-filesystems[1508]: Found usr Jul 11 00:12:16.270618 extend-filesystems[1508]: Found sda4 Jul 11 00:12:16.270618 extend-filesystems[1508]: Found sda6 Jul 11 00:12:16.270618 extend-filesystems[1508]: Found sda7 Jul 11 00:12:16.270618 extend-filesystems[1508]: Found sda9 Jul 11 00:12:16.270618 extend-filesystems[1508]: Checking size of /dev/sda9 Jul 11 00:12:16.278273 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:12:16.278388 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 11 00:12:16.280145 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Jul 11 00:12:16.283596 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Jul 11 00:12:16.286376 (ntainerd)[1532]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 00:12:16.286819 tar[1521]: linux-amd64/LICENSE Jul 11 00:12:16.286819 tar[1521]: linux-amd64/helm Jul 11 00:12:16.291068 extend-filesystems[1508]: Old size kept for /dev/sda9 Jul 11 00:12:16.291211 jq[1539]: true Jul 11 00:12:16.291830 extend-filesystems[1508]: Found sr0 Jul 11 00:12:16.304325 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:12:16.304442 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 11 00:12:16.306112 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Jul 11 00:12:16.318951 systemd-logind[1514]: Watching system buttons on /dev/input/event1 (Power Button) Jul 11 00:12:16.320358 systemd-logind[1514]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 11 00:12:16.320464 systemd-logind[1514]: New seat seat0. Jul 11 00:12:16.320946 systemd[1]: Started systemd-logind.service - User Login Management. 
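Above, update_engine starts and schedules its next update check in 4m49s, while extend-filesystems enumerates the loop devices and sda partitions and then reports "Old size kept for /dev/sda9", meaning the root filesystem partition did not need to be grown. Roughly equivalent manual checks (illustrative):

    lsblk -f /dev/sda              # partition layout and filesystems, including sda9
    df -h /                        # current size and usage of the root filesystem
    update_engine_client -status   # update-engine state and the tracked release channel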
Jul 11 00:12:16.328108 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1356) Jul 11 00:12:16.349658 unknown[1541]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jul 11 00:12:16.357332 unknown[1541]: Core dump limit set to -1 Jul 11 00:12:16.366505 bash[1568]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:12:16.371146 kernel: NET: Registered PF_VSOCK protocol family Jul 11 00:12:16.370680 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 00:12:16.373143 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 00:12:16.412129 sshd_keygen[1536]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:12:16.453148 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 00:12:16.462242 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 00:12:16.471217 locksmithd[1534]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:12:16.472209 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:12:16.472344 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 00:12:16.478249 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 11 00:12:16.487640 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 00:12:16.496250 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 00:12:16.506439 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 11 00:12:16.506857 systemd[1]: Reached target getty.target - Login Prompts. Jul 11 00:12:16.600855 containerd[1532]: time="2025-07-11T00:12:16.600811307Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 11 00:12:16.623400 containerd[1532]: time="2025-07-11T00:12:16.623351789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:12:16.624938 containerd[1532]: time="2025-07-11T00:12:16.624916524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:12:16.624938 containerd[1532]: time="2025-07-11T00:12:16.624934458Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 11 00:12:16.624995 containerd[1532]: time="2025-07-11T00:12:16.624944136Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 11 00:12:16.625101 containerd[1532]: time="2025-07-11T00:12:16.625047218Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 11 00:12:16.625101 containerd[1532]: time="2025-07-11T00:12:16.625057385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 11 00:12:16.625101 containerd[1532]: time="2025-07-11T00:12:16.625096289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:12:16.625139 containerd[1532]: time="2025-07-11T00:12:16.625104385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 11 00:12:16.625315 containerd[1532]: time="2025-07-11T00:12:16.625202166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:12:16.625315 containerd[1532]: time="2025-07-11T00:12:16.625214581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 11 00:12:16.625315 containerd[1532]: time="2025-07-11T00:12:16.625224124Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:12:16.625315 containerd[1532]: time="2025-07-11T00:12:16.625229764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 11 00:12:16.625315 containerd[1532]: time="2025-07-11T00:12:16.625270830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:12:16.625538 containerd[1532]: time="2025-07-11T00:12:16.625388488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:12:16.625538 containerd[1532]: time="2025-07-11T00:12:16.625471861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:12:16.625538 containerd[1532]: time="2025-07-11T00:12:16.625480514Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 11 00:12:16.625538 containerd[1532]: time="2025-07-11T00:12:16.625521556Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 11 00:12:16.625596 containerd[1532]: time="2025-07-11T00:12:16.625548105Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:12:16.627055 containerd[1532]: time="2025-07-11T00:12:16.627041307Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 11 00:12:16.627081 containerd[1532]: time="2025-07-11T00:12:16.627066319Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 11 00:12:16.627081 containerd[1532]: time="2025-07-11T00:12:16.627075996Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 11 00:12:16.627117 containerd[1532]: time="2025-07-11T00:12:16.627084686Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 11 00:12:16.627117 containerd[1532]: time="2025-07-11T00:12:16.627092223Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 11 00:12:16.627505 containerd[1532]: time="2025-07-11T00:12:16.627155729Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 11 00:12:16.627505 containerd[1532]: time="2025-07-11T00:12:16.627283368Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 11 00:12:16.627505 containerd[1532]: time="2025-07-11T00:12:16.627335757Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 11 00:12:16.627505 containerd[1532]: time="2025-07-11T00:12:16.627344649Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 11 00:12:16.627505 containerd[1532]: time="2025-07-11T00:12:16.627351385Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 11 00:12:16.627505 containerd[1532]: time="2025-07-11T00:12:16.627359006Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 11 00:12:16.627505 containerd[1532]: time="2025-07-11T00:12:16.627366433Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 11 00:12:16.627505 containerd[1532]: time="2025-07-11T00:12:16.627373233Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 11 00:12:16.627505 containerd[1532]: time="2025-07-11T00:12:16.627380580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 11 00:12:16.627505 containerd[1532]: time="2025-07-11T00:12:16.627388367Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 11 00:12:16.627505 containerd[1532]: time="2025-07-11T00:12:16.627395747Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 11 00:12:16.627505 containerd[1532]: time="2025-07-11T00:12:16.627405622Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 11 00:12:16.627505 containerd[1532]: time="2025-07-11T00:12:16.627412184Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 11 00:12:16.627505 containerd[1532]: time="2025-07-11T00:12:16.627423345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:12:16.627691 containerd[1532]: time="2025-07-11T00:12:16.627433468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:12:16.627691 containerd[1532]: time="2025-07-11T00:12:16.627440453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:12:16.627691 containerd[1532]: time="2025-07-11T00:12:16.627447189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 11 00:12:16.627691 containerd[1532]: time="2025-07-11T00:12:16.627455006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 11 00:12:16.627691 containerd[1532]: time="2025-07-11T00:12:16.627462372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 11 00:12:16.627691 containerd[1532]: time="2025-07-11T00:12:16.627469069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:12:16.627691 containerd[1532]: time="2025-07-11T00:12:16.627476037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jul 11 00:12:16.627691 containerd[1532]: time="2025-07-11T00:12:16.627482563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 11 00:12:16.627691 containerd[1532]: time="2025-07-11T00:12:16.627490431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 11 00:12:16.627691 containerd[1532]: time="2025-07-11T00:12:16.627497236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:12:16.627691 containerd[1532]: time="2025-07-11T00:12:16.627503382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 11 00:12:16.627691 containerd[1532]: time="2025-07-11T00:12:16.627510239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 11 00:12:16.627691 containerd[1532]: time="2025-07-11T00:12:16.627518873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 11 00:12:16.627691 containerd[1532]: time="2025-07-11T00:12:16.627532079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 11 00:12:16.627691 containerd[1532]: time="2025-07-11T00:12:16.627541288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:12:16.627875 containerd[1532]: time="2025-07-11T00:12:16.627547140Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 11 00:12:16.627875 containerd[1532]: time="2025-07-11T00:12:16.627572999Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:12:16.627875 containerd[1532]: time="2025-07-11T00:12:16.627583794Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 11 00:12:16.627875 containerd[1532]: time="2025-07-11T00:12:16.627590327Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 11 00:12:16.627875 containerd[1532]: time="2025-07-11T00:12:16.627597582Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 11 00:12:16.627875 containerd[1532]: time="2025-07-11T00:12:16.627602647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:12:16.627875 containerd[1532]: time="2025-07-11T00:12:16.627609125Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 11 00:12:16.627875 containerd[1532]: time="2025-07-11T00:12:16.627616867Z" level=info msg="NRI interface is disabled by configuration." Jul 11 00:12:16.627875 containerd[1532]: time="2025-07-11T00:12:16.627622348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 11 00:12:16.627984 containerd[1532]: time="2025-07-11T00:12:16.627770570Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:12:16.627984 containerd[1532]: time="2025-07-11T00:12:16.627807478Z" level=info msg="Connect containerd service" Jul 11 00:12:16.627984 containerd[1532]: time="2025-07-11T00:12:16.627829389Z" level=info msg="using legacy CRI server" Jul 11 00:12:16.627984 containerd[1532]: time="2025-07-11T00:12:16.627834234Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 00:12:16.627984 containerd[1532]: time="2025-07-11T00:12:16.627892164Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 11 00:12:16.628413 containerd[1532]: time="2025-07-11T00:12:16.628221303Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:12:16.628413 
containerd[1532]: time="2025-07-11T00:12:16.628361886Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:12:16.628413 containerd[1532]: time="2025-07-11T00:12:16.628386148Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:12:16.628456 containerd[1532]: time="2025-07-11T00:12:16.628429163Z" level=info msg="Start subscribing containerd event" Jul 11 00:12:16.628456 containerd[1532]: time="2025-07-11T00:12:16.628450168Z" level=info msg="Start recovering state" Jul 11 00:12:16.629409 containerd[1532]: time="2025-07-11T00:12:16.628486202Z" level=info msg="Start event monitor" Jul 11 00:12:16.629409 containerd[1532]: time="2025-07-11T00:12:16.628498871Z" level=info msg="Start snapshots syncer" Jul 11 00:12:16.629409 containerd[1532]: time="2025-07-11T00:12:16.628504684Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:12:16.629409 containerd[1532]: time="2025-07-11T00:12:16.628509681Z" level=info msg="Start streaming server" Jul 11 00:12:16.629409 containerd[1532]: time="2025-07-11T00:12:16.628542314Z" level=info msg="containerd successfully booted in 0.028888s" Jul 11 00:12:16.628596 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:12:16.760437 tar[1521]: linux-amd64/README.md Jul 11 00:12:16.768318 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 00:12:17.541222 systemd-networkd[1443]: ens192: Gained IPv6LL Jul 11 00:12:17.542426 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 00:12:17.543317 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:12:17.548216 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jul 11 00:12:17.551143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:12:17.553058 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:12:17.576296 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:12:17.577729 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:12:17.577835 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Jul 11 00:12:17.578368 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:12:18.523895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:12:18.524384 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 00:12:18.524774 systemd[1]: Startup finished in 998ms (kernel) + 6.206s (initrd) + 4.781s (userspace) = 11.986s. Jul 11 00:12:18.530683 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:12:18.552479 login[1608]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 11 00:12:18.552649 login[1604]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 11 00:12:18.560393 systemd-logind[1514]: New session 2 of user core. Jul 11 00:12:18.561048 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 00:12:18.566398 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 00:12:18.569113 systemd-logind[1514]: New session 1 of user core. Jul 11 00:12:18.573365 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
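The cni load failure containerd reports a few entries above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected at this stage: no network add-on has installed a CNI configuration yet. Purely as a minimal sketch, with an assumed file name, bridge name, and subnet that do not come from this host (a real cluster normally receives its config from a network add-on such as flannel or Calico), a conflist that would satisfy the CRI plugin could be created like this:

# Hypothetical example only; not run on this host.
sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/10-bridge.conflist >/dev/null <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF

With a file like that present, the "Start cni network conf syncer for default" loop that containerd starts in the same boot sequence would normally pick it up on its next sync and the error would clear.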
Jul 11 00:12:18.579168 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 00:12:18.580914 (systemd)[1692]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:12:18.638945 systemd[1692]: Queued start job for default target default.target. Jul 11 00:12:18.643802 systemd[1692]: Created slice app.slice - User Application Slice. Jul 11 00:12:18.643820 systemd[1692]: Reached target paths.target - Paths. Jul 11 00:12:18.643838 systemd[1692]: Reached target timers.target - Timers. Jul 11 00:12:18.647048 systemd[1692]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 00:12:18.652736 systemd[1692]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 00:12:18.652773 systemd[1692]: Reached target sockets.target - Sockets. Jul 11 00:12:18.652784 systemd[1692]: Reached target basic.target - Basic System. Jul 11 00:12:18.652864 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 00:12:18.653059 systemd[1692]: Reached target default.target - Main User Target. Jul 11 00:12:18.653081 systemd[1692]: Startup finished in 68ms. Jul 11 00:12:18.658090 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 00:12:18.659103 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 00:12:19.044027 kubelet[1685]: E0711 00:12:19.043934 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:12:19.045234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:12:19.045318 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:12:29.295680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:12:29.304230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:12:29.649873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:12:29.652305 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:12:29.715211 kubelet[1734]: E0711 00:12:29.715173 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:12:29.717855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:12:29.717956 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:12:39.778929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 11 00:12:39.791235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:12:40.107955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 11 00:12:40.110384 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:12:40.152612 kubelet[1748]: E0711 00:12:40.152575 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:12:40.153741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:12:40.153826 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:12:46.453650 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 00:12:46.454837 systemd[1]: Started sshd@0-139.178.70.105:22-139.178.68.195:36842.service - OpenSSH per-connection server daemon (139.178.68.195:36842). Jul 11 00:12:46.494965 sshd[1755]: Accepted publickey for core from 139.178.68.195 port 36842 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:12:46.495750 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:46.499360 systemd-logind[1514]: New session 3 of user core. Jul 11 00:12:46.506224 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 00:12:46.566143 systemd[1]: Started sshd@1-139.178.70.105:22-139.178.68.195:36848.service - OpenSSH per-connection server daemon (139.178.68.195:36848). Jul 11 00:12:46.591141 sshd[1760]: Accepted publickey for core from 139.178.68.195 port 36848 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:12:46.592454 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:46.597401 systemd-logind[1514]: New session 4 of user core. Jul 11 00:12:46.603163 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:12:46.651523 sshd[1760]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:46.662507 systemd[1]: sshd@1-139.178.70.105:22-139.178.68.195:36848.service: Deactivated successfully. Jul 11 00:12:46.663297 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:12:46.664098 systemd-logind[1514]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:12:46.664807 systemd[1]: Started sshd@2-139.178.70.105:22-139.178.68.195:36864.service - OpenSSH per-connection server daemon (139.178.68.195:36864). Jul 11 00:12:46.667166 systemd-logind[1514]: Removed session 4. Jul 11 00:12:46.694723 sshd[1767]: Accepted publickey for core from 139.178.68.195 port 36864 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:12:46.695536 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:46.698285 systemd-logind[1514]: New session 5 of user core. Jul 11 00:12:46.705088 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 00:12:46.750923 sshd[1767]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:46.760524 systemd[1]: sshd@2-139.178.70.105:22-139.178.68.195:36864.service: Deactivated successfully. Jul 11 00:12:46.761386 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:12:46.762143 systemd-logind[1514]: Session 5 logged out. Waiting for processes to exit. 
Jul 11 00:12:46.763132 systemd[1]: Started sshd@3-139.178.70.105:22-139.178.68.195:36870.service - OpenSSH per-connection server daemon (139.178.68.195:36870). Jul 11 00:12:46.763786 systemd-logind[1514]: Removed session 5. Jul 11 00:12:46.791831 sshd[1774]: Accepted publickey for core from 139.178.68.195 port 36870 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:12:46.792523 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:46.795200 systemd-logind[1514]: New session 6 of user core. Jul 11 00:12:46.801082 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 00:12:46.849211 sshd[1774]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:46.853360 systemd[1]: sshd@3-139.178.70.105:22-139.178.68.195:36870.service: Deactivated successfully. Jul 11 00:12:46.854121 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:12:46.854876 systemd-logind[1514]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:12:46.855570 systemd[1]: Started sshd@4-139.178.70.105:22-139.178.68.195:36876.service - OpenSSH per-connection server daemon (139.178.68.195:36876). Jul 11 00:12:46.857173 systemd-logind[1514]: Removed session 6. Jul 11 00:12:46.894095 sshd[1781]: Accepted publickey for core from 139.178.68.195 port 36876 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:12:46.894824 sshd[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:46.897893 systemd-logind[1514]: New session 7 of user core. Jul 11 00:12:46.900084 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 00:12:46.954075 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:12:46.954229 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:12:46.967658 sudo[1784]: pam_unix(sudo:session): session closed for user root Jul 11 00:12:46.969428 sshd[1781]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:46.976891 systemd[1]: sshd@4-139.178.70.105:22-139.178.68.195:36876.service: Deactivated successfully. Jul 11 00:12:46.977804 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:12:46.978536 systemd-logind[1514]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:12:46.984198 systemd[1]: Started sshd@5-139.178.70.105:22-139.178.68.195:36882.service - OpenSSH per-connection server daemon (139.178.68.195:36882). Jul 11 00:12:46.984881 systemd-logind[1514]: Removed session 7. Jul 11 00:12:47.011443 sshd[1789]: Accepted publickey for core from 139.178.68.195 port 36882 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:12:47.012197 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:47.014780 systemd-logind[1514]: New session 8 of user core. Jul 11 00:12:47.022105 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 11 00:12:47.071979 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:12:47.072422 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:12:47.074781 sudo[1793]: pam_unix(sudo:session): session closed for user root Jul 11 00:12:47.078454 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 11 00:12:47.078824 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:12:47.088186 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 11 00:12:47.089861 auditctl[1796]: No rules Jul 11 00:12:47.090109 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:12:47.090257 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 11 00:12:47.091956 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:12:47.110475 augenrules[1814]: No rules Jul 11 00:12:47.111174 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:12:47.111925 sudo[1792]: pam_unix(sudo:session): session closed for user root Jul 11 00:12:47.112798 sshd[1789]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:47.117589 systemd[1]: sshd@5-139.178.70.105:22-139.178.68.195:36882.service: Deactivated successfully. Jul 11 00:12:47.118586 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:12:47.119064 systemd-logind[1514]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:12:47.124175 systemd[1]: Started sshd@6-139.178.70.105:22-139.178.68.195:36888.service - OpenSSH per-connection server daemon (139.178.68.195:36888). Jul 11 00:12:47.125192 systemd-logind[1514]: Removed session 8. Jul 11 00:12:47.151046 sshd[1822]: Accepted publickey for core from 139.178.68.195 port 36888 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:12:47.151876 sshd[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:47.154801 systemd-logind[1514]: New session 9 of user core. Jul 11 00:12:47.166218 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:12:47.214667 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:12:47.214880 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:12:47.485211 (dockerd)[1841]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:12:47.485286 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 00:12:47.731035 dockerd[1841]: time="2025-07-11T00:12:47.728819399Z" level=info msg="Starting up" Jul 11 00:12:47.789294 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3875876104-merged.mount: Deactivated successfully. Jul 11 00:12:47.804640 dockerd[1841]: time="2025-07-11T00:12:47.804612854Z" level=info msg="Loading containers: start." Jul 11 00:12:47.862022 kernel: Initializing XFRM netlink socket Jul 11 00:12:47.904891 systemd-networkd[1443]: docker0: Link UP Jul 11 00:12:47.911718 dockerd[1841]: time="2025-07-11T00:12:47.911703190Z" level=info msg="Loading containers: done." 
Jul 11 00:12:47.922405 dockerd[1841]: time="2025-07-11T00:12:47.922376926Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:12:47.922510 dockerd[1841]: time="2025-07-11T00:12:47.922445274Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 11 00:12:47.922510 dockerd[1841]: time="2025-07-11T00:12:47.922504604Z" level=info msg="Daemon has completed initialization" Jul 11 00:12:47.934710 dockerd[1841]: time="2025-07-11T00:12:47.934686132Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:12:47.934773 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 00:12:48.786380 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1239545909-merged.mount: Deactivated successfully. Jul 11 00:12:48.863116 containerd[1532]: time="2025-07-11T00:12:48.863044353Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 11 00:12:49.427722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1459941582.mount: Deactivated successfully. Jul 11 00:12:50.278759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 11 00:12:50.285176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:12:50.367137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:12:50.370020 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:12:50.410805 kubelet[2045]: E0711 00:12:50.410778 2045 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:12:50.412620 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:12:50.412722 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
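The kubelet crash loop recorded above ("failed to read kubelet config file \"/var/lib/kubelet/config.yaml\" ... no such file or directory", restart counters 1 through 3 so far) is the normal state of a node that has not yet completed kubeadm init or kubeadm join, which is what writes that file. Purely as an illustration of the kind of KubeletConfiguration document that eventually lands there, with field values that are assumptions rather than data read from this machine, a sketch could look like this:

# Hypothetical sketch only; on this host the real file is generated by
# kubeadm, not written by hand.
sudo mkdir -p /var/lib/kubelet
sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd cgroup driver, consistent with SystemdCgroup:true in the
# containerd CRI runtime options dumped earlier in this log
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10   # illustrative cluster DNS address, not taken from this host
EOF

This is consistent with the kubelet[2345] instance later in the log, which starts cleanly and reads its client CA bundle and static pod path from exactly those locations.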
Jul 11 00:12:50.694863 containerd[1532]: time="2025-07-11T00:12:50.694773318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:50.703893 containerd[1532]: time="2025-07-11T00:12:50.703866519Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 11 00:12:50.719778 containerd[1532]: time="2025-07-11T00:12:50.719745141Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:50.729437 containerd[1532]: time="2025-07-11T00:12:50.729396333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:50.730155 containerd[1532]: time="2025-07-11T00:12:50.730134431Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.867067063s" Jul 11 00:12:50.730321 containerd[1532]: time="2025-07-11T00:12:50.730207045Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 11 00:12:50.730816 containerd[1532]: time="2025-07-11T00:12:50.730612961Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 11 00:12:53.522972 containerd[1532]: time="2025-07-11T00:12:53.522800285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:53.532688 containerd[1532]: time="2025-07-11T00:12:53.532657475Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 11 00:12:53.541107 containerd[1532]: time="2025-07-11T00:12:53.541079426Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:53.547153 containerd[1532]: time="2025-07-11T00:12:53.547116989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:53.547624 containerd[1532]: time="2025-07-11T00:12:53.547442434Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 2.816811005s" Jul 11 00:12:53.547624 containerd[1532]: time="2025-07-11T00:12:53.547464053Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 11 00:12:53.548182 
containerd[1532]: time="2025-07-11T00:12:53.548025455Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 11 00:12:55.070022 containerd[1532]: time="2025-07-11T00:12:55.069982428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:55.079675 containerd[1532]: time="2025-07-11T00:12:55.079629981Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 11 00:12:55.084520 containerd[1532]: time="2025-07-11T00:12:55.084468379Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:55.096423 containerd[1532]: time="2025-07-11T00:12:55.096400980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:55.097184 containerd[1532]: time="2025-07-11T00:12:55.096935996Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.548894741s" Jul 11 00:12:55.097184 containerd[1532]: time="2025-07-11T00:12:55.096953698Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 11 00:12:55.097330 containerd[1532]: time="2025-07-11T00:12:55.097319938Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 11 00:12:56.249790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1273369018.mount: Deactivated successfully. 
Jul 11 00:12:56.660598 containerd[1532]: time="2025-07-11T00:12:56.660557050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:56.667566 containerd[1532]: time="2025-07-11T00:12:56.667520847Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 11 00:12:56.677582 containerd[1532]: time="2025-07-11T00:12:56.677542862Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:56.684929 containerd[1532]: time="2025-07-11T00:12:56.684891737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:56.685540 containerd[1532]: time="2025-07-11T00:12:56.685244552Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.587878447s" Jul 11 00:12:56.685540 containerd[1532]: time="2025-07-11T00:12:56.685264006Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 11 00:12:56.685540 containerd[1532]: time="2025-07-11T00:12:56.685526938Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 00:12:57.399222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2757870641.mount: Deactivated successfully. 
Jul 11 00:12:58.555041 containerd[1532]: time="2025-07-11T00:12:58.554867833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:58.563569 containerd[1532]: time="2025-07-11T00:12:58.563529698Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 11 00:12:58.573465 containerd[1532]: time="2025-07-11T00:12:58.573422622Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:58.586910 containerd[1532]: time="2025-07-11T00:12:58.586841385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:58.588167 containerd[1532]: time="2025-07-11T00:12:58.587955256Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.90240836s" Jul 11 00:12:58.588167 containerd[1532]: time="2025-07-11T00:12:58.587991060Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 11 00:12:58.588744 containerd[1532]: time="2025-07-11T00:12:58.588722105Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:12:59.441347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2263157650.mount: Deactivated successfully. 
Jul 11 00:12:59.442407 containerd[1532]: time="2025-07-11T00:12:59.441987666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:59.442716 containerd[1532]: time="2025-07-11T00:12:59.442694196Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 11 00:12:59.443334 containerd[1532]: time="2025-07-11T00:12:59.443155991Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:59.444558 containerd[1532]: time="2025-07-11T00:12:59.444525015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:59.445176 containerd[1532]: time="2025-07-11T00:12:59.445102072Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 856.356374ms" Jul 11 00:12:59.445176 containerd[1532]: time="2025-07-11T00:12:59.445120272Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 11 00:12:59.445636 containerd[1532]: time="2025-07-11T00:12:59.445589242Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 11 00:13:00.309074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3634564062.mount: Deactivated successfully. Jul 11 00:13:00.529088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 11 00:13:00.538256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:13:01.552585 update_engine[1517]: I20250711 00:13:01.552137 1517 update_attempter.cc:509] Updating boot flags... Jul 11 00:13:01.919351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:13:01.929548 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:13:01.955789 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2159) Jul 11 00:13:02.145172 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2159) Jul 11 00:13:02.185710 kubelet[2153]: E0711 00:13:02.185621 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:13:02.187583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:13:02.187675 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 11 00:13:04.142106 containerd[1532]: time="2025-07-11T00:13:04.142037521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:04.153039 containerd[1532]: time="2025-07-11T00:13:04.152987571Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 11 00:13:04.180576 containerd[1532]: time="2025-07-11T00:13:04.180544286Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:04.183268 containerd[1532]: time="2025-07-11T00:13:04.183244715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:04.184478 containerd[1532]: time="2025-07-11T00:13:04.184449112Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.738831824s" Jul 11 00:13:04.184522 containerd[1532]: time="2025-07-11T00:13:04.184488477Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 11 00:13:06.226353 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:13:06.234171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:13:06.251086 systemd[1]: Reloading requested from client PID 2240 ('systemctl') (unit session-9.scope)... Jul 11 00:13:06.251179 systemd[1]: Reloading... Jul 11 00:13:06.315024 zram_generator::config[2277]: No configuration found. Jul 11 00:13:06.366551 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 11 00:13:06.381461 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:13:06.424786 systemd[1]: Reloading finished in 173 ms. Jul 11 00:13:06.448442 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 00:13:06.448497 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 00:13:06.448714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:13:06.452149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:13:07.057199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:13:07.061186 (kubelet)[2345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:13:07.169391 kubelet[2345]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 11 00:13:07.169391 kubelet[2345]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:13:07.169391 kubelet[2345]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:13:07.169640 kubelet[2345]: I0711 00:13:07.169432 2345 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:13:07.532747 kubelet[2345]: I0711 00:13:07.532726 2345 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 00:13:07.532747 kubelet[2345]: I0711 00:13:07.532744 2345 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:13:07.532931 kubelet[2345]: I0711 00:13:07.532919 2345 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 00:13:07.563595 kubelet[2345]: I0711 00:13:07.563574 2345 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:13:07.565298 kubelet[2345]: E0711 00:13:07.565249 2345 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:07.575811 kubelet[2345]: E0711 00:13:07.575792 2345 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:13:07.575811 kubelet[2345]: I0711 00:13:07.575811 2345 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:13:07.582451 kubelet[2345]: I0711 00:13:07.582391 2345 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:13:07.586194 kubelet[2345]: I0711 00:13:07.586168 2345 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:13:07.586306 kubelet[2345]: I0711 00:13:07.586192 2345 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:13:07.587892 kubelet[2345]: I0711 00:13:07.587877 2345 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:13:07.587892 kubelet[2345]: I0711 00:13:07.587890 2345 container_manager_linux.go:304] "Creating device plugin manager" Jul 11 00:13:07.588742 kubelet[2345]: I0711 00:13:07.588728 2345 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:13:07.592336 kubelet[2345]: I0711 00:13:07.592325 2345 kubelet.go:446] "Attempting to sync node with API server" Jul 11 00:13:07.592366 kubelet[2345]: I0711 00:13:07.592343 2345 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:13:07.592366 kubelet[2345]: I0711 00:13:07.592353 2345 kubelet.go:352] "Adding apiserver pod source" Jul 11 00:13:07.592366 kubelet[2345]: I0711 00:13:07.592360 2345 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:13:07.596568 kubelet[2345]: W0711 00:13:07.596258 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 11 00:13:07.596568 kubelet[2345]: E0711 00:13:07.596300 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:07.596568 kubelet[2345]: W0711 00:13:07.596511 2345 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 11 00:13:07.596568 kubelet[2345]: E0711 00:13:07.596539 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:07.598020 kubelet[2345]: I0711 00:13:07.597951 2345 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:13:07.600916 kubelet[2345]: I0711 00:13:07.600368 2345 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:13:07.600916 kubelet[2345]: W0711 00:13:07.600419 2345 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 00:13:07.603756 kubelet[2345]: I0711 00:13:07.603122 2345 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:13:07.603756 kubelet[2345]: I0711 00:13:07.603155 2345 server.go:1287] "Started kubelet" Jul 11 00:13:07.605125 kubelet[2345]: I0711 00:13:07.604877 2345 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:13:07.605125 kubelet[2345]: I0711 00:13:07.605059 2345 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:13:07.609658 kubelet[2345]: I0711 00:13:07.609207 2345 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:13:07.609658 kubelet[2345]: I0711 00:13:07.609567 2345 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:13:07.610638 kubelet[2345]: I0711 00:13:07.610625 2345 server.go:479] "Adding debug handlers to kubelet server" Jul 11 00:13:07.613543 kubelet[2345]: I0711 00:13:07.613532 2345 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:13:07.614667 kubelet[2345]: E0711 00:13:07.611224 2345 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.105:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a0e8b16275a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:13:07.603138394 +0000 UTC m=+0.539970041,LastTimestamp:2025-07-11 00:13:07.603138394 +0000 UTC m=+0.539970041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:13:07.616036 kubelet[2345]: I0711 00:13:07.615991 2345 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:13:07.616980 kubelet[2345]: E0711 00:13:07.616112 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not 
found" Jul 11 00:13:07.616980 kubelet[2345]: E0711 00:13:07.616631 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="200ms" Jul 11 00:13:07.616980 kubelet[2345]: I0711 00:13:07.616863 2345 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:13:07.618358 kubelet[2345]: I0711 00:13:07.618345 2345 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:13:07.618396 kubelet[2345]: I0711 00:13:07.618380 2345 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:13:07.618784 kubelet[2345]: W0711 00:13:07.618761 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 11 00:13:07.618825 kubelet[2345]: E0711 00:13:07.618789 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:07.619218 kubelet[2345]: I0711 00:13:07.619202 2345 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:13:07.619218 kubelet[2345]: I0711 00:13:07.619213 2345 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:13:07.620161 kubelet[2345]: E0711 00:13:07.620081 2345 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:13:07.625398 kubelet[2345]: I0711 00:13:07.625371 2345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:13:07.626355 kubelet[2345]: I0711 00:13:07.626130 2345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 00:13:07.626355 kubelet[2345]: I0711 00:13:07.626150 2345 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 00:13:07.626355 kubelet[2345]: I0711 00:13:07.626167 2345 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 11 00:13:07.626355 kubelet[2345]: I0711 00:13:07.626174 2345 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 00:13:07.626355 kubelet[2345]: E0711 00:13:07.626215 2345 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:13:07.630277 kubelet[2345]: W0711 00:13:07.630256 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 11 00:13:07.630344 kubelet[2345]: E0711 00:13:07.630330 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:07.647367 kubelet[2345]: I0711 00:13:07.647131 2345 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:13:07.647367 kubelet[2345]: I0711 00:13:07.647140 2345 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:13:07.647367 kubelet[2345]: I0711 00:13:07.647148 2345 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:13:07.650687 kubelet[2345]: I0711 00:13:07.650547 2345 policy_none.go:49] "None policy: Start" Jul 11 00:13:07.650687 kubelet[2345]: I0711 00:13:07.650557 2345 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:13:07.650687 kubelet[2345]: I0711 00:13:07.650563 2345 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:13:07.653977 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 11 00:13:07.662325 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 00:13:07.665127 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 11 00:13:07.673502 kubelet[2345]: I0711 00:13:07.673489 2345 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:13:07.673945 kubelet[2345]: I0711 00:13:07.673605 2345 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:13:07.673945 kubelet[2345]: I0711 00:13:07.673731 2345 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:13:07.675248 kubelet[2345]: E0711 00:13:07.675173 2345 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 11 00:13:07.675248 kubelet[2345]: E0711 00:13:07.675211 2345 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:13:07.675441 kubelet[2345]: I0711 00:13:07.675426 2345 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:13:07.732838 systemd[1]: Created slice kubepods-burstable-pod97e78437c8643a71d346b5c395c84680.slice - libcontainer container kubepods-burstable-pod97e78437c8643a71d346b5c395c84680.slice. 
Jul 11 00:13:07.745607 kubelet[2345]: E0711 00:13:07.745590 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:13:07.746109 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 11 00:13:07.751876 kubelet[2345]: E0711 00:13:07.751864 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:13:07.753620 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 11 00:13:07.754885 kubelet[2345]: E0711 00:13:07.754775 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:13:07.774626 kubelet[2345]: I0711 00:13:07.774614 2345 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:13:07.774951 kubelet[2345]: E0711 00:13:07.774931 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 11 00:13:07.817291 kubelet[2345]: E0711 00:13:07.817216 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="400ms" Jul 11 00:13:07.819597 kubelet[2345]: I0711 00:13:07.819554 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97e78437c8643a71d346b5c395c84680-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"97e78437c8643a71d346b5c395c84680\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:07.819658 kubelet[2345]: I0711 00:13:07.819616 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:07.819658 kubelet[2345]: I0711 00:13:07.819634 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:07.819717 kubelet[2345]: I0711 00:13:07.819659 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:07.819717 kubelet[2345]: I0711 00:13:07.819693 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:07.819717 kubelet[2345]: I0711 00:13:07.819709 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97e78437c8643a71d346b5c395c84680-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"97e78437c8643a71d346b5c395c84680\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:07.819797 kubelet[2345]: I0711 00:13:07.819720 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:07.819797 kubelet[2345]: I0711 00:13:07.819731 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:07.819797 kubelet[2345]: I0711 00:13:07.819741 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97e78437c8643a71d346b5c395c84680-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"97e78437c8643a71d346b5c395c84680\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:07.976035 kubelet[2345]: I0711 00:13:07.975992 2345 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:13:07.976294 kubelet[2345]: E0711 00:13:07.976273 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 11 00:13:08.046470 containerd[1532]: time="2025-07-11T00:13:08.046438204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:97e78437c8643a71d346b5c395c84680,Namespace:kube-system,Attempt:0,}" Jul 11 00:13:08.058384 containerd[1532]: time="2025-07-11T00:13:08.058353555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 11 00:13:08.058548 containerd[1532]: time="2025-07-11T00:13:08.058355826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 11 00:13:08.218343 kubelet[2345]: E0711 00:13:08.218264 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="800ms" Jul 11 00:13:08.377925 kubelet[2345]: I0711 00:13:08.377659 2345 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:13:08.377925 kubelet[2345]: E0711 00:13:08.377880 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 11 00:13:08.484078 kubelet[2345]: W0711 00:13:08.483943 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 11 00:13:08.484078 kubelet[2345]: E0711 00:13:08.483994 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:08.675431 kubelet[2345]: W0711 00:13:08.675345 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 11 00:13:08.675431 kubelet[2345]: E0711 00:13:08.675415 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:08.683355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount770409302.mount: Deactivated successfully. Jul 11 00:13:08.685789 containerd[1532]: time="2025-07-11T00:13:08.685752659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:13:08.686373 containerd[1532]: time="2025-07-11T00:13:08.686348185Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 11 00:13:08.687321 containerd[1532]: time="2025-07-11T00:13:08.687301009Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:13:08.687744 containerd[1532]: time="2025-07-11T00:13:08.687723757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:13:08.688183 containerd[1532]: time="2025-07-11T00:13:08.688111834Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:13:08.688714 containerd[1532]: time="2025-07-11T00:13:08.688702196Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:13:08.689294 containerd[1532]: time="2025-07-11T00:13:08.688981367Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:13:08.690877 containerd[1532]: time="2025-07-11T00:13:08.690572081Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 632.169711ms" Jul 11 00:13:08.691377 containerd[1532]: time="2025-07-11T00:13:08.691360129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:13:08.692232 containerd[1532]: time="2025-07-11T00:13:08.692178106Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 633.686012ms" Jul 11 00:13:08.693937 containerd[1532]: time="2025-07-11T00:13:08.693805292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 647.304622ms" Jul 11 00:13:08.891533 containerd[1532]: time="2025-07-11T00:13:08.873279471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:08.891533 containerd[1532]: time="2025-07-11T00:13:08.873306649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:08.891533 containerd[1532]: time="2025-07-11T00:13:08.873314114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:08.891533 containerd[1532]: time="2025-07-11T00:13:08.873349770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:08.891533 containerd[1532]: time="2025-07-11T00:13:08.870831615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:08.891533 containerd[1532]: time="2025-07-11T00:13:08.870869735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:08.891533 containerd[1532]: time="2025-07-11T00:13:08.870879440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:08.891533 containerd[1532]: time="2025-07-11T00:13:08.870916924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:08.891972 containerd[1532]: time="2025-07-11T00:13:08.868280048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:08.891972 containerd[1532]: time="2025-07-11T00:13:08.868337158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:08.891972 containerd[1532]: time="2025-07-11T00:13:08.868362514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:08.891972 containerd[1532]: time="2025-07-11T00:13:08.868853840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:08.917283 systemd[1]: Started cri-containerd-2bb93312fab95aae172dc4cacaf0f1ea1aa86071faffa4490fcfdb98dd74bc3f.scope - libcontainer container 2bb93312fab95aae172dc4cacaf0f1ea1aa86071faffa4490fcfdb98dd74bc3f. Jul 11 00:13:08.918516 systemd[1]: Started cri-containerd-8fc6d3763374468efc98a814df7cbdfc1e80b8dedcea3bc66518eae8f2ed269a.scope - libcontainer container 8fc6d3763374468efc98a814df7cbdfc1e80b8dedcea3bc66518eae8f2ed269a. Jul 11 00:13:08.919734 systemd[1]: Started cri-containerd-f53005d2b2cd85d77b126294c55c64fea76bff2025ba9fea6b2a4985906a387e.scope - libcontainer container f53005d2b2cd85d77b126294c55c64fea76bff2025ba9fea6b2a4985906a387e. Jul 11 00:13:08.964127 containerd[1532]: time="2025-07-11T00:13:08.963937690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:97e78437c8643a71d346b5c395c84680,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fc6d3763374468efc98a814df7cbdfc1e80b8dedcea3bc66518eae8f2ed269a\"" Jul 11 00:13:08.976986 containerd[1532]: time="2025-07-11T00:13:08.976963037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bb93312fab95aae172dc4cacaf0f1ea1aa86071faffa4490fcfdb98dd74bc3f\"" Jul 11 00:13:08.988075 containerd[1532]: time="2025-07-11T00:13:08.988057428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f53005d2b2cd85d77b126294c55c64fea76bff2025ba9fea6b2a4985906a387e\"" Jul 11 00:13:08.989478 containerd[1532]: time="2025-07-11T00:13:08.989458872Z" level=info msg="CreateContainer within sandbox \"2bb93312fab95aae172dc4cacaf0f1ea1aa86071faffa4490fcfdb98dd74bc3f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 00:13:08.990025 containerd[1532]: time="2025-07-11T00:13:08.989963465Z" level=info msg="CreateContainer within sandbox \"8fc6d3763374468efc98a814df7cbdfc1e80b8dedcea3bc66518eae8f2ed269a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 00:13:08.990178 containerd[1532]: time="2025-07-11T00:13:08.990166360Z" level=info msg="CreateContainer within sandbox \"f53005d2b2cd85d77b126294c55c64fea76bff2025ba9fea6b2a4985906a387e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 00:13:09.018701 kubelet[2345]: E0711 00:13:09.018684 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="1.6s" Jul 11 00:13:09.081423 kubelet[2345]: W0711 00:13:09.081388 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused 
Jul 11 00:13:09.081475 kubelet[2345]: E0711 00:13:09.081430 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:09.162044 kubelet[2345]: W0711 00:13:09.158577 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 11 00:13:09.162044 kubelet[2345]: E0711 00:13:09.158618 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:09.177073 containerd[1532]: time="2025-07-11T00:13:09.176991379Z" level=info msg="CreateContainer within sandbox \"f53005d2b2cd85d77b126294c55c64fea76bff2025ba9fea6b2a4985906a387e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"781bd54680601dd6ed09d369c57fb81dbf55a3f4ab31522574fec7a2f334ee1e\"" Jul 11 00:13:09.177402 containerd[1532]: time="2025-07-11T00:13:09.177385860Z" level=info msg="StartContainer for \"781bd54680601dd6ed09d369c57fb81dbf55a3f4ab31522574fec7a2f334ee1e\"" Jul 11 00:13:09.179494 kubelet[2345]: I0711 00:13:09.179293 2345 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:13:09.179494 kubelet[2345]: E0711 00:13:09.179479 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 11 00:13:09.182453 containerd[1532]: time="2025-07-11T00:13:09.182427320Z" level=info msg="CreateContainer within sandbox \"8fc6d3763374468efc98a814df7cbdfc1e80b8dedcea3bc66518eae8f2ed269a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eacc7b2c00939013e36900271f72cc228d7475c5cd39460314a6cbdfb774663e\"" Jul 11 00:13:09.183250 containerd[1532]: time="2025-07-11T00:13:09.182705200Z" level=info msg="StartContainer for \"eacc7b2c00939013e36900271f72cc228d7475c5cd39460314a6cbdfb774663e\"" Jul 11 00:13:09.183662 containerd[1532]: time="2025-07-11T00:13:09.183642242Z" level=info msg="CreateContainer within sandbox \"2bb93312fab95aae172dc4cacaf0f1ea1aa86071faffa4490fcfdb98dd74bc3f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"196c1471146e14598658882a662dd39c284945a668a10de02e02baaff3eaef57\"" Jul 11 00:13:09.183868 containerd[1532]: time="2025-07-11T00:13:09.183857781Z" level=info msg="StartContainer for \"196c1471146e14598658882a662dd39c284945a668a10de02e02baaff3eaef57\"" Jul 11 00:13:09.200133 systemd[1]: Started cri-containerd-781bd54680601dd6ed09d369c57fb81dbf55a3f4ab31522574fec7a2f334ee1e.scope - libcontainer container 781bd54680601dd6ed09d369c57fb81dbf55a3f4ab31522574fec7a2f334ee1e. Jul 11 00:13:09.208135 systemd[1]: Started cri-containerd-eacc7b2c00939013e36900271f72cc228d7475c5cd39460314a6cbdfb774663e.scope - libcontainer container eacc7b2c00939013e36900271f72cc228d7475c5cd39460314a6cbdfb774663e. 
Jul 11 00:13:09.211225 systemd[1]: Started cri-containerd-196c1471146e14598658882a662dd39c284945a668a10de02e02baaff3eaef57.scope - libcontainer container 196c1471146e14598658882a662dd39c284945a668a10de02e02baaff3eaef57. Jul 11 00:13:09.245297 containerd[1532]: time="2025-07-11T00:13:09.245195909Z" level=info msg="StartContainer for \"781bd54680601dd6ed09d369c57fb81dbf55a3f4ab31522574fec7a2f334ee1e\" returns successfully" Jul 11 00:13:09.259804 containerd[1532]: time="2025-07-11T00:13:09.259780186Z" level=info msg="StartContainer for \"196c1471146e14598658882a662dd39c284945a668a10de02e02baaff3eaef57\" returns successfully" Jul 11 00:13:09.259910 containerd[1532]: time="2025-07-11T00:13:09.259897975Z" level=info msg="StartContainer for \"eacc7b2c00939013e36900271f72cc228d7475c5cd39460314a6cbdfb774663e\" returns successfully" Jul 11 00:13:09.648638 kubelet[2345]: E0711 00:13:09.648486 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:13:09.649078 kubelet[2345]: E0711 00:13:09.648943 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:13:09.650510 kubelet[2345]: E0711 00:13:09.650451 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:13:09.678550 kubelet[2345]: E0711 00:13:09.678530 2345 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:10.620633 kubelet[2345]: E0711 00:13:10.620602 2345 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 00:13:10.657590 kubelet[2345]: E0711 00:13:10.657379 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:13:10.657590 kubelet[2345]: E0711 00:13:10.657385 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:13:10.780552 kubelet[2345]: I0711 00:13:10.780534 2345 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:13:10.789983 kubelet[2345]: I0711 00:13:10.789881 2345 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:13:10.789983 kubelet[2345]: E0711 00:13:10.789909 2345 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 11 00:13:10.796507 kubelet[2345]: E0711 00:13:10.796484 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:13:10.897430 kubelet[2345]: E0711 00:13:10.897178 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:13:10.998076 kubelet[2345]: E0711 00:13:10.998035 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" 
not found" Jul 11 00:13:11.098217 kubelet[2345]: E0711 00:13:11.098152 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:13:11.198714 kubelet[2345]: E0711 00:13:11.198603 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:13:11.299182 kubelet[2345]: E0711 00:13:11.299153 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:13:11.399788 kubelet[2345]: E0711 00:13:11.399756 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:13:11.517493 kubelet[2345]: I0711 00:13:11.517190 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:11.520501 kubelet[2345]: E0711 00:13:11.520478 2345 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:11.520501 kubelet[2345]: I0711 00:13:11.520497 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:11.521448 kubelet[2345]: E0711 00:13:11.521428 2345 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:11.521448 kubelet[2345]: I0711 00:13:11.521442 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:11.522403 kubelet[2345]: E0711 00:13:11.522388 2345 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:11.599176 kubelet[2345]: I0711 00:13:11.599023 2345 apiserver.go:52] "Watching apiserver" Jul 11 00:13:11.618829 kubelet[2345]: I0711 00:13:11.618796 2345 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:13:11.655027 kubelet[2345]: I0711 00:13:11.654878 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:11.655027 kubelet[2345]: I0711 00:13:11.655028 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:12.579947 systemd[1]: Reloading requested from client PID 2619 ('systemctl') (unit session-9.scope)... Jul 11 00:13:12.579960 systemd[1]: Reloading... Jul 11 00:13:12.635026 zram_generator::config[2659]: No configuration found. Jul 11 00:13:12.699735 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 11 00:13:12.715196 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:13:12.767284 systemd[1]: Reloading finished in 187 ms. Jul 11 00:13:12.794498 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:13:12.810619 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 11 00:13:12.810768 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:13:12.817204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:13:13.183717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:13:13.187752 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:13:13.275216 kubelet[2724]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:13:13.275216 kubelet[2724]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:13:13.275216 kubelet[2724]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:13:13.300897 kubelet[2724]: I0711 00:13:13.300846 2724 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:13:13.308122 kubelet[2724]: I0711 00:13:13.308107 2724 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 00:13:13.309566 kubelet[2724]: I0711 00:13:13.308246 2724 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:13:13.309566 kubelet[2724]: I0711 00:13:13.308586 2724 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 00:13:13.309802 kubelet[2724]: I0711 00:13:13.309791 2724 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 00:13:13.311087 kubelet[2724]: I0711 00:13:13.311076 2724 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:13:13.312923 kubelet[2724]: E0711 00:13:13.312906 2724 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:13:13.312923 kubelet[2724]: I0711 00:13:13.312924 2724 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:13:13.314780 kubelet[2724]: I0711 00:13:13.314768 2724 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:13:13.314911 kubelet[2724]: I0711 00:13:13.314892 2724 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:13:13.315020 kubelet[2724]: I0711 00:13:13.314913 2724 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:13:13.315087 kubelet[2724]: I0711 00:13:13.315027 2724 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:13:13.315087 kubelet[2724]: I0711 00:13:13.315033 2724 container_manager_linux.go:304] "Creating device plugin manager" Jul 11 00:13:13.315087 kubelet[2724]: I0711 00:13:13.315061 2724 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:13:13.315201 kubelet[2724]: I0711 00:13:13.315192 2724 kubelet.go:446] "Attempting to sync node with API server" Jul 11 00:13:13.315222 kubelet[2724]: I0711 00:13:13.315208 2724 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:13:13.315222 kubelet[2724]: I0711 00:13:13.315220 2724 kubelet.go:352] "Adding apiserver pod source" Jul 11 00:13:13.316106 kubelet[2724]: I0711 00:13:13.315226 2724 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:13:13.318119 kubelet[2724]: I0711 00:13:13.316734 2724 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:13:13.318119 kubelet[2724]: I0711 00:13:13.317092 2724 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:13:13.330020 kubelet[2724]: I0711 00:13:13.329542 2724 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:13:13.330020 kubelet[2724]: I0711 00:13:13.329577 2724 server.go:1287] "Started kubelet" Jul 11 00:13:13.367729 kubelet[2724]: I0711 00:13:13.367684 2724 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:13:13.367903 kubelet[2724]: I0711 
00:13:13.367886 2724 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:13:13.368560 kubelet[2724]: I0711 00:13:13.368547 2724 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:13:13.369384 kubelet[2724]: I0711 00:13:13.369365 2724 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:13:13.397959 kubelet[2724]: I0711 00:13:13.397897 2724 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:13:13.398520 kubelet[2724]: I0711 00:13:13.398510 2724 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 00:13:13.398571 kubelet[2724]: I0711 00:13:13.398565 2724 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 00:13:13.398727 kubelet[2724]: I0711 00:13:13.398613 2724 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 11 00:13:13.398727 kubelet[2724]: I0711 00:13:13.398621 2724 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 00:13:13.398727 kubelet[2724]: E0711 00:13:13.398644 2724 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:13:13.408001 kubelet[2724]: I0711 00:13:13.407985 2724 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:13:13.424601 kubelet[2724]: I0711 00:13:13.424078 2724 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:13:13.425238 kubelet[2724]: I0711 00:13:13.425224 2724 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:13:13.425614 kubelet[2724]: I0711 00:13:13.425603 2724 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:13:13.425961 kubelet[2724]: E0711 00:13:13.425948 2724 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:13:13.427321 kubelet[2724]: I0711 00:13:13.427304 2724 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:13:13.427984 kubelet[2724]: I0711 00:13:13.427966 2724 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:13:13.427984 kubelet[2724]: I0711 00:13:13.427975 2724 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:13:13.434025 kubelet[2724]: I0711 00:13:13.433752 2724 server.go:479] "Adding debug handlers to kubelet server" Jul 11 00:13:13.468690 kubelet[2724]: I0711 00:13:13.468592 2724 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:13:13.468690 kubelet[2724]: I0711 00:13:13.468603 2724 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:13:13.468690 kubelet[2724]: I0711 00:13:13.468612 2724 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:13:13.469283 kubelet[2724]: I0711 00:13:13.469270 2724 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:13:13.469318 kubelet[2724]: I0711 00:13:13.469281 2724 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:13:13.469318 kubelet[2724]: I0711 00:13:13.469294 2724 policy_none.go:49] "None policy: Start" Jul 11 00:13:13.469318 kubelet[2724]: I0711 00:13:13.469299 2724 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:13:13.469318 kubelet[2724]: I0711 00:13:13.469306 2724 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:13:13.469383 kubelet[2724]: I0711 00:13:13.469364 2724 state_mem.go:75] "Updated machine memory state" Jul 11 00:13:13.471620 kubelet[2724]: I0711 00:13:13.471606 2724 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:13:13.471870 kubelet[2724]: I0711 00:13:13.471692 2724 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:13:13.471870 kubelet[2724]: I0711 00:13:13.471701 2724 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:13:13.471870 kubelet[2724]: I0711 00:13:13.471800 2724 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:13:13.473010 kubelet[2724]: E0711 00:13:13.472987 2724 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 11 00:13:13.500285 kubelet[2724]: I0711 00:13:13.500051 2724 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:13.500285 kubelet[2724]: I0711 00:13:13.500147 2724 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:13.500515 kubelet[2724]: I0711 00:13:13.500505 2724 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:13.526570 kubelet[2724]: E0711 00:13:13.526479 2724 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:13.526651 kubelet[2724]: E0711 00:13:13.526633 2724 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:13.576121 kubelet[2724]: I0711 00:13:13.576093 2724 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:13:13.582627 kubelet[2724]: I0711 00:13:13.582121 2724 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 11 00:13:13.582627 kubelet[2724]: I0711 00:13:13.582182 2724 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:13:13.626075 kubelet[2724]: I0711 00:13:13.625912 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:13.626075 kubelet[2724]: I0711 00:13:13.625938 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:13.626075 kubelet[2724]: I0711 00:13:13.625951 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:13.626075 kubelet[2724]: I0711 00:13:13.625962 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97e78437c8643a71d346b5c395c84680-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"97e78437c8643a71d346b5c395c84680\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:13.626075 kubelet[2724]: I0711 00:13:13.625972 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97e78437c8643a71d346b5c395c84680-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"97e78437c8643a71d346b5c395c84680\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:13.626248 kubelet[2724]: I0711 00:13:13.625985 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:13.626248 kubelet[2724]: I0711 00:13:13.625993 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:13.626248 kubelet[2724]: I0711 00:13:13.626003 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:13.626248 kubelet[2724]: I0711 00:13:13.626029 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97e78437c8643a71d346b5c395c84680-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"97e78437c8643a71d346b5c395c84680\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:14.316563 kubelet[2724]: I0711 00:13:14.316540 2724 apiserver.go:52] "Watching apiserver" Jul 11 00:13:14.325358 kubelet[2724]: I0711 00:13:14.325333 2724 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:13:14.465122 kubelet[2724]: I0711 00:13:14.465077 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.465066269 podStartE2EDuration="1.465066269s" podCreationTimestamp="2025-07-11 00:13:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:13:14.465025798 +0000 UTC m=+1.262438675" watchObservedRunningTime="2025-07-11 00:13:14.465066269 +0000 UTC m=+1.262479155" Jul 11 00:13:14.469454 kubelet[2724]: I0711 00:13:14.469384 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.46937237 podStartE2EDuration="3.46937237s" podCreationTimestamp="2025-07-11 00:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:13:14.469372714 +0000 UTC m=+1.266785599" watchObservedRunningTime="2025-07-11 00:13:14.46937237 +0000 UTC m=+1.266785256" Jul 11 00:13:14.491236 kubelet[2724]: I0711 00:13:14.491101 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.4910889259999998 podStartE2EDuration="3.491088926s" podCreationTimestamp="2025-07-11 00:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:13:14.477617573 +0000 UTC m=+1.275030454" watchObservedRunningTime="2025-07-11 00:13:14.491088926 +0000 UTC m=+1.288501813" Jul 11 00:13:17.461154 kubelet[2724]: I0711 00:13:17.461131 2724 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 
11 00:13:17.461525 containerd[1532]: time="2025-07-11T00:13:17.461353738Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:13:17.461691 kubelet[2724]: I0711 00:13:17.461472 2724 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:13:18.342265 systemd[1]: Created slice kubepods-besteffort-pod845e97b1_8363_4bf6_ac74_1da745887f97.slice - libcontainer container kubepods-besteffort-pod845e97b1_8363_4bf6_ac74_1da745887f97.slice. Jul 11 00:13:18.360352 kubelet[2724]: I0711 00:13:18.360329 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/845e97b1-8363-4bf6-ac74-1da745887f97-kube-proxy\") pod \"kube-proxy-n57hs\" (UID: \"845e97b1-8363-4bf6-ac74-1da745887f97\") " pod="kube-system/kube-proxy-n57hs" Jul 11 00:13:18.360484 kubelet[2724]: I0711 00:13:18.360472 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hphpg\" (UniqueName: \"kubernetes.io/projected/845e97b1-8363-4bf6-ac74-1da745887f97-kube-api-access-hphpg\") pod \"kube-proxy-n57hs\" (UID: \"845e97b1-8363-4bf6-ac74-1da745887f97\") " pod="kube-system/kube-proxy-n57hs" Jul 11 00:13:18.360555 kubelet[2724]: I0711 00:13:18.360527 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/845e97b1-8363-4bf6-ac74-1da745887f97-xtables-lock\") pod \"kube-proxy-n57hs\" (UID: \"845e97b1-8363-4bf6-ac74-1da745887f97\") " pod="kube-system/kube-proxy-n57hs" Jul 11 00:13:18.360826 kubelet[2724]: I0711 00:13:18.360801 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/845e97b1-8363-4bf6-ac74-1da745887f97-lib-modules\") pod \"kube-proxy-n57hs\" (UID: \"845e97b1-8363-4bf6-ac74-1da745887f97\") " pod="kube-system/kube-proxy-n57hs" Jul 11 00:13:18.583745 systemd[1]: Created slice kubepods-besteffort-pod35a89959_0bdf_4684_9a7f_c321c21642c5.slice - libcontainer container kubepods-besteffort-pod35a89959_0bdf_4684_9a7f_c321c21642c5.slice. Jul 11 00:13:18.648547 containerd[1532]: time="2025-07-11T00:13:18.648227771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n57hs,Uid:845e97b1-8363-4bf6-ac74-1da745887f97,Namespace:kube-system,Attempt:0,}" Jul 11 00:13:18.662586 kubelet[2724]: I0711 00:13:18.662560 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/35a89959-0bdf-4684-9a7f-c321c21642c5-var-lib-calico\") pod \"tigera-operator-747864d56d-sjqqn\" (UID: \"35a89959-0bdf-4684-9a7f-c321c21642c5\") " pod="tigera-operator/tigera-operator-747864d56d-sjqqn" Jul 11 00:13:18.662902 kubelet[2724]: I0711 00:13:18.662867 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzlvf\" (UniqueName: \"kubernetes.io/projected/35a89959-0bdf-4684-9a7f-c321c21642c5-kube-api-access-zzlvf\") pod \"tigera-operator-747864d56d-sjqqn\" (UID: \"35a89959-0bdf-4684-9a7f-c321c21642c5\") " pod="tigera-operator/tigera-operator-747864d56d-sjqqn" Jul 11 00:13:18.680943 containerd[1532]: time="2025-07-11T00:13:18.680446449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:18.680943 containerd[1532]: time="2025-07-11T00:13:18.680493266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:18.680943 containerd[1532]: time="2025-07-11T00:13:18.680504469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:18.680943 containerd[1532]: time="2025-07-11T00:13:18.680575025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:18.692909 systemd[1]: run-containerd-runc-k8s.io-96a999047715def57f0e6da2e7fc2648548ccf2f53a935c25f9002454e6873fd-runc.60Rv99.mount: Deactivated successfully. Jul 11 00:13:18.701153 systemd[1]: Started cri-containerd-96a999047715def57f0e6da2e7fc2648548ccf2f53a935c25f9002454e6873fd.scope - libcontainer container 96a999047715def57f0e6da2e7fc2648548ccf2f53a935c25f9002454e6873fd. Jul 11 00:13:18.715760 containerd[1532]: time="2025-07-11T00:13:18.715737821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n57hs,Uid:845e97b1-8363-4bf6-ac74-1da745887f97,Namespace:kube-system,Attempt:0,} returns sandbox id \"96a999047715def57f0e6da2e7fc2648548ccf2f53a935c25f9002454e6873fd\"" Jul 11 00:13:18.718771 containerd[1532]: time="2025-07-11T00:13:18.718736394Z" level=info msg="CreateContainer within sandbox \"96a999047715def57f0e6da2e7fc2648548ccf2f53a935c25f9002454e6873fd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:13:18.724368 containerd[1532]: time="2025-07-11T00:13:18.724340561Z" level=info msg="CreateContainer within sandbox \"96a999047715def57f0e6da2e7fc2648548ccf2f53a935c25f9002454e6873fd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4fb4dd6303fd4afb01348478e916e51878c5c14953894eb0a2f611223ddd7add\"" Jul 11 00:13:18.724874 containerd[1532]: time="2025-07-11T00:13:18.724678542Z" level=info msg="StartContainer for \"4fb4dd6303fd4afb01348478e916e51878c5c14953894eb0a2f611223ddd7add\"" Jul 11 00:13:18.746135 systemd[1]: Started cri-containerd-4fb4dd6303fd4afb01348478e916e51878c5c14953894eb0a2f611223ddd7add.scope - libcontainer container 4fb4dd6303fd4afb01348478e916e51878c5c14953894eb0a2f611223ddd7add. Jul 11 00:13:18.767026 containerd[1532]: time="2025-07-11T00:13:18.766983286Z" level=info msg="StartContainer for \"4fb4dd6303fd4afb01348478e916e51878c5c14953894eb0a2f611223ddd7add\" returns successfully" Jul 11 00:13:18.887945 containerd[1532]: time="2025-07-11T00:13:18.887905463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-sjqqn,Uid:35a89959-0bdf-4684-9a7f-c321c21642c5,Namespace:tigera-operator,Attempt:0,}" Jul 11 00:13:18.913811 containerd[1532]: time="2025-07-11T00:13:18.913375204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:18.913909 containerd[1532]: time="2025-07-11T00:13:18.913637041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:18.913909 containerd[1532]: time="2025-07-11T00:13:18.913761021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:18.914549 containerd[1532]: time="2025-07-11T00:13:18.914095557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:18.929128 systemd[1]: Started cri-containerd-2004c6731ba885eeb232b07fdebfb5ce0f20e760e373fe20736810707696a570.scope - libcontainer container 2004c6731ba885eeb232b07fdebfb5ce0f20e760e373fe20736810707696a570. Jul 11 00:13:18.955713 containerd[1532]: time="2025-07-11T00:13:18.955685475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-sjqqn,Uid:35a89959-0bdf-4684-9a7f-c321c21642c5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2004c6731ba885eeb232b07fdebfb5ce0f20e760e373fe20736810707696a570\"" Jul 11 00:13:18.957661 containerd[1532]: time="2025-07-11T00:13:18.957370247Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 11 00:13:19.472037 kubelet[2724]: I0711 00:13:19.471878 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n57hs" podStartSLOduration=1.470349634 podStartE2EDuration="1.470349634s" podCreationTimestamp="2025-07-11 00:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:13:19.470300472 +0000 UTC m=+6.267713354" watchObservedRunningTime="2025-07-11 00:13:19.470349634 +0000 UTC m=+6.267762514" Jul 11 00:13:20.695513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233726003.mount: Deactivated successfully. Jul 11 00:13:21.279707 containerd[1532]: time="2025-07-11T00:13:21.279185144Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:21.279707 containerd[1532]: time="2025-07-11T00:13:21.279610547Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 11 00:13:21.279707 containerd[1532]: time="2025-07-11T00:13:21.279680168Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:21.281093 containerd[1532]: time="2025-07-11T00:13:21.281075596Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:21.281640 containerd[1532]: time="2025-07-11T00:13:21.281622249Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.324181302s" Jul 11 00:13:21.281677 containerd[1532]: time="2025-07-11T00:13:21.281641477Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 11 00:13:21.283486 containerd[1532]: time="2025-07-11T00:13:21.283465173Z" level=info msg="CreateContainer within sandbox \"2004c6731ba885eeb232b07fdebfb5ce0f20e760e373fe20736810707696a570\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 11 00:13:21.292452 containerd[1532]: 
time="2025-07-11T00:13:21.292422623Z" level=info msg="CreateContainer within sandbox \"2004c6731ba885eeb232b07fdebfb5ce0f20e760e373fe20736810707696a570\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6bacaf9c5e4945af02fc940176271e1c4edbd0b1c688bfdecf1cd42345faa19a\"" Jul 11 00:13:21.292837 containerd[1532]: time="2025-07-11T00:13:21.292818189Z" level=info msg="StartContainer for \"6bacaf9c5e4945af02fc940176271e1c4edbd0b1c688bfdecf1cd42345faa19a\"" Jul 11 00:13:21.313154 systemd[1]: Started cri-containerd-6bacaf9c5e4945af02fc940176271e1c4edbd0b1c688bfdecf1cd42345faa19a.scope - libcontainer container 6bacaf9c5e4945af02fc940176271e1c4edbd0b1c688bfdecf1cd42345faa19a. Jul 11 00:13:21.357886 containerd[1532]: time="2025-07-11T00:13:21.357832969Z" level=info msg="StartContainer for \"6bacaf9c5e4945af02fc940176271e1c4edbd0b1c688bfdecf1cd42345faa19a\" returns successfully" Jul 11 00:13:21.484147 kubelet[2724]: I0711 00:13:21.484113 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-sjqqn" podStartSLOduration=1.158409488 podStartE2EDuration="3.48410209s" podCreationTimestamp="2025-07-11 00:13:18 +0000 UTC" firstStartedPulling="2025-07-11 00:13:18.95679812 +0000 UTC m=+5.754210998" lastFinishedPulling="2025-07-11 00:13:21.282490723 +0000 UTC m=+8.079903600" observedRunningTime="2025-07-11 00:13:21.483980429 +0000 UTC m=+8.281393316" watchObservedRunningTime="2025-07-11 00:13:21.48410209 +0000 UTC m=+8.281514971" Jul 11 00:13:26.862127 sudo[1825]: pam_unix(sudo:session): session closed for user root Jul 11 00:13:26.864701 sshd[1822]: pam_unix(sshd:session): session closed for user core Jul 11 00:13:26.867275 systemd[1]: sshd@6-139.178.70.105:22-139.178.68.195:36888.service: Deactivated successfully. Jul 11 00:13:26.869565 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:13:26.869682 systemd[1]: session-9.scope: Consumed 2.998s CPU time, 142.5M memory peak, 0B memory swap peak. Jul 11 00:13:26.870632 systemd-logind[1514]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:13:26.873128 systemd-logind[1514]: Removed session 9. Jul 11 00:13:29.299360 systemd[1]: Created slice kubepods-besteffort-podd6bd5652_0cd0_436c_8bce_7177370c8c2a.slice - libcontainer container kubepods-besteffort-podd6bd5652_0cd0_436c_8bce_7177370c8c2a.slice. 
Jul 11 00:13:29.328018 kubelet[2724]: I0711 00:13:29.327236 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6bd5652-0cd0-436c-8bce-7177370c8c2a-tigera-ca-bundle\") pod \"calico-typha-84b4c74496-jp22j\" (UID: \"d6bd5652-0cd0-436c-8bce-7177370c8c2a\") " pod="calico-system/calico-typha-84b4c74496-jp22j" Jul 11 00:13:29.328018 kubelet[2724]: I0711 00:13:29.327292 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d6bd5652-0cd0-436c-8bce-7177370c8c2a-typha-certs\") pod \"calico-typha-84b4c74496-jp22j\" (UID: \"d6bd5652-0cd0-436c-8bce-7177370c8c2a\") " pod="calico-system/calico-typha-84b4c74496-jp22j" Jul 11 00:13:29.328018 kubelet[2724]: I0711 00:13:29.327309 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw9gh\" (UniqueName: \"kubernetes.io/projected/d6bd5652-0cd0-436c-8bce-7177370c8c2a-kube-api-access-zw9gh\") pod \"calico-typha-84b4c74496-jp22j\" (UID: \"d6bd5652-0cd0-436c-8bce-7177370c8c2a\") " pod="calico-system/calico-typha-84b4c74496-jp22j" Jul 11 00:13:29.507668 systemd[1]: Created slice kubepods-besteffort-pod5ae456d2_2fa2_41d5_851b_811ec77c085b.slice - libcontainer container kubepods-besteffort-pod5ae456d2_2fa2_41d5_851b_811ec77c085b.slice. Jul 11 00:13:29.528420 kubelet[2724]: I0711 00:13:29.528152 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ae456d2-2fa2-41d5-851b-811ec77c085b-xtables-lock\") pod \"calico-node-758df\" (UID: \"5ae456d2-2fa2-41d5-851b-811ec77c085b\") " pod="calico-system/calico-node-758df" Jul 11 00:13:29.528420 kubelet[2724]: I0711 00:13:29.528188 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5ae456d2-2fa2-41d5-851b-811ec77c085b-flexvol-driver-host\") pod \"calico-node-758df\" (UID: \"5ae456d2-2fa2-41d5-851b-811ec77c085b\") " pod="calico-system/calico-node-758df" Jul 11 00:13:29.528420 kubelet[2724]: I0711 00:13:29.528214 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5ae456d2-2fa2-41d5-851b-811ec77c085b-node-certs\") pod \"calico-node-758df\" (UID: \"5ae456d2-2fa2-41d5-851b-811ec77c085b\") " pod="calico-system/calico-node-758df" Jul 11 00:13:29.528420 kubelet[2724]: I0711 00:13:29.528228 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ae456d2-2fa2-41d5-851b-811ec77c085b-tigera-ca-bundle\") pod \"calico-node-758df\" (UID: \"5ae456d2-2fa2-41d5-851b-811ec77c085b\") " pod="calico-system/calico-node-758df" Jul 11 00:13:29.528420 kubelet[2724]: I0711 00:13:29.528239 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5ae456d2-2fa2-41d5-851b-811ec77c085b-cni-log-dir\") pod \"calico-node-758df\" (UID: \"5ae456d2-2fa2-41d5-851b-811ec77c085b\") " pod="calico-system/calico-node-758df" Jul 11 00:13:29.528675 kubelet[2724]: I0711 00:13:29.528249 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" 
(UniqueName: \"kubernetes.io/host-path/5ae456d2-2fa2-41d5-851b-811ec77c085b-cni-net-dir\") pod \"calico-node-758df\" (UID: \"5ae456d2-2fa2-41d5-851b-811ec77c085b\") " pod="calico-system/calico-node-758df" Jul 11 00:13:29.528675 kubelet[2724]: I0711 00:13:29.528259 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ae456d2-2fa2-41d5-851b-811ec77c085b-lib-modules\") pod \"calico-node-758df\" (UID: \"5ae456d2-2fa2-41d5-851b-811ec77c085b\") " pod="calico-system/calico-node-758df" Jul 11 00:13:29.528675 kubelet[2724]: I0711 00:13:29.528270 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb5cp\" (UniqueName: \"kubernetes.io/projected/5ae456d2-2fa2-41d5-851b-811ec77c085b-kube-api-access-lb5cp\") pod \"calico-node-758df\" (UID: \"5ae456d2-2fa2-41d5-851b-811ec77c085b\") " pod="calico-system/calico-node-758df" Jul 11 00:13:29.528675 kubelet[2724]: I0711 00:13:29.528284 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5ae456d2-2fa2-41d5-851b-811ec77c085b-policysync\") pod \"calico-node-758df\" (UID: \"5ae456d2-2fa2-41d5-851b-811ec77c085b\") " pod="calico-system/calico-node-758df" Jul 11 00:13:29.528675 kubelet[2724]: I0711 00:13:29.528298 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5ae456d2-2fa2-41d5-851b-811ec77c085b-var-lib-calico\") pod \"calico-node-758df\" (UID: \"5ae456d2-2fa2-41d5-851b-811ec77c085b\") " pod="calico-system/calico-node-758df" Jul 11 00:13:29.528797 kubelet[2724]: I0711 00:13:29.528314 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5ae456d2-2fa2-41d5-851b-811ec77c085b-cni-bin-dir\") pod \"calico-node-758df\" (UID: \"5ae456d2-2fa2-41d5-851b-811ec77c085b\") " pod="calico-system/calico-node-758df" Jul 11 00:13:29.528797 kubelet[2724]: I0711 00:13:29.528326 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5ae456d2-2fa2-41d5-851b-811ec77c085b-var-run-calico\") pod \"calico-node-758df\" (UID: \"5ae456d2-2fa2-41d5-851b-811ec77c085b\") " pod="calico-system/calico-node-758df" Jul 11 00:13:29.623049 containerd[1532]: time="2025-07-11T00:13:29.622944294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84b4c74496-jp22j,Uid:d6bd5652-0cd0-436c-8bce-7177370c8c2a,Namespace:calico-system,Attempt:0,}" Jul 11 00:13:29.639254 kubelet[2724]: E0711 00:13:29.637730 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.639254 kubelet[2724]: W0711 00:13:29.637753 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.639254 kubelet[2724]: E0711 00:13:29.638290 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:29.641061 kubelet[2724]: E0711 00:13:29.640350 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.641061 kubelet[2724]: W0711 00:13:29.640360 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.641061 kubelet[2724]: E0711 00:13:29.640372 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.642388 kubelet[2724]: E0711 00:13:29.642373 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.642388 kubelet[2724]: W0711 00:13:29.642384 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.642459 kubelet[2724]: E0711 00:13:29.642396 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.649895 kubelet[2724]: E0711 00:13:29.649784 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.649895 kubelet[2724]: W0711 00:13:29.649797 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.649895 kubelet[2724]: E0711 00:13:29.649831 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.650168 kubelet[2724]: E0711 00:13:29.649965 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.650168 kubelet[2724]: W0711 00:13:29.649971 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.650168 kubelet[2724]: E0711 00:13:29.649995 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.650168 kubelet[2724]: E0711 00:13:29.650145 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.650353 kubelet[2724]: W0711 00:13:29.650179 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.650353 kubelet[2724]: E0711 00:13:29.650191 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:29.650353 kubelet[2724]: E0711 00:13:29.650329 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.650353 kubelet[2724]: W0711 00:13:29.650336 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.650595 kubelet[2724]: E0711 00:13:29.650429 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.650595 kubelet[2724]: W0711 00:13:29.650457 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.650595 kubelet[2724]: E0711 00:13:29.650483 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.650595 kubelet[2724]: E0711 00:13:29.650498 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.650712 kubelet[2724]: E0711 00:13:29.650705 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.650735 kubelet[2724]: W0711 00:13:29.650712 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.650735 kubelet[2724]: E0711 00:13:29.650730 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.650972 kubelet[2724]: E0711 00:13:29.650856 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.650972 kubelet[2724]: W0711 00:13:29.650864 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.650972 kubelet[2724]: E0711 00:13:29.650873 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.651161 kubelet[2724]: E0711 00:13:29.651040 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.651161 kubelet[2724]: W0711 00:13:29.651047 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.651161 kubelet[2724]: E0711 00:13:29.651059 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:29.651215 kubelet[2724]: E0711 00:13:29.651170 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.651215 kubelet[2724]: W0711 00:13:29.651175 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.651215 kubelet[2724]: E0711 00:13:29.651180 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.651352 kubelet[2724]: E0711 00:13:29.651284 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.651352 kubelet[2724]: W0711 00:13:29.651290 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.651352 kubelet[2724]: E0711 00:13:29.651295 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.651430 kubelet[2724]: E0711 00:13:29.651406 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.651430 kubelet[2724]: W0711 00:13:29.651411 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.651430 kubelet[2724]: E0711 00:13:29.651416 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.651713 containerd[1532]: time="2025-07-11T00:13:29.651310009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:29.651891 containerd[1532]: time="2025-07-11T00:13:29.651865448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:29.651948 containerd[1532]: time="2025-07-11T00:13:29.651936032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:29.652125 kubelet[2724]: E0711 00:13:29.652114 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.652125 kubelet[2724]: W0711 00:13:29.652121 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.652173 kubelet[2724]: E0711 00:13:29.652129 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.652236 containerd[1532]: time="2025-07-11T00:13:29.652218691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:29.683176 systemd[1]: Started cri-containerd-50569dd5778ff66b1e235da237a9558130add56bcee6e4e5f85d4f9a698bd778.scope - libcontainer container 50569dd5778ff66b1e235da237a9558130add56bcee6e4e5f85d4f9a698bd778. Jul 11 00:13:29.717186 containerd[1532]: time="2025-07-11T00:13:29.716906191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84b4c74496-jp22j,Uid:d6bd5652-0cd0-436c-8bce-7177370c8c2a,Namespace:calico-system,Attempt:0,} returns sandbox id \"50569dd5778ff66b1e235da237a9558130add56bcee6e4e5f85d4f9a698bd778\"" Jul 11 00:13:29.810201 containerd[1532]: time="2025-07-11T00:13:29.810177803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 11 00:13:29.811351 containerd[1532]: time="2025-07-11T00:13:29.810933150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-758df,Uid:5ae456d2-2fa2-41d5-851b-811ec77c085b,Namespace:calico-system,Attempt:0,}" Jul 11 00:13:29.835863 containerd[1532]: time="2025-07-11T00:13:29.835483137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:29.835863 containerd[1532]: time="2025-07-11T00:13:29.835553288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:29.835863 containerd[1532]: time="2025-07-11T00:13:29.835575525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:29.835863 containerd[1532]: time="2025-07-11T00:13:29.835698937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:29.844513 kubelet[2724]: E0711 00:13:29.844479 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p695n" podUID="a987efe9-25c5-4a4f-8880-f0e8c56f315d" Jul 11 00:13:29.849000 kubelet[2724]: E0711 00:13:29.848979 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.849673 kubelet[2724]: W0711 00:13:29.849123 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.849673 kubelet[2724]: E0711 00:13:29.849141 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.849827 kubelet[2724]: E0711 00:13:29.849759 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.849827 kubelet[2724]: W0711 00:13:29.849767 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.849827 kubelet[2724]: E0711 00:13:29.849777 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:29.850743 kubelet[2724]: E0711 00:13:29.849992 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.850743 kubelet[2724]: W0711 00:13:29.850001 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.850743 kubelet[2724]: E0711 00:13:29.850706 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.850884 kubelet[2724]: E0711 00:13:29.850872 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.850884 kubelet[2724]: W0711 00:13:29.850882 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.850934 kubelet[2724]: E0711 00:13:29.850889 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.851002 kubelet[2724]: E0711 00:13:29.850988 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.851002 kubelet[2724]: W0711 00:13:29.850996 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.851002 kubelet[2724]: E0711 00:13:29.851002 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.851123 kubelet[2724]: E0711 00:13:29.851098 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.851123 kubelet[2724]: W0711 00:13:29.851113 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.851123 kubelet[2724]: E0711 00:13:29.851119 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.852400 kubelet[2724]: E0711 00:13:29.851201 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.852400 kubelet[2724]: W0711 00:13:29.851207 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.852400 kubelet[2724]: E0711 00:13:29.851213 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:29.852400 kubelet[2724]: E0711 00:13:29.851293 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.852400 kubelet[2724]: W0711 00:13:29.851300 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.852400 kubelet[2724]: E0711 00:13:29.851306 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.852400 kubelet[2724]: E0711 00:13:29.851401 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.852400 kubelet[2724]: W0711 00:13:29.851405 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.852400 kubelet[2724]: E0711 00:13:29.851410 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.852400 kubelet[2724]: E0711 00:13:29.851492 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.852638 kubelet[2724]: W0711 00:13:29.851498 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.852638 kubelet[2724]: E0711 00:13:29.851506 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.852638 kubelet[2724]: E0711 00:13:29.851588 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.852638 kubelet[2724]: W0711 00:13:29.851594 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.852638 kubelet[2724]: E0711 00:13:29.851601 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.852638 kubelet[2724]: E0711 00:13:29.851689 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.852638 kubelet[2724]: W0711 00:13:29.851694 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.852638 kubelet[2724]: E0711 00:13:29.851699 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:29.852638 kubelet[2724]: E0711 00:13:29.851784 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.852638 kubelet[2724]: W0711 00:13:29.851789 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.853731 kubelet[2724]: E0711 00:13:29.851794 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.853731 kubelet[2724]: E0711 00:13:29.851873 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.853731 kubelet[2724]: W0711 00:13:29.851877 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.853731 kubelet[2724]: E0711 00:13:29.851884 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.853731 kubelet[2724]: E0711 00:13:29.851962 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.853731 kubelet[2724]: W0711 00:13:29.851968 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.853731 kubelet[2724]: E0711 00:13:29.851973 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.853731 kubelet[2724]: E0711 00:13:29.852071 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.853731 kubelet[2724]: W0711 00:13:29.852078 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.853731 kubelet[2724]: E0711 00:13:29.852085 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.853995 kubelet[2724]: E0711 00:13:29.852176 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.853995 kubelet[2724]: W0711 00:13:29.852182 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.853995 kubelet[2724]: E0711 00:13:29.852187 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:29.853995 kubelet[2724]: E0711 00:13:29.852277 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.853995 kubelet[2724]: W0711 00:13:29.852282 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.853995 kubelet[2724]: E0711 00:13:29.852287 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.853995 kubelet[2724]: E0711 00:13:29.852368 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.853995 kubelet[2724]: W0711 00:13:29.852374 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.853995 kubelet[2724]: E0711 00:13:29.852381 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.853995 kubelet[2724]: E0711 00:13:29.852462 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.854250 kubelet[2724]: W0711 00:13:29.852467 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.854250 kubelet[2724]: E0711 00:13:29.852472 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.855211 systemd[1]: Started cri-containerd-33c19f2713d95bd527fb8f99bff21fdf6d8d75db91ed6b0f1b73c598fbb88ee1.scope - libcontainer container 33c19f2713d95bd527fb8f99bff21fdf6d8d75db91ed6b0f1b73c598fbb88ee1. Jul 11 00:13:29.878105 containerd[1532]: time="2025-07-11T00:13:29.877936050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-758df,Uid:5ae456d2-2fa2-41d5-851b-811ec77c085b,Namespace:calico-system,Attempt:0,} returns sandbox id \"33c19f2713d95bd527fb8f99bff21fdf6d8d75db91ed6b0f1b73c598fbb88ee1\"" Jul 11 00:13:29.939157 kubelet[2724]: E0711 00:13:29.939135 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.939157 kubelet[2724]: W0711 00:13:29.939151 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.939413 kubelet[2724]: E0711 00:13:29.939165 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:29.939413 kubelet[2724]: I0711 00:13:29.939186 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a987efe9-25c5-4a4f-8880-f0e8c56f315d-kubelet-dir\") pod \"csi-node-driver-p695n\" (UID: \"a987efe9-25c5-4a4f-8880-f0e8c56f315d\") " pod="calico-system/csi-node-driver-p695n" Jul 11 00:13:29.939699 kubelet[2724]: E0711 00:13:29.939311 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.939699 kubelet[2724]: W0711 00:13:29.939563 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.939699 kubelet[2724]: E0711 00:13:29.939572 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.939699 kubelet[2724]: I0711 00:13:29.939584 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a987efe9-25c5-4a4f-8880-f0e8c56f315d-registration-dir\") pod \"csi-node-driver-p695n\" (UID: \"a987efe9-25c5-4a4f-8880-f0e8c56f315d\") " pod="calico-system/csi-node-driver-p695n" Jul 11 00:13:29.940063 kubelet[2724]: E0711 00:13:29.939711 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.940063 kubelet[2724]: W0711 00:13:29.939717 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.940063 kubelet[2724]: E0711 00:13:29.939732 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.940063 kubelet[2724]: I0711 00:13:29.939742 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a987efe9-25c5-4a4f-8880-f0e8c56f315d-socket-dir\") pod \"csi-node-driver-p695n\" (UID: \"a987efe9-25c5-4a4f-8880-f0e8c56f315d\") " pod="calico-system/csi-node-driver-p695n" Jul 11 00:13:29.940063 kubelet[2724]: E0711 00:13:29.939875 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.940063 kubelet[2724]: W0711 00:13:29.940029 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.940063 kubelet[2724]: E0711 00:13:29.940041 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:29.940063 kubelet[2724]: I0711 00:13:29.940052 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a987efe9-25c5-4a4f-8880-f0e8c56f315d-varrun\") pod \"csi-node-driver-p695n\" (UID: \"a987efe9-25c5-4a4f-8880-f0e8c56f315d\") " pod="calico-system/csi-node-driver-p695n" Jul 11 00:13:29.940325 kubelet[2724]: E0711 00:13:29.940238 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.940325 kubelet[2724]: W0711 00:13:29.940246 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.940325 kubelet[2724]: E0711 00:13:29.940253 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.940325 kubelet[2724]: I0711 00:13:29.940263 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rxxb\" (UniqueName: \"kubernetes.io/projected/a987efe9-25c5-4a4f-8880-f0e8c56f315d-kube-api-access-7rxxb\") pod \"csi-node-driver-p695n\" (UID: \"a987efe9-25c5-4a4f-8880-f0e8c56f315d\") " pod="calico-system/csi-node-driver-p695n" Jul 11 00:13:29.940838 kubelet[2724]: E0711 00:13:29.940826 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.940869 kubelet[2724]: W0711 00:13:29.940839 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.940889 kubelet[2724]: E0711 00:13:29.940870 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.941105 kubelet[2724]: E0711 00:13:29.941094 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.941105 kubelet[2724]: W0711 00:13:29.941103 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.941825 kubelet[2724]: E0711 00:13:29.941162 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.942027 kubelet[2724]: E0711 00:13:29.941918 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.942062 kubelet[2724]: W0711 00:13:29.942027 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.942152 kubelet[2724]: E0711 00:13:29.942081 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:29.942207 kubelet[2724]: E0711 00:13:29.942191 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.942207 kubelet[2724]: W0711 00:13:29.942199 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.942325 kubelet[2724]: E0711 00:13:29.942249 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.942365 kubelet[2724]: E0711 00:13:29.942331 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.942365 kubelet[2724]: W0711 00:13:29.942337 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.942365 kubelet[2724]: E0711 00:13:29.942352 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.942543 kubelet[2724]: E0711 00:13:29.942533 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.942577 kubelet[2724]: W0711 00:13:29.942546 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.942661 kubelet[2724]: E0711 00:13:29.942599 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.942756 kubelet[2724]: E0711 00:13:29.942748 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.942756 kubelet[2724]: W0711 00:13:29.942755 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.942864 kubelet[2724]: E0711 00:13:29.942761 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.942889 kubelet[2724]: E0711 00:13:29.942879 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.942889 kubelet[2724]: W0711 00:13:29.942884 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.942936 kubelet[2724]: E0711 00:13:29.942889 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:29.943689 kubelet[2724]: E0711 00:13:29.943664 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.943689 kubelet[2724]: W0711 00:13:29.943673 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.943689 kubelet[2724]: E0711 00:13:29.943680 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:29.943824 kubelet[2724]: E0711 00:13:29.943798 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:29.943824 kubelet[2724]: W0711 00:13:29.943804 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:29.943824 kubelet[2724]: E0711 00:13:29.943816 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.042052 kubelet[2724]: E0711 00:13:30.041741 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.042052 kubelet[2724]: W0711 00:13:30.041758 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.042052 kubelet[2724]: E0711 00:13:30.041773 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.043584 kubelet[2724]: E0711 00:13:30.043574 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.044029 kubelet[2724]: W0711 00:13:30.043881 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.044029 kubelet[2724]: E0711 00:13:30.043903 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.044503 kubelet[2724]: E0711 00:13:30.044421 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.044668 kubelet[2724]: W0711 00:13:30.044546 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.045326 kubelet[2724]: E0711 00:13:30.045209 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:30.045326 kubelet[2724]: E0711 00:13:30.045269 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.045326 kubelet[2724]: W0711 00:13:30.045276 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.045526 kubelet[2724]: E0711 00:13:30.045483 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.045526 kubelet[2724]: W0711 00:13:30.045490 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.045643 kubelet[2724]: E0711 00:13:30.045636 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.045734 kubelet[2724]: W0711 00:13:30.045676 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.045734 kubelet[2724]: E0711 00:13:30.045685 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.045822 kubelet[2724]: E0711 00:13:30.045800 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.045822 kubelet[2724]: E0711 00:13:30.045812 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.046043 kubelet[2724]: E0711 00:13:30.045942 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.046043 kubelet[2724]: W0711 00:13:30.045949 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.046043 kubelet[2724]: E0711 00:13:30.045957 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.046192 kubelet[2724]: E0711 00:13:30.046185 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.046278 kubelet[2724]: W0711 00:13:30.046234 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.046278 kubelet[2724]: E0711 00:13:30.046249 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:30.046495 kubelet[2724]: E0711 00:13:30.046438 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.046495 kubelet[2724]: W0711 00:13:30.046445 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.046495 kubelet[2724]: E0711 00:13:30.046457 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.046739 kubelet[2724]: E0711 00:13:30.046658 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.046739 kubelet[2724]: W0711 00:13:30.046666 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.046739 kubelet[2724]: E0711 00:13:30.046675 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.046895 kubelet[2724]: E0711 00:13:30.046861 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.046895 kubelet[2724]: W0711 00:13:30.046868 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.047077 kubelet[2724]: E0711 00:13:30.046940 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.047246 kubelet[2724]: E0711 00:13:30.047200 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.047246 kubelet[2724]: W0711 00:13:30.047207 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.047401 kubelet[2724]: E0711 00:13:30.047360 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.047401 kubelet[2724]: W0711 00:13:30.047367 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.048535 kubelet[2724]: E0711 00:13:30.047878 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.048535 kubelet[2724]: E0711 00:13:30.047891 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:30.048695 kubelet[2724]: E0711 00:13:30.048624 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.048695 kubelet[2724]: W0711 00:13:30.048632 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.048695 kubelet[2724]: E0711 00:13:30.048682 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.048902 kubelet[2724]: E0711 00:13:30.048852 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.048902 kubelet[2724]: W0711 00:13:30.048866 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.048987 kubelet[2724]: E0711 00:13:30.048942 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.049351 kubelet[2724]: E0711 00:13:30.049336 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.049351 kubelet[2724]: W0711 00:13:30.049344 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.050134 kubelet[2724]: E0711 00:13:30.050052 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.050935 kubelet[2724]: E0711 00:13:30.050891 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.050935 kubelet[2724]: W0711 00:13:30.050907 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.051073 kubelet[2724]: E0711 00:13:30.050979 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:30.051902 kubelet[2724]: E0711 00:13:30.051874 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.052040 kubelet[2724]: W0711 00:13:30.051967 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.052972 kubelet[2724]: E0711 00:13:30.052965 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.053116 kubelet[2724]: W0711 00:13:30.053017 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.053116 kubelet[2724]: E0711 00:13:30.053058 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.053116 kubelet[2724]: E0711 00:13:30.053078 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.054978 kubelet[2724]: E0711 00:13:30.054850 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.054978 kubelet[2724]: W0711 00:13:30.054859 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.054978 kubelet[2724]: E0711 00:13:30.054966 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.055328 kubelet[2724]: E0711 00:13:30.054989 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.055328 kubelet[2724]: W0711 00:13:30.055105 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.055328 kubelet[2724]: E0711 00:13:30.055123 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.055328 kubelet[2724]: E0711 00:13:30.055224 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.055328 kubelet[2724]: W0711 00:13:30.055230 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.055328 kubelet[2724]: E0711 00:13:30.055248 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:30.056020 kubelet[2724]: E0711 00:13:30.055999 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.056061 kubelet[2724]: W0711 00:13:30.056015 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.056061 kubelet[2724]: E0711 00:13:30.056035 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.056611 kubelet[2724]: E0711 00:13:30.056598 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.056611 kubelet[2724]: W0711 00:13:30.056607 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.056753 kubelet[2724]: E0711 00:13:30.056616 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.057765 kubelet[2724]: E0711 00:13:30.057756 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.057765 kubelet[2724]: W0711 00:13:30.057763 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.057896 kubelet[2724]: E0711 00:13:30.057814 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:30.058369 kubelet[2724]: E0711 00:13:30.058357 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:30.058369 kubelet[2724]: W0711 00:13:30.058367 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:30.058436 kubelet[2724]: E0711 00:13:30.058374 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:31.245077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276570152.mount: Deactivated successfully. 
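Editor's note: the block of kubelet messages above is the FlexVolume probe loop. For every vendor~driver subdirectory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ the kubelet execs the driver binary with the single argument init and expects a JSON status object on stdout. Because the nodeagent~uds/uds binary does not exist yet, the call produces empty output, the JSON decode fails with "unexpected end of JSON input", and that error is wrapped into the "Error dynamically probing plugins" message. As a rough illustration of the contract (a generic sketch, not Calico's actual uds driver), a minimal driver only needs to answer init with well-formed JSON:

```go
// Minimal sketch of a FlexVolume driver's "init" handling; illustrative only,
// not Calico's real uds driver. The kubelet runs `<driver> init` and decodes
// the JSON printed on stdout, so empty output triggers the unmarshal errors
// seen in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus approximates the JSON shape that driver-call.go unmarshals.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus, exitCode int) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
	os.Exit(exitCode)
}

func main() {
	cmd := ""
	if len(os.Args) > 1 {
		cmd = os.Args[1]
	}
	if cmd == "init" {
		// attach=false tells the kubelet this driver needs no attach/detach calls.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}, 0)
	}
	// Every other command should still return well-formed JSON, never empty output.
	reply(driverStatus{Status: "Not supported", Message: "unsupported command: " + cmd}, 1)
}
```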
Jul 11 00:13:31.399403 kubelet[2724]: E0711 00:13:31.399376 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p695n" podUID="a987efe9-25c5-4a4f-8880-f0e8c56f315d" Jul 11 00:13:31.767534 containerd[1532]: time="2025-07-11T00:13:31.767501441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:31.768291 containerd[1532]: time="2025-07-11T00:13:31.768266457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 11 00:13:31.768995 containerd[1532]: time="2025-07-11T00:13:31.768977322Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:31.769955 containerd[1532]: time="2025-07-11T00:13:31.769930324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:31.770558 containerd[1532]: time="2025-07-11T00:13:31.770331428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.959349601s" Jul 11 00:13:31.770558 containerd[1532]: time="2025-07-11T00:13:31.770352693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 11 00:13:31.771077 containerd[1532]: time="2025-07-11T00:13:31.771028316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 11 00:13:31.783829 containerd[1532]: time="2025-07-11T00:13:31.782741971Z" level=info msg="CreateContainer within sandbox \"50569dd5778ff66b1e235da237a9558130add56bcee6e4e5f85d4f9a698bd778\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 11 00:13:31.793280 containerd[1532]: time="2025-07-11T00:13:31.793252363Z" level=info msg="CreateContainer within sandbox \"50569dd5778ff66b1e235da237a9558130add56bcee6e4e5f85d4f9a698bd778\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0f744ed7f8318d321029dd039af6065f17c7c1ebec3a2d4e0f6760317a6bccc7\"" Jul 11 00:13:31.793740 containerd[1532]: time="2025-07-11T00:13:31.793717704Z" level=info msg="StartContainer for \"0f744ed7f8318d321029dd039af6065f17c7c1ebec3a2d4e0f6760317a6bccc7\"" Jul 11 00:13:31.832271 systemd[1]: Started cri-containerd-0f744ed7f8318d321029dd039af6065f17c7c1ebec3a2d4e0f6760317a6bccc7.scope - libcontainer container 0f744ed7f8318d321029dd039af6065f17c7c1ebec3a2d4e0f6760317a6bccc7. 
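Editor's note: the recurring "network is not ready ... cni plugin not initialized" error for csi-node-driver-p695n means the container runtime has not yet found a CNI network configuration, so the kubelet cannot sync pods that need pod networking; it keeps repeating until Calico's install-cni container, created further down in this log, writes one. A stdlib-only way to see whether a node has reached that point is sketched below; the /etc/cni/net.d path is the conventional default, and this is an illustrative check, not containerd's own readiness logic:

```go
// Illustrative readiness probe for CNI configuration on a node. Assumes the
// default /etc/cni/net.d config directory; containerd's CRI plugin performs
// its own, more thorough validation before reporting NetworkReady=true.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v (network not ready)\n", confDir, err)
		return
	}
	found := 0
	for _, e := range entries {
		ext := filepath.Ext(e.Name())
		if !e.IsDir() && (ext == ".conf" || ext == ".conflist" || ext == ".json") {
			fmt.Printf("found CNI config: %s\n", filepath.Join(confDir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI config yet; pods needing networking will stay unsynced")
	}
}
```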
Jul 11 00:13:31.867942 containerd[1532]: time="2025-07-11T00:13:31.867915823Z" level=info msg="StartContainer for \"0f744ed7f8318d321029dd039af6065f17c7c1ebec3a2d4e0f6760317a6bccc7\" returns successfully" Jul 11 00:13:32.532828 kubelet[2724]: I0711 00:13:32.532643 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84b4c74496-jp22j" podStartSLOduration=1.5523824880000001 podStartE2EDuration="3.532625899s" podCreationTimestamp="2025-07-11 00:13:29 +0000 UTC" firstStartedPulling="2025-07-11 00:13:29.790696048 +0000 UTC m=+16.588108928" lastFinishedPulling="2025-07-11 00:13:31.770939461 +0000 UTC m=+18.568352339" observedRunningTime="2025-07-11 00:13:32.532075662 +0000 UTC m=+19.329488556" watchObservedRunningTime="2025-07-11 00:13:32.532625899 +0000 UTC m=+19.330038794" Jul 11 00:13:32.570092 kubelet[2724]: E0711 00:13:32.569830 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.570092 kubelet[2724]: W0711 00:13:32.569844 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.570092 kubelet[2724]: E0711 00:13:32.569869 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.570092 kubelet[2724]: E0711 00:13:32.569980 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.570092 kubelet[2724]: W0711 00:13:32.569986 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.570092 kubelet[2724]: E0711 00:13:32.569991 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.571072 kubelet[2724]: E0711 00:13:32.570414 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.571072 kubelet[2724]: W0711 00:13:32.570420 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.571072 kubelet[2724]: E0711 00:13:32.570425 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.571072 kubelet[2724]: E0711 00:13:32.570690 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.571072 kubelet[2724]: W0711 00:13:32.570705 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.571072 kubelet[2724]: E0711 00:13:32.570712 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:32.571072 kubelet[2724]: E0711 00:13:32.570937 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.571072 kubelet[2724]: W0711 00:13:32.570943 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.571072 kubelet[2724]: E0711 00:13:32.570948 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.571615 kubelet[2724]: E0711 00:13:32.571232 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.571615 kubelet[2724]: W0711 00:13:32.571237 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.571615 kubelet[2724]: E0711 00:13:32.571243 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.571615 kubelet[2724]: E0711 00:13:32.571383 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.571615 kubelet[2724]: W0711 00:13:32.571396 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.571615 kubelet[2724]: E0711 00:13:32.571403 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.572169 kubelet[2724]: E0711 00:13:32.571916 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.572169 kubelet[2724]: W0711 00:13:32.571922 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.572169 kubelet[2724]: E0711 00:13:32.571928 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.572169 kubelet[2724]: E0711 00:13:32.572092 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.572169 kubelet[2724]: W0711 00:13:32.572104 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.572169 kubelet[2724]: E0711 00:13:32.572113 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:32.572734 kubelet[2724]: E0711 00:13:32.572225 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.572734 kubelet[2724]: W0711 00:13:32.572237 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.572734 kubelet[2724]: E0711 00:13:32.572244 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.572734 kubelet[2724]: E0711 00:13:32.572371 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.572734 kubelet[2724]: W0711 00:13:32.572375 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.572734 kubelet[2724]: E0711 00:13:32.572380 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.572734 kubelet[2724]: E0711 00:13:32.572544 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.572734 kubelet[2724]: W0711 00:13:32.572550 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.572734 kubelet[2724]: E0711 00:13:32.572555 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.573068 kubelet[2724]: E0711 00:13:32.572968 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.573068 kubelet[2724]: W0711 00:13:32.572975 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.573068 kubelet[2724]: E0711 00:13:32.572982 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.573319 kubelet[2724]: E0711 00:13:32.573128 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.573319 kubelet[2724]: W0711 00:13:32.573134 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.573319 kubelet[2724]: E0711 00:13:32.573142 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:32.573319 kubelet[2724]: E0711 00:13:32.573256 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.573319 kubelet[2724]: W0711 00:13:32.573262 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.573319 kubelet[2724]: E0711 00:13:32.573267 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.661306 kubelet[2724]: E0711 00:13:32.661221 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.661306 kubelet[2724]: W0711 00:13:32.661241 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.661306 kubelet[2724]: E0711 00:13:32.661260 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.661560 kubelet[2724]: E0711 00:13:32.661431 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.661560 kubelet[2724]: W0711 00:13:32.661438 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.661560 kubelet[2724]: E0711 00:13:32.661450 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.661867 kubelet[2724]: E0711 00:13:32.661767 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.661867 kubelet[2724]: W0711 00:13:32.661780 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.661867 kubelet[2724]: E0711 00:13:32.661797 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.662321 kubelet[2724]: E0711 00:13:32.661961 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.662321 kubelet[2724]: W0711 00:13:32.661970 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.662321 kubelet[2724]: E0711 00:13:32.661981 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:32.662321 kubelet[2724]: E0711 00:13:32.662151 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.662321 kubelet[2724]: W0711 00:13:32.662158 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.662321 kubelet[2724]: E0711 00:13:32.662165 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.662708 kubelet[2724]: E0711 00:13:32.662575 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.662708 kubelet[2724]: W0711 00:13:32.662586 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.662708 kubelet[2724]: E0711 00:13:32.662600 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.662827 kubelet[2724]: E0711 00:13:32.662811 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.662827 kubelet[2724]: W0711 00:13:32.662821 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.662879 kubelet[2724]: E0711 00:13:32.662834 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.663016 kubelet[2724]: E0711 00:13:32.662992 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.663016 kubelet[2724]: W0711 00:13:32.663002 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.663097 kubelet[2724]: E0711 00:13:32.663029 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.663170 kubelet[2724]: E0711 00:13:32.663154 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.663170 kubelet[2724]: W0711 00:13:32.663167 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.663248 kubelet[2724]: E0711 00:13:32.663181 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:32.663312 kubelet[2724]: E0711 00:13:32.663300 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.663345 kubelet[2724]: W0711 00:13:32.663311 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.663345 kubelet[2724]: E0711 00:13:32.663324 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.663480 kubelet[2724]: E0711 00:13:32.663467 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.663589 kubelet[2724]: W0711 00:13:32.663478 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.663589 kubelet[2724]: E0711 00:13:32.663491 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.663893 kubelet[2724]: E0711 00:13:32.663784 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.663893 kubelet[2724]: W0711 00:13:32.663795 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.663893 kubelet[2724]: E0711 00:13:32.663813 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.664141 kubelet[2724]: E0711 00:13:32.664046 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.664141 kubelet[2724]: W0711 00:13:32.664055 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.664141 kubelet[2724]: E0711 00:13:32.664069 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.664329 kubelet[2724]: E0711 00:13:32.664272 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.664329 kubelet[2724]: W0711 00:13:32.664281 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.664329 kubelet[2724]: E0711 00:13:32.664296 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:32.664592 kubelet[2724]: E0711 00:13:32.664554 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.664592 kubelet[2724]: W0711 00:13:32.664562 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.664592 kubelet[2724]: E0711 00:13:32.664576 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.664726 kubelet[2724]: E0711 00:13:32.664709 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.664726 kubelet[2724]: W0711 00:13:32.664724 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.664879 kubelet[2724]: E0711 00:13:32.664739 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.665144 kubelet[2724]: E0711 00:13:32.664947 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.665144 kubelet[2724]: W0711 00:13:32.664956 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.665144 kubelet[2724]: E0711 00:13:32.664963 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:32.665376 kubelet[2724]: E0711 00:13:32.665367 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:32.665425 kubelet[2724]: W0711 00:13:32.665417 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:32.665658 kubelet[2724]: E0711 00:13:32.665524 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:33.406656 kubelet[2724]: E0711 00:13:33.406551 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p695n" podUID="a987efe9-25c5-4a4f-8880-f0e8c56f315d" Jul 11 00:13:33.515039 containerd[1532]: time="2025-07-11T00:13:33.514467818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:33.528529 kubelet[2724]: I0711 00:13:33.528388 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:13:33.552298 containerd[1532]: time="2025-07-11T00:13:33.552259040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 11 00:13:33.560609 containerd[1532]: time="2025-07-11T00:13:33.560518699Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:33.580299 kubelet[2724]: E0711 00:13:33.580195 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.580299 kubelet[2724]: W0711 00:13:33.580209 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.580299 kubelet[2724]: E0711 00:13:33.580225 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.580673 kubelet[2724]: E0711 00:13:33.580482 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.580673 kubelet[2724]: W0711 00:13:33.580490 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.580673 kubelet[2724]: E0711 00:13:33.580496 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.580810 kubelet[2724]: E0711 00:13:33.580749 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.580810 kubelet[2724]: W0711 00:13:33.580756 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.580810 kubelet[2724]: E0711 00:13:33.580762 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:33.580910 kubelet[2724]: E0711 00:13:33.580902 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.580945 kubelet[2724]: W0711 00:13:33.580939 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.581041 kubelet[2724]: E0711 00:13:33.580973 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.581103 kubelet[2724]: E0711 00:13:33.581097 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.581141 kubelet[2724]: W0711 00:13:33.581135 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.581178 kubelet[2724]: E0711 00:13:33.581172 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.581353 kubelet[2724]: E0711 00:13:33.581303 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.581353 kubelet[2724]: W0711 00:13:33.581309 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.581353 kubelet[2724]: E0711 00:13:33.581314 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.581447 kubelet[2724]: E0711 00:13:33.581442 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.581524 kubelet[2724]: W0711 00:13:33.581475 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.581524 kubelet[2724]: E0711 00:13:33.581482 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.581613 kubelet[2724]: E0711 00:13:33.581606 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.581648 kubelet[2724]: W0711 00:13:33.581642 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.581680 kubelet[2724]: E0711 00:13:33.581675 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:33.581851 kubelet[2724]: E0711 00:13:33.581802 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.581851 kubelet[2724]: W0711 00:13:33.581808 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.581851 kubelet[2724]: E0711 00:13:33.581813 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.581951 kubelet[2724]: E0711 00:13:33.581946 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.582038 kubelet[2724]: W0711 00:13:33.581982 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.582038 kubelet[2724]: E0711 00:13:33.581990 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.582116 kubelet[2724]: E0711 00:13:33.582110 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.582148 kubelet[2724]: W0711 00:13:33.582143 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.582232 kubelet[2724]: E0711 00:13:33.582179 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.582348 kubelet[2724]: E0711 00:13:33.582289 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.582348 kubelet[2724]: W0711 00:13:33.582296 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.582348 kubelet[2724]: E0711 00:13:33.582301 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.582440 kubelet[2724]: E0711 00:13:33.582433 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.582521 kubelet[2724]: W0711 00:13:33.582472 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.582521 kubelet[2724]: E0711 00:13:33.582480 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:33.582598 kubelet[2724]: E0711 00:13:33.582592 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.582631 kubelet[2724]: W0711 00:13:33.582626 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.582704 kubelet[2724]: E0711 00:13:33.582660 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.582801 kubelet[2724]: E0711 00:13:33.582754 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.582801 kubelet[2724]: W0711 00:13:33.582760 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.582801 kubelet[2724]: E0711 00:13:33.582766 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.589408 containerd[1532]: time="2025-07-11T00:13:33.589374149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:33.589901 containerd[1532]: time="2025-07-11T00:13:33.589669545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.818622979s" Jul 11 00:13:33.589901 containerd[1532]: time="2025-07-11T00:13:33.589699017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 11 00:13:33.591522 containerd[1532]: time="2025-07-11T00:13:33.591508737Z" level=info msg="CreateContainer within sandbox \"33c19f2713d95bd527fb8f99bff21fdf6d8d75db91ed6b0f1b73c598fbb88ee1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 00:13:33.668275 kubelet[2724]: E0711 00:13:33.668167 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.668275 kubelet[2724]: W0711 00:13:33.668186 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.668275 kubelet[2724]: E0711 00:13:33.668205 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:33.668947 kubelet[2724]: E0711 00:13:33.668348 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.668947 kubelet[2724]: W0711 00:13:33.668354 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.668947 kubelet[2724]: E0711 00:13:33.668366 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.668947 kubelet[2724]: E0711 00:13:33.668483 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.668947 kubelet[2724]: W0711 00:13:33.668490 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.668947 kubelet[2724]: E0711 00:13:33.668497 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.668947 kubelet[2724]: E0711 00:13:33.668739 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.668947 kubelet[2724]: W0711 00:13:33.668747 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.668947 kubelet[2724]: E0711 00:13:33.668756 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.670248 kubelet[2724]: E0711 00:13:33.669466 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.670248 kubelet[2724]: W0711 00:13:33.669510 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.670248 kubelet[2724]: E0711 00:13:33.669527 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.670248 kubelet[2724]: E0711 00:13:33.669800 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.670248 kubelet[2724]: W0711 00:13:33.669810 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.670248 kubelet[2724]: E0711 00:13:33.669887 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:33.670248 kubelet[2724]: E0711 00:13:33.670098 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.670248 kubelet[2724]: W0711 00:13:33.670104 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.670248 kubelet[2724]: E0711 00:13:33.670180 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.670440 kubelet[2724]: E0711 00:13:33.670261 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.670440 kubelet[2724]: W0711 00:13:33.670267 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.670440 kubelet[2724]: E0711 00:13:33.670323 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.670440 kubelet[2724]: E0711 00:13:33.670410 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.670440 kubelet[2724]: W0711 00:13:33.670416 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.670440 kubelet[2724]: E0711 00:13:33.670424 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.670591 kubelet[2724]: E0711 00:13:33.670574 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.670591 kubelet[2724]: W0711 00:13:33.670586 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.670648 kubelet[2724]: E0711 00:13:33.670600 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.670817 kubelet[2724]: E0711 00:13:33.670800 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.670817 kubelet[2724]: W0711 00:13:33.670812 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.670875 kubelet[2724]: E0711 00:13:33.670820 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:33.670987 kubelet[2724]: E0711 00:13:33.670972 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.670987 kubelet[2724]: W0711 00:13:33.670983 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.671055 kubelet[2724]: E0711 00:13:33.670995 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.671147 kubelet[2724]: E0711 00:13:33.671133 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.671147 kubelet[2724]: W0711 00:13:33.671144 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.671197 kubelet[2724]: E0711 00:13:33.671152 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.671291 kubelet[2724]: E0711 00:13:33.671276 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.671291 kubelet[2724]: W0711 00:13:33.671287 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.671347 kubelet[2724]: E0711 00:13:33.671300 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.671490 kubelet[2724]: E0711 00:13:33.671477 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.671490 kubelet[2724]: W0711 00:13:33.671487 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.671544 kubelet[2724]: E0711 00:13:33.671501 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.672017 kubelet[2724]: E0711 00:13:33.671991 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.672053 kubelet[2724]: W0711 00:13:33.672027 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.672053 kubelet[2724]: E0711 00:13:33.672043 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:13:33.672289 kubelet[2724]: E0711 00:13:33.672273 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.672289 kubelet[2724]: W0711 00:13:33.672285 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.672352 kubelet[2724]: E0711 00:13:33.672296 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.672517 kubelet[2724]: E0711 00:13:33.672504 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:33.672555 kubelet[2724]: W0711 00:13:33.672516 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:33.672555 kubelet[2724]: E0711 00:13:33.672526 2724 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:33.693579 containerd[1532]: time="2025-07-11T00:13:33.693540678Z" level=info msg="CreateContainer within sandbox \"33c19f2713d95bd527fb8f99bff21fdf6d8d75db91ed6b0f1b73c598fbb88ee1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c18b45f15cccb115cde3042f237d06f88a25c299edf213be0ba5b6a46de6da05\"" Jul 11 00:13:33.695290 containerd[1532]: time="2025-07-11T00:13:33.694289730Z" level=info msg="StartContainer for \"c18b45f15cccb115cde3042f237d06f88a25c299edf213be0ba5b6a46de6da05\"" Jul 11 00:13:33.725629 systemd[1]: run-containerd-runc-k8s.io-c18b45f15cccb115cde3042f237d06f88a25c299edf213be0ba5b6a46de6da05-runc.A4LRkS.mount: Deactivated successfully. Jul 11 00:13:33.732140 systemd[1]: Started cri-containerd-c18b45f15cccb115cde3042f237d06f88a25c299edf213be0ba5b6a46de6da05.scope - libcontainer container c18b45f15cccb115cde3042f237d06f88a25c299edf213be0ba5b6a46de6da05. Jul 11 00:13:33.752320 containerd[1532]: time="2025-07-11T00:13:33.752212627Z" level=info msg="StartContainer for \"c18b45f15cccb115cde3042f237d06f88a25c299edf213be0ba5b6a46de6da05\" returns successfully" Jul 11 00:13:33.760458 systemd[1]: cri-containerd-c18b45f15cccb115cde3042f237d06f88a25c299edf213be0ba5b6a46de6da05.scope: Deactivated successfully. Jul 11 00:13:33.776183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c18b45f15cccb115cde3042f237d06f88a25c299edf213be0ba5b6a46de6da05-rootfs.mount: Deactivated successfully. 
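Editor's note: the flexvol-driver container above comes from the pod2daemon-flexvol image pulled a few entries earlier; it is an init container that installs Calico's uds FlexVolume binary into the kubelet plugin directory and then exits, which is why its cri-containerd scope is deactivated right after StartContainer returns and the shim is cleaned up just below. Once the binary is in place, the kubelet's next FlexVolume probe of nodeagent~uds should succeed instead of producing the unmarshal errors that dominate this log. That probe can be reproduced by hand with a sketch like the following (the path is taken from the log; this mirrors, but is not, the kubelet's driver-call.go):

```go
// Illustrative manual re-run of the kubelet's FlexVolume "init" probe for the
// nodeagent~uds driver installed by the flexvol-driver container. The binary
// path is taken from the log above; this is a sketch, not kubelet code.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		fmt.Printf("driver call failed: %v, output: %q\n", err, string(out))
		return
	}
	var status struct {
		Status       string          `json:"status"`
		Capabilities map[string]bool `json:"capabilities"`
	}
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Printf("failed to unmarshal driver output %q: %v\n", string(out), err)
		return
	}
	fmt.Printf("init: status=%q capabilities=%v\n", status.Status, status.Capabilities)
}
```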
Jul 11 00:13:33.851318 containerd[1532]: time="2025-07-11T00:13:33.816566589Z" level=info msg="shim disconnected" id=c18b45f15cccb115cde3042f237d06f88a25c299edf213be0ba5b6a46de6da05 namespace=k8s.io Jul 11 00:13:33.851507 containerd[1532]: time="2025-07-11T00:13:33.851317777Z" level=warning msg="cleaning up after shim disconnected" id=c18b45f15cccb115cde3042f237d06f88a25c299edf213be0ba5b6a46de6da05 namespace=k8s.io Jul 11 00:13:33.851507 containerd[1532]: time="2025-07-11T00:13:33.851340388Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:13:34.531555 containerd[1532]: time="2025-07-11T00:13:34.531274133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 00:13:35.399937 kubelet[2724]: E0711 00:13:35.399569 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p695n" podUID="a987efe9-25c5-4a4f-8880-f0e8c56f315d" Jul 11 00:13:36.425483 kubelet[2724]: I0711 00:13:36.425457 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:13:37.177144 containerd[1532]: time="2025-07-11T00:13:37.177073854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:37.178281 containerd[1532]: time="2025-07-11T00:13:37.177631107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 11 00:13:37.178281 containerd[1532]: time="2025-07-11T00:13:37.177999521Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:37.179912 containerd[1532]: time="2025-07-11T00:13:37.179877420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:37.180601 containerd[1532]: time="2025-07-11T00:13:37.180483617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.649182437s" Jul 11 00:13:37.180601 containerd[1532]: time="2025-07-11T00:13:37.180505920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 11 00:13:37.183028 containerd[1532]: time="2025-07-11T00:13:37.182896499Z" level=info msg="CreateContainer within sandbox \"33c19f2713d95bd527fb8f99bff21fdf6d8d75db91ed6b0f1b73c598fbb88ee1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 00:13:37.199764 containerd[1532]: time="2025-07-11T00:13:37.199731108Z" level=info msg="CreateContainer within sandbox \"33c19f2713d95bd527fb8f99bff21fdf6d8d75db91ed6b0f1b73c598fbb88ee1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2bfaf82f1fed8907d783010dec7983cc40f6170dc7aa3f44f018d3f8cac1986d\"" Jul 11 00:13:37.200264 containerd[1532]: time="2025-07-11T00:13:37.200143369Z" level=info msg="StartContainer for 
\"2bfaf82f1fed8907d783010dec7983cc40f6170dc7aa3f44f018d3f8cac1986d\"" Jul 11 00:13:37.228145 systemd[1]: Started cri-containerd-2bfaf82f1fed8907d783010dec7983cc40f6170dc7aa3f44f018d3f8cac1986d.scope - libcontainer container 2bfaf82f1fed8907d783010dec7983cc40f6170dc7aa3f44f018d3f8cac1986d. Jul 11 00:13:37.246430 containerd[1532]: time="2025-07-11T00:13:37.246400947Z" level=info msg="StartContainer for \"2bfaf82f1fed8907d783010dec7983cc40f6170dc7aa3f44f018d3f8cac1986d\" returns successfully" Jul 11 00:13:37.399536 kubelet[2724]: E0711 00:13:37.399305 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p695n" podUID="a987efe9-25c5-4a4f-8880-f0e8c56f315d" Jul 11 00:13:38.685744 systemd[1]: cri-containerd-2bfaf82f1fed8907d783010dec7983cc40f6170dc7aa3f44f018d3f8cac1986d.scope: Deactivated successfully. Jul 11 00:13:38.714884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bfaf82f1fed8907d783010dec7983cc40f6170dc7aa3f44f018d3f8cac1986d-rootfs.mount: Deactivated successfully. Jul 11 00:13:38.717941 containerd[1532]: time="2025-07-11T00:13:38.717887700Z" level=info msg="shim disconnected" id=2bfaf82f1fed8907d783010dec7983cc40f6170dc7aa3f44f018d3f8cac1986d namespace=k8s.io Jul 11 00:13:38.717941 containerd[1532]: time="2025-07-11T00:13:38.717923077Z" level=warning msg="cleaning up after shim disconnected" id=2bfaf82f1fed8907d783010dec7983cc40f6170dc7aa3f44f018d3f8cac1986d namespace=k8s.io Jul 11 00:13:38.717941 containerd[1532]: time="2025-07-11T00:13:38.717928760Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:13:38.728519 kubelet[2724]: I0711 00:13:38.721655 2724 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 11 00:13:38.857228 systemd[1]: Created slice kubepods-besteffort-pod7e0575ec_5d0d_46a3_9f3a_19d2440f8d60.slice - libcontainer container kubepods-besteffort-pod7e0575ec_5d0d_46a3_9f3a_19d2440f8d60.slice. Jul 11 00:13:38.864997 systemd[1]: Created slice kubepods-burstable-podc569219a_53af_4571_883f_9b7bfe060437.slice - libcontainer container kubepods-burstable-podc569219a_53af_4571_883f_9b7bfe060437.slice. Jul 11 00:13:38.871800 systemd[1]: Created slice kubepods-besteffort-pod65805cee_bfb6_4749_bbfc_8e9405f90c70.slice - libcontainer container kubepods-besteffort-pod65805cee_bfb6_4749_bbfc_8e9405f90c70.slice. Jul 11 00:13:38.881117 systemd[1]: Created slice kubepods-besteffort-podf62dc6db_62cf_4983_8e32_b30eb8f76c1b.slice - libcontainer container kubepods-besteffort-podf62dc6db_62cf_4983_8e32_b30eb8f76c1b.slice. Jul 11 00:13:38.890702 systemd[1]: Created slice kubepods-burstable-pod3f974f17_07d9_43c5_843d_8f77256391bc.slice - libcontainer container kubepods-burstable-pod3f974f17_07d9_43c5_843d_8f77256391bc.slice. Jul 11 00:13:38.897897 systemd[1]: Created slice kubepods-besteffort-podf28665de_a757_4ccc_8a19_96a88f8187af.slice - libcontainer container kubepods-besteffort-podf28665de_a757_4ccc_8a19_96a88f8187af.slice. Jul 11 00:13:38.903598 systemd[1]: Created slice kubepods-besteffort-pod7cae885b_c99b_4b29_a6a7_210ea001e884.slice - libcontainer container kubepods-besteffort-pod7cae885b_c99b_4b29_a6a7_210ea001e884.slice. 
Jul 11 00:13:38.915464 kubelet[2724]: I0711 00:13:38.915442 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/65805cee-bfb6-4749-bbfc-8e9405f90c70-goldmane-key-pair\") pod \"goldmane-768f4c5c69-9c6d6\" (UID: \"65805cee-bfb6-4749-bbfc-8e9405f90c70\") " pod="calico-system/goldmane-768f4c5c69-9c6d6" Jul 11 00:13:38.917025 kubelet[2724]: I0711 00:13:38.916810 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scxrg\" (UniqueName: \"kubernetes.io/projected/65805cee-bfb6-4749-bbfc-8e9405f90c70-kube-api-access-scxrg\") pod \"goldmane-768f4c5c69-9c6d6\" (UID: \"65805cee-bfb6-4749-bbfc-8e9405f90c70\") " pod="calico-system/goldmane-768f4c5c69-9c6d6" Jul 11 00:13:38.917071 kubelet[2724]: I0711 00:13:38.917027 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpdb2\" (UniqueName: \"kubernetes.io/projected/7e0575ec-5d0d-46a3-9f3a-19d2440f8d60-kube-api-access-lpdb2\") pod \"calico-kube-controllers-b7988bb64-5mrdt\" (UID: \"7e0575ec-5d0d-46a3-9f3a-19d2440f8d60\") " pod="calico-system/calico-kube-controllers-b7988bb64-5mrdt" Jul 11 00:13:38.917071 kubelet[2724]: I0711 00:13:38.917048 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f62dc6db-62cf-4983-8e32-b30eb8f76c1b-whisker-ca-bundle\") pod \"whisker-647954949f-2pjvw\" (UID: \"f62dc6db-62cf-4983-8e32-b30eb8f76c1b\") " pod="calico-system/whisker-647954949f-2pjvw" Jul 11 00:13:38.917071 kubelet[2724]: I0711 00:13:38.917060 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbznn\" (UniqueName: \"kubernetes.io/projected/3f974f17-07d9-43c5-843d-8f77256391bc-kube-api-access-cbznn\") pod \"coredns-668d6bf9bc-sx2g9\" (UID: \"3f974f17-07d9-43c5-843d-8f77256391bc\") " pod="kube-system/coredns-668d6bf9bc-sx2g9" Jul 11 00:13:38.917071 kubelet[2724]: I0711 00:13:38.917070 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e0575ec-5d0d-46a3-9f3a-19d2440f8d60-tigera-ca-bundle\") pod \"calico-kube-controllers-b7988bb64-5mrdt\" (UID: \"7e0575ec-5d0d-46a3-9f3a-19d2440f8d60\") " pod="calico-system/calico-kube-controllers-b7988bb64-5mrdt" Jul 11 00:13:38.917173 kubelet[2724]: I0711 00:13:38.917080 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6p4f\" (UniqueName: \"kubernetes.io/projected/7cae885b-c99b-4b29-a6a7-210ea001e884-kube-api-access-t6p4f\") pod \"calico-apiserver-dc67c5569-2wqmp\" (UID: \"7cae885b-c99b-4b29-a6a7-210ea001e884\") " pod="calico-apiserver/calico-apiserver-dc67c5569-2wqmp" Jul 11 00:13:38.917173 kubelet[2724]: I0711 00:13:38.917092 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f62dc6db-62cf-4983-8e32-b30eb8f76c1b-whisker-backend-key-pair\") pod \"whisker-647954949f-2pjvw\" (UID: \"f62dc6db-62cf-4983-8e32-b30eb8f76c1b\") " pod="calico-system/whisker-647954949f-2pjvw" Jul 11 00:13:38.917173 kubelet[2724]: I0711 00:13:38.917102 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jmlzz\" (UniqueName: \"kubernetes.io/projected/f62dc6db-62cf-4983-8e32-b30eb8f76c1b-kube-api-access-jmlzz\") pod \"whisker-647954949f-2pjvw\" (UID: \"f62dc6db-62cf-4983-8e32-b30eb8f76c1b\") " pod="calico-system/whisker-647954949f-2pjvw" Jul 11 00:13:38.917173 kubelet[2724]: I0711 00:13:38.917112 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7cae885b-c99b-4b29-a6a7-210ea001e884-calico-apiserver-certs\") pod \"calico-apiserver-dc67c5569-2wqmp\" (UID: \"7cae885b-c99b-4b29-a6a7-210ea001e884\") " pod="calico-apiserver/calico-apiserver-dc67c5569-2wqmp" Jul 11 00:13:38.917173 kubelet[2724]: I0711 00:13:38.917123 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f974f17-07d9-43c5-843d-8f77256391bc-config-volume\") pod \"coredns-668d6bf9bc-sx2g9\" (UID: \"3f974f17-07d9-43c5-843d-8f77256391bc\") " pod="kube-system/coredns-668d6bf9bc-sx2g9" Jul 11 00:13:38.917638 kubelet[2724]: I0711 00:13:38.917133 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f4zx\" (UniqueName: \"kubernetes.io/projected/c569219a-53af-4571-883f-9b7bfe060437-kube-api-access-2f4zx\") pod \"coredns-668d6bf9bc-m6474\" (UID: \"c569219a-53af-4571-883f-9b7bfe060437\") " pod="kube-system/coredns-668d6bf9bc-m6474" Jul 11 00:13:38.917638 kubelet[2724]: I0711 00:13:38.917142 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c569219a-53af-4571-883f-9b7bfe060437-config-volume\") pod \"coredns-668d6bf9bc-m6474\" (UID: \"c569219a-53af-4571-883f-9b7bfe060437\") " pod="kube-system/coredns-668d6bf9bc-m6474" Jul 11 00:13:38.917638 kubelet[2724]: I0711 00:13:38.917155 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65805cee-bfb6-4749-bbfc-8e9405f90c70-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-9c6d6\" (UID: \"65805cee-bfb6-4749-bbfc-8e9405f90c70\") " pod="calico-system/goldmane-768f4c5c69-9c6d6" Jul 11 00:13:38.917638 kubelet[2724]: I0711 00:13:38.917167 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f28665de-a757-4ccc-8a19-96a88f8187af-calico-apiserver-certs\") pod \"calico-apiserver-dc67c5569-crf45\" (UID: \"f28665de-a757-4ccc-8a19-96a88f8187af\") " pod="calico-apiserver/calico-apiserver-dc67c5569-crf45" Jul 11 00:13:38.917638 kubelet[2724]: I0711 00:13:38.917178 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65805cee-bfb6-4749-bbfc-8e9405f90c70-config\") pod \"goldmane-768f4c5c69-9c6d6\" (UID: \"65805cee-bfb6-4749-bbfc-8e9405f90c70\") " pod="calico-system/goldmane-768f4c5c69-9c6d6" Jul 11 00:13:38.917761 kubelet[2724]: I0711 00:13:38.917192 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc9jh\" (UniqueName: \"kubernetes.io/projected/f28665de-a757-4ccc-8a19-96a88f8187af-kube-api-access-dc9jh\") pod \"calico-apiserver-dc67c5569-crf45\" (UID: \"f28665de-a757-4ccc-8a19-96a88f8187af\") " 
pod="calico-apiserver/calico-apiserver-dc67c5569-crf45" Jul 11 00:13:39.176285 containerd[1532]: time="2025-07-11T00:13:39.176201636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b7988bb64-5mrdt,Uid:7e0575ec-5d0d-46a3-9f3a-19d2440f8d60,Namespace:calico-system,Attempt:0,}" Jul 11 00:13:39.179402 containerd[1532]: time="2025-07-11T00:13:39.179145924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m6474,Uid:c569219a-53af-4571-883f-9b7bfe060437,Namespace:kube-system,Attempt:0,}" Jul 11 00:13:39.180943 containerd[1532]: time="2025-07-11T00:13:39.180919032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9c6d6,Uid:65805cee-bfb6-4749-bbfc-8e9405f90c70,Namespace:calico-system,Attempt:0,}" Jul 11 00:13:39.188836 containerd[1532]: time="2025-07-11T00:13:39.188581164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-647954949f-2pjvw,Uid:f62dc6db-62cf-4983-8e32-b30eb8f76c1b,Namespace:calico-system,Attempt:0,}" Jul 11 00:13:39.196633 containerd[1532]: time="2025-07-11T00:13:39.196611089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sx2g9,Uid:3f974f17-07d9-43c5-843d-8f77256391bc,Namespace:kube-system,Attempt:0,}" Jul 11 00:13:39.205647 containerd[1532]: time="2025-07-11T00:13:39.205622107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc67c5569-crf45,Uid:f28665de-a757-4ccc-8a19-96a88f8187af,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:13:39.211179 containerd[1532]: time="2025-07-11T00:13:39.211076243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc67c5569-2wqmp,Uid:7cae885b-c99b-4b29-a6a7-210ea001e884,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:13:39.405616 systemd[1]: Created slice kubepods-besteffort-poda987efe9_25c5_4a4f_8880_f0e8c56f315d.slice - libcontainer container kubepods-besteffort-poda987efe9_25c5_4a4f_8880_f0e8c56f315d.slice. 
Jul 11 00:13:39.407576 containerd[1532]: time="2025-07-11T00:13:39.407551928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p695n,Uid:a987efe9-25c5-4a4f-8880-f0e8c56f315d,Namespace:calico-system,Attempt:0,}" Jul 11 00:13:39.526857 containerd[1532]: time="2025-07-11T00:13:39.526391710Z" level=error msg="Failed to destroy network for sandbox \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.527507 containerd[1532]: time="2025-07-11T00:13:39.526957108Z" level=error msg="Failed to destroy network for sandbox \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.528128 containerd[1532]: time="2025-07-11T00:13:39.528111384Z" level=error msg="encountered an error cleaning up failed sandbox \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.528198 containerd[1532]: time="2025-07-11T00:13:39.528184959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sx2g9,Uid:3f974f17-07d9-43c5-843d-8f77256391bc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.528637 containerd[1532]: time="2025-07-11T00:13:39.528616129Z" level=error msg="encountered an error cleaning up failed sandbox \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.528668 containerd[1532]: time="2025-07-11T00:13:39.528645362Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc67c5569-crf45,Uid:f28665de-a757-4ccc-8a19-96a88f8187af,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.533191 containerd[1532]: time="2025-07-11T00:13:39.533174576Z" level=error msg="Failed to destroy network for sandbox \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.533561 containerd[1532]: time="2025-07-11T00:13:39.533547715Z" level=error msg="encountered an error cleaning up failed sandbox 
\"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.533702 containerd[1532]: time="2025-07-11T00:13:39.533629601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m6474,Uid:c569219a-53af-4571-883f-9b7bfe060437,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.533754 containerd[1532]: time="2025-07-11T00:13:39.533742256Z" level=error msg="Failed to destroy network for sandbox \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.533986 containerd[1532]: time="2025-07-11T00:13:39.533973411Z" level=error msg="encountered an error cleaning up failed sandbox \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.534073 containerd[1532]: time="2025-07-11T00:13:39.534060245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b7988bb64-5mrdt,Uid:7e0575ec-5d0d-46a3-9f3a-19d2440f8d60,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.534235 containerd[1532]: time="2025-07-11T00:13:39.534166393Z" level=error msg="Failed to destroy network for sandbox \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.534452 containerd[1532]: time="2025-07-11T00:13:39.534439089Z" level=error msg="encountered an error cleaning up failed sandbox \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.534526 containerd[1532]: time="2025-07-11T00:13:39.534513049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9c6d6,Uid:65805cee-bfb6-4749-bbfc-8e9405f90c70,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.534633 containerd[1532]: time="2025-07-11T00:13:39.534620164Z" level=error msg="Failed to destroy network for sandbox \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.534891 containerd[1532]: time="2025-07-11T00:13:39.534841557Z" level=error msg="encountered an error cleaning up failed sandbox \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.534931 containerd[1532]: time="2025-07-11T00:13:39.534866849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc67c5569-2wqmp,Uid:7cae885b-c99b-4b29-a6a7-210ea001e884,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.535932 kubelet[2724]: E0711 00:13:39.535819 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.537344 kubelet[2724]: E0711 00:13:39.537327 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.538417 kubelet[2724]: E0711 00:13:39.538069 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dc67c5569-crf45" Jul 11 00:13:39.538417 kubelet[2724]: E0711 00:13:39.538349 2724 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dc67c5569-crf45" Jul 11 00:13:39.538483 containerd[1532]: time="2025-07-11T00:13:39.538090906Z" level=error msg="Failed to destroy network for sandbox \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.538483 containerd[1532]: time="2025-07-11T00:13:39.538268352Z" level=error msg="encountered an error cleaning up failed sandbox \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.538483 containerd[1532]: time="2025-07-11T00:13:39.538292258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-647954949f-2pjvw,Uid:f62dc6db-62cf-4983-8e32-b30eb8f76c1b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.539116 kubelet[2724]: E0711 00:13:39.538402 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dc67c5569-crf45_calico-apiserver(f28665de-a757-4ccc-8a19-96a88f8187af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dc67c5569-crf45_calico-apiserver(f28665de-a757-4ccc-8a19-96a88f8187af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dc67c5569-crf45" podUID="f28665de-a757-4ccc-8a19-96a88f8187af" Jul 11 00:13:39.543901 kubelet[2724]: E0711 00:13:39.537891 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sx2g9" Jul 11 00:13:39.543901 kubelet[2724]: E0711 00:13:39.543465 2724 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sx2g9" Jul 11 00:13:39.543901 kubelet[2724]: E0711 00:13:39.543498 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-sx2g9_kube-system(3f974f17-07d9-43c5-843d-8f77256391bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sx2g9_kube-system(3f974f17-07d9-43c5-843d-8f77256391bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sx2g9" podUID="3f974f17-07d9-43c5-843d-8f77256391bc" Jul 11 00:13:39.544048 kubelet[2724]: E0711 00:13:39.543861 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.544048 kubelet[2724]: E0711 00:13:39.543880 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m6474" Jul 11 00:13:39.544736 kubelet[2724]: E0711 00:13:39.544455 2724 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m6474" Jul 11 00:13:39.544736 kubelet[2724]: E0711 00:13:39.544483 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-m6474_kube-system(c569219a-53af-4571-883f-9b7bfe060437)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-m6474_kube-system(c569219a-53af-4571-883f-9b7bfe060437)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-m6474" podUID="c569219a-53af-4571-883f-9b7bfe060437" Jul 11 00:13:39.544736 kubelet[2724]: E0711 00:13:39.544515 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.544825 kubelet[2724]: E0711 00:13:39.544536 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b7988bb64-5mrdt" Jul 11 00:13:39.544825 kubelet[2724]: E0711 00:13:39.544546 2724 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b7988bb64-5mrdt" Jul 11 00:13:39.544825 kubelet[2724]: E0711 00:13:39.544561 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b7988bb64-5mrdt_calico-system(7e0575ec-5d0d-46a3-9f3a-19d2440f8d60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b7988bb64-5mrdt_calico-system(7e0575ec-5d0d-46a3-9f3a-19d2440f8d60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b7988bb64-5mrdt" podUID="7e0575ec-5d0d-46a3-9f3a-19d2440f8d60" Jul 11 00:13:39.544896 kubelet[2724]: E0711 00:13:39.544577 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.544896 kubelet[2724]: E0711 00:13:39.544587 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-9c6d6" Jul 11 00:13:39.544896 kubelet[2724]: E0711 00:13:39.544596 2724 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-9c6d6" Jul 11 00:13:39.544991 kubelet[2724]: E0711 00:13:39.544611 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-9c6d6_calico-system(65805cee-bfb6-4749-bbfc-8e9405f90c70)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-9c6d6_calico-system(65805cee-bfb6-4749-bbfc-8e9405f90c70)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-9c6d6" podUID="65805cee-bfb6-4749-bbfc-8e9405f90c70" Jul 11 00:13:39.544991 kubelet[2724]: E0711 00:13:39.544625 2724 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.544991 kubelet[2724]: E0711 00:13:39.544634 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dc67c5569-2wqmp" Jul 11 00:13:39.545078 kubelet[2724]: E0711 00:13:39.544641 2724 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dc67c5569-2wqmp" Jul 11 00:13:39.545078 kubelet[2724]: E0711 00:13:39.544653 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dc67c5569-2wqmp_calico-apiserver(7cae885b-c99b-4b29-a6a7-210ea001e884)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dc67c5569-2wqmp_calico-apiserver(7cae885b-c99b-4b29-a6a7-210ea001e884)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dc67c5569-2wqmp" podUID="7cae885b-c99b-4b29-a6a7-210ea001e884" Jul 11 00:13:39.545078 kubelet[2724]: E0711 00:13:39.544668 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.545152 kubelet[2724]: E0711 00:13:39.544678 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-647954949f-2pjvw" Jul 11 00:13:39.545152 kubelet[2724]: E0711 00:13:39.544686 2724 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-647954949f-2pjvw" Jul 11 00:13:39.545152 kubelet[2724]: E0711 00:13:39.544706 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-647954949f-2pjvw_calico-system(f62dc6db-62cf-4983-8e32-b30eb8f76c1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-647954949f-2pjvw_calico-system(f62dc6db-62cf-4983-8e32-b30eb8f76c1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-647954949f-2pjvw" podUID="f62dc6db-62cf-4983-8e32-b30eb8f76c1b" Jul 11 00:13:39.553256 containerd[1532]: time="2025-07-11T00:13:39.552768964Z" level=error msg="Failed to destroy network for sandbox \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.553256 containerd[1532]: time="2025-07-11T00:13:39.552971753Z" level=error msg="encountered an error cleaning up failed sandbox \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.553256 containerd[1532]: time="2025-07-11T00:13:39.553001738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p695n,Uid:a987efe9-25c5-4a4f-8880-f0e8c56f315d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.555832 kubelet[2724]: E0711 00:13:39.555516 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.555832 kubelet[2724]: E0711 00:13:39.555549 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p695n" Jul 11 00:13:39.555832 kubelet[2724]: E0711 00:13:39.555561 2724 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p695n" Jul 11 00:13:39.555940 kubelet[2724]: E0711 00:13:39.555585 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p695n_calico-system(a987efe9-25c5-4a4f-8880-f0e8c56f315d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p695n_calico-system(a987efe9-25c5-4a4f-8880-f0e8c56f315d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p695n" podUID="a987efe9-25c5-4a4f-8880-f0e8c56f315d" Jul 11 00:13:39.556432 containerd[1532]: time="2025-07-11T00:13:39.556142535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 00:13:39.563819 kubelet[2724]: I0711 00:13:39.563361 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Jul 11 00:13:39.576253 kubelet[2724]: I0711 00:13:39.575838 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Jul 11 00:13:39.602308 kubelet[2724]: I0711 00:13:39.602291 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Jul 11 00:13:39.604261 kubelet[2724]: I0711 00:13:39.604231 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Jul 11 00:13:39.605816 kubelet[2724]: I0711 00:13:39.605487 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Jul 11 00:13:39.610285 containerd[1532]: time="2025-07-11T00:13:39.609412923Z" level=info msg="StopPodSandbox for \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\"" Jul 11 00:13:39.610285 containerd[1532]: time="2025-07-11T00:13:39.609932544Z" level=info msg="StopPodSandbox for \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\"" Jul 11 00:13:39.611029 containerd[1532]: time="2025-07-11T00:13:39.610491001Z" level=info msg="Ensure that sandbox 63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff in task-service has been cleanup successfully" Jul 11 00:13:39.611103 containerd[1532]: time="2025-07-11T00:13:39.611092114Z" level=info msg="Ensure that sandbox e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7 in task-service has been cleanup successfully" Jul 11 00:13:39.611507 containerd[1532]: time="2025-07-11T00:13:39.611492186Z" level=info msg="StopPodSandbox for \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\"" Jul 11 00:13:39.611595 containerd[1532]: time="2025-07-11T00:13:39.611572136Z" level=info msg="Ensure that sandbox 85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f in task-service has been cleanup successfully" Jul 11 00:13:39.612928 containerd[1532]: time="2025-07-11T00:13:39.612755090Z" level=info msg="StopPodSandbox for \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\"" Jul 11 
00:13:39.612928 containerd[1532]: time="2025-07-11T00:13:39.612845854Z" level=info msg="Ensure that sandbox 69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9 in task-service has been cleanup successfully" Jul 11 00:13:39.613172 kubelet[2724]: I0711 00:13:39.612852 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Jul 11 00:13:39.614347 containerd[1532]: time="2025-07-11T00:13:39.614329979Z" level=info msg="StopPodSandbox for \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\"" Jul 11 00:13:39.614429 containerd[1532]: time="2025-07-11T00:13:39.614415700Z" level=info msg="Ensure that sandbox 0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf in task-service has been cleanup successfully" Jul 11 00:13:39.615024 containerd[1532]: time="2025-07-11T00:13:39.614896734Z" level=info msg="StopPodSandbox for \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\"" Jul 11 00:13:39.615178 containerd[1532]: time="2025-07-11T00:13:39.615132529Z" level=info msg="Ensure that sandbox ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325 in task-service has been cleanup successfully" Jul 11 00:13:39.623979 kubelet[2724]: I0711 00:13:39.623963 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Jul 11 00:13:39.627482 containerd[1532]: time="2025-07-11T00:13:39.627270810Z" level=info msg="StopPodSandbox for \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\"" Jul 11 00:13:39.627482 containerd[1532]: time="2025-07-11T00:13:39.627369798Z" level=info msg="Ensure that sandbox d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a in task-service has been cleanup successfully" Jul 11 00:13:39.662832 containerd[1532]: time="2025-07-11T00:13:39.662798373Z" level=error msg="StopPodSandbox for \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\" failed" error="failed to destroy network for sandbox \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.662994 kubelet[2724]: E0711 00:13:39.662951 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Jul 11 00:13:39.670428 containerd[1532]: time="2025-07-11T00:13:39.670220698Z" level=error msg="StopPodSandbox for \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\" failed" error="failed to destroy network for sandbox \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.670518 kubelet[2724]: E0711 00:13:39.670337 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Jul 11 00:13:39.678975 kubelet[2724]: E0711 00:13:39.662989 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff"} Jul 11 00:13:39.678975 kubelet[2724]: E0711 00:13:39.678878 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cae885b-c99b-4b29-a6a7-210ea001e884\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:13:39.678975 kubelet[2724]: E0711 00:13:39.678907 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cae885b-c99b-4b29-a6a7-210ea001e884\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dc67c5569-2wqmp" podUID="7cae885b-c99b-4b29-a6a7-210ea001e884" Jul 11 00:13:39.678975 kubelet[2724]: E0711 00:13:39.670362 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7"} Jul 11 00:13:39.679350 kubelet[2724]: E0711 00:13:39.678939 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f28665de-a757-4ccc-8a19-96a88f8187af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:13:39.679350 kubelet[2724]: E0711 00:13:39.678954 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f28665de-a757-4ccc-8a19-96a88f8187af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dc67c5569-crf45" podUID="f28665de-a757-4ccc-8a19-96a88f8187af" Jul 11 00:13:39.681235 containerd[1532]: time="2025-07-11T00:13:39.681163347Z" level=error msg="StopPodSandbox for \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\" failed" error="failed to destroy network for sandbox \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.681533 kubelet[2724]: E0711 00:13:39.681371 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Jul 11 00:13:39.681533 kubelet[2724]: E0711 00:13:39.681396 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf"} Jul 11 00:13:39.681533 kubelet[2724]: E0711 00:13:39.681410 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f62dc6db-62cf-4983-8e32-b30eb8f76c1b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:13:39.681533 kubelet[2724]: E0711 00:13:39.681421 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f62dc6db-62cf-4983-8e32-b30eb8f76c1b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-647954949f-2pjvw" podUID="f62dc6db-62cf-4983-8e32-b30eb8f76c1b" Jul 11 00:13:39.684086 containerd[1532]: time="2025-07-11T00:13:39.683915884Z" level=error msg="StopPodSandbox for \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\" failed" error="failed to destroy network for sandbox \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.684124 kubelet[2724]: E0711 00:13:39.684106 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Jul 11 00:13:39.684151 kubelet[2724]: E0711 00:13:39.684129 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9"} Jul 11 00:13:39.684151 kubelet[2724]: E0711 00:13:39.684144 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"3f974f17-07d9-43c5-843d-8f77256391bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:13:39.684206 kubelet[2724]: E0711 00:13:39.684155 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f974f17-07d9-43c5-843d-8f77256391bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sx2g9" podUID="3f974f17-07d9-43c5-843d-8f77256391bc" Jul 11 00:13:39.686280 containerd[1532]: time="2025-07-11T00:13:39.686117467Z" level=error msg="StopPodSandbox for \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\" failed" error="failed to destroy network for sandbox \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.686327 kubelet[2724]: E0711 00:13:39.686202 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Jul 11 00:13:39.686327 kubelet[2724]: E0711 00:13:39.686219 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a"} Jul 11 00:13:39.686327 kubelet[2724]: E0711 00:13:39.686234 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c569219a-53af-4571-883f-9b7bfe060437\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:13:39.686327 kubelet[2724]: E0711 00:13:39.686245 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c569219a-53af-4571-883f-9b7bfe060437\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-m6474" podUID="c569219a-53af-4571-883f-9b7bfe060437" Jul 11 00:13:39.687166 containerd[1532]: 
time="2025-07-11T00:13:39.687041038Z" level=error msg="StopPodSandbox for \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\" failed" error="failed to destroy network for sandbox \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.687202 kubelet[2724]: E0711 00:13:39.687108 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Jul 11 00:13:39.687202 kubelet[2724]: E0711 00:13:39.687126 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f"} Jul 11 00:13:39.687202 kubelet[2724]: E0711 00:13:39.687140 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"65805cee-bfb6-4749-bbfc-8e9405f90c70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:13:39.687202 kubelet[2724]: E0711 00:13:39.687150 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"65805cee-bfb6-4749-bbfc-8e9405f90c70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-9c6d6" podUID="65805cee-bfb6-4749-bbfc-8e9405f90c70" Jul 11 00:13:39.688074 containerd[1532]: time="2025-07-11T00:13:39.688029146Z" level=error msg="StopPodSandbox for \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\" failed" error="failed to destroy network for sandbox \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:39.688135 kubelet[2724]: E0711 00:13:39.688110 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Jul 11 00:13:39.688135 kubelet[2724]: E0711 00:13:39.688126 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325"} Jul 11 00:13:39.688187 kubelet[2724]: E0711 00:13:39.688140 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e0575ec-5d0d-46a3-9f3a-19d2440f8d60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:13:39.688187 kubelet[2724]: E0711 00:13:39.688151 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7e0575ec-5d0d-46a3-9f3a-19d2440f8d60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b7988bb64-5mrdt" podUID="7e0575ec-5d0d-46a3-9f3a-19d2440f8d60" Jul 11 00:13:39.717476 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a-shm.mount: Deactivated successfully. Jul 11 00:13:40.625896 kubelet[2724]: I0711 00:13:40.625876 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Jul 11 00:13:40.626264 containerd[1532]: time="2025-07-11T00:13:40.626243898Z" level=info msg="StopPodSandbox for \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\"" Jul 11 00:13:40.626381 containerd[1532]: time="2025-07-11T00:13:40.626341406Z" level=info msg="Ensure that sandbox 3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e in task-service has been cleanup successfully" Jul 11 00:13:40.641438 containerd[1532]: time="2025-07-11T00:13:40.641406559Z" level=error msg="StopPodSandbox for \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\" failed" error="failed to destroy network for sandbox \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:40.641813 kubelet[2724]: E0711 00:13:40.641551 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Jul 11 00:13:40.641813 kubelet[2724]: E0711 00:13:40.641582 2724 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e"} Jul 11 00:13:40.641813 kubelet[2724]: E0711 00:13:40.641605 2724 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"a987efe9-25c5-4a4f-8880-f0e8c56f315d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:13:40.641813 kubelet[2724]: E0711 00:13:40.641618 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a987efe9-25c5-4a4f-8880-f0e8c56f315d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p695n" podUID="a987efe9-25c5-4a4f-8880-f0e8c56f315d" Jul 11 00:13:44.790866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4050203015.mount: Deactivated successfully. Jul 11 00:13:44.862419 containerd[1532]: time="2025-07-11T00:13:44.852629720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 11 00:13:44.862682 containerd[1532]: time="2025-07-11T00:13:44.862445373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:44.888277 containerd[1532]: time="2025-07-11T00:13:44.888232015Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:44.889283 containerd[1532]: time="2025-07-11T00:13:44.888951545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:44.889810 containerd[1532]: time="2025-07-11T00:13:44.889787839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 5.332475083s" Jul 11 00:13:44.889842 containerd[1532]: time="2025-07-11T00:13:44.889812112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 11 00:13:44.947745 containerd[1532]: time="2025-07-11T00:13:44.947722975Z" level=info msg="CreateContainer within sandbox \"33c19f2713d95bd527fb8f99bff21fdf6d8d75db91ed6b0f1b73c598fbb88ee1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 00:13:44.973513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571323323.mount: Deactivated successfully. 
Jul 11 00:13:44.980509 containerd[1532]: time="2025-07-11T00:13:44.980482850Z" level=info msg="CreateContainer within sandbox \"33c19f2713d95bd527fb8f99bff21fdf6d8d75db91ed6b0f1b73c598fbb88ee1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"112fb07508647333cbb5aaf6d61e44d018b833673b0e6a3e86f3a819fff8ff80\"" Jul 11 00:13:44.982049 containerd[1532]: time="2025-07-11T00:13:44.981020004Z" level=info msg="StartContainer for \"112fb07508647333cbb5aaf6d61e44d018b833673b0e6a3e86f3a819fff8ff80\"" Jul 11 00:13:45.075224 systemd[1]: Started cri-containerd-112fb07508647333cbb5aaf6d61e44d018b833673b0e6a3e86f3a819fff8ff80.scope - libcontainer container 112fb07508647333cbb5aaf6d61e44d018b833673b0e6a3e86f3a819fff8ff80. Jul 11 00:13:45.092870 containerd[1532]: time="2025-07-11T00:13:45.092833622Z" level=info msg="StartContainer for \"112fb07508647333cbb5aaf6d61e44d018b833673b0e6a3e86f3a819fff8ff80\" returns successfully" Jul 11 00:13:45.274266 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 11 00:13:45.277535 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 11 00:13:45.784072 kubelet[2724]: I0711 00:13:45.780201 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-758df" podStartSLOduration=1.763601169 podStartE2EDuration="16.764822801s" podCreationTimestamp="2025-07-11 00:13:29 +0000 UTC" firstStartedPulling="2025-07-11 00:13:29.888987578 +0000 UTC m=+16.686400458" lastFinishedPulling="2025-07-11 00:13:44.890209213 +0000 UTC m=+31.687622090" observedRunningTime="2025-07-11 00:13:45.649572752 +0000 UTC m=+32.446985638" watchObservedRunningTime="2025-07-11 00:13:45.764822801 +0000 UTC m=+32.562235682" Jul 11 00:13:45.786955 containerd[1532]: time="2025-07-11T00:13:45.786877294Z" level=info msg="StopPodSandbox for \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\"" Jul 11 00:13:46.193233 containerd[1532]: 2025-07-11 00:13:45.863 [INFO][3976] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Jul 11 00:13:46.193233 containerd[1532]: 2025-07-11 00:13:45.863 [INFO][3976] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" iface="eth0" netns="/var/run/netns/cni-2085f7ec-aa20-cc55-001b-d089f6436874" Jul 11 00:13:46.193233 containerd[1532]: 2025-07-11 00:13:45.864 [INFO][3976] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" iface="eth0" netns="/var/run/netns/cni-2085f7ec-aa20-cc55-001b-d089f6436874" Jul 11 00:13:46.193233 containerd[1532]: 2025-07-11 00:13:45.865 [INFO][3976] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" iface="eth0" netns="/var/run/netns/cni-2085f7ec-aa20-cc55-001b-d089f6436874" Jul 11 00:13:46.193233 containerd[1532]: 2025-07-11 00:13:45.865 [INFO][3976] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Jul 11 00:13:46.193233 containerd[1532]: 2025-07-11 00:13:45.865 [INFO][3976] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Jul 11 00:13:46.193233 containerd[1532]: 2025-07-11 00:13:46.176 [INFO][3985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" HandleID="k8s-pod-network.0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Workload="localhost-k8s-whisker--647954949f--2pjvw-eth0" Jul 11 00:13:46.193233 containerd[1532]: 2025-07-11 00:13:46.178 [INFO][3985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:46.193233 containerd[1532]: 2025-07-11 00:13:46.179 [INFO][3985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:13:46.193233 containerd[1532]: 2025-07-11 00:13:46.189 [WARNING][3985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" HandleID="k8s-pod-network.0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Workload="localhost-k8s-whisker--647954949f--2pjvw-eth0" Jul 11 00:13:46.193233 containerd[1532]: 2025-07-11 00:13:46.189 [INFO][3985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" HandleID="k8s-pod-network.0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Workload="localhost-k8s-whisker--647954949f--2pjvw-eth0" Jul 11 00:13:46.193233 containerd[1532]: 2025-07-11 00:13:46.190 [INFO][3985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:13:46.193233 containerd[1532]: 2025-07-11 00:13:46.191 [INFO][3976] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Jul 11 00:13:46.196181 systemd[1]: run-netns-cni\x2d2085f7ec\x2daa20\x2dcc55\x2d001b\x2dd089f6436874.mount: Deactivated successfully. 
Jul 11 00:13:46.200306 containerd[1532]: time="2025-07-11T00:13:46.200281037Z" level=info msg="TearDown network for sandbox \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\" successfully" Jul 11 00:13:46.200306 containerd[1532]: time="2025-07-11T00:13:46.200305430Z" level=info msg="StopPodSandbox for \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\" returns successfully" Jul 11 00:13:46.334545 kubelet[2724]: I0711 00:13:46.334260 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmlzz\" (UniqueName: \"kubernetes.io/projected/f62dc6db-62cf-4983-8e32-b30eb8f76c1b-kube-api-access-jmlzz\") pod \"f62dc6db-62cf-4983-8e32-b30eb8f76c1b\" (UID: \"f62dc6db-62cf-4983-8e32-b30eb8f76c1b\") " Jul 11 00:13:46.334545 kubelet[2724]: I0711 00:13:46.334323 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f62dc6db-62cf-4983-8e32-b30eb8f76c1b-whisker-ca-bundle\") pod \"f62dc6db-62cf-4983-8e32-b30eb8f76c1b\" (UID: \"f62dc6db-62cf-4983-8e32-b30eb8f76c1b\") " Jul 11 00:13:46.334545 kubelet[2724]: I0711 00:13:46.334341 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f62dc6db-62cf-4983-8e32-b30eb8f76c1b-whisker-backend-key-pair\") pod \"f62dc6db-62cf-4983-8e32-b30eb8f76c1b\" (UID: \"f62dc6db-62cf-4983-8e32-b30eb8f76c1b\") " Jul 11 00:13:46.359641 kubelet[2724]: I0711 00:13:46.358228 2724 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f62dc6db-62cf-4983-8e32-b30eb8f76c1b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f62dc6db-62cf-4983-8e32-b30eb8f76c1b" (UID: "f62dc6db-62cf-4983-8e32-b30eb8f76c1b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 00:13:46.368633 systemd[1]: var-lib-kubelet-pods-f62dc6db\x2d62cf\x2d4983\x2d8e32\x2db30eb8f76c1b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 11 00:13:46.368709 systemd[1]: var-lib-kubelet-pods-f62dc6db\x2d62cf\x2d4983\x2d8e32\x2db30eb8f76c1b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djmlzz.mount: Deactivated successfully. Jul 11 00:13:46.369394 kubelet[2724]: I0711 00:13:46.368807 2724 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f62dc6db-62cf-4983-8e32-b30eb8f76c1b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f62dc6db-62cf-4983-8e32-b30eb8f76c1b" (UID: "f62dc6db-62cf-4983-8e32-b30eb8f76c1b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:13:46.369681 kubelet[2724]: I0711 00:13:46.369559 2724 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f62dc6db-62cf-4983-8e32-b30eb8f76c1b-kube-api-access-jmlzz" (OuterVolumeSpecName: "kube-api-access-jmlzz") pod "f62dc6db-62cf-4983-8e32-b30eb8f76c1b" (UID: "f62dc6db-62cf-4983-8e32-b30eb8f76c1b"). InnerVolumeSpecName "kube-api-access-jmlzz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:13:46.437532 kubelet[2724]: I0711 00:13:46.437508 2724 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jmlzz\" (UniqueName: \"kubernetes.io/projected/f62dc6db-62cf-4983-8e32-b30eb8f76c1b-kube-api-access-jmlzz\") on node \"localhost\" DevicePath \"\"" Jul 11 00:13:46.437532 kubelet[2724]: I0711 00:13:46.437532 2724 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f62dc6db-62cf-4983-8e32-b30eb8f76c1b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 11 00:13:46.437532 kubelet[2724]: I0711 00:13:46.437538 2724 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f62dc6db-62cf-4983-8e32-b30eb8f76c1b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 11 00:13:46.676730 systemd[1]: Removed slice kubepods-besteffort-podf62dc6db_62cf_4983_8e32_b30eb8f76c1b.slice - libcontainer container kubepods-besteffort-podf62dc6db_62cf_4983_8e32_b30eb8f76c1b.slice. Jul 11 00:13:46.816327 systemd[1]: Created slice kubepods-besteffort-pod650e4f0d_82ff_49e9_b369_fe9f2fea9d28.slice - libcontainer container kubepods-besteffort-pod650e4f0d_82ff_49e9_b369_fe9f2fea9d28.slice. Jul 11 00:13:46.941201 kubelet[2724]: I0711 00:13:46.941124 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/650e4f0d-82ff-49e9-b369-fe9f2fea9d28-whisker-backend-key-pair\") pod \"whisker-5b45cff48f-9mtxg\" (UID: \"650e4f0d-82ff-49e9-b369-fe9f2fea9d28\") " pod="calico-system/whisker-5b45cff48f-9mtxg" Jul 11 00:13:46.941201 kubelet[2724]: I0711 00:13:46.941164 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zrf8\" (UniqueName: \"kubernetes.io/projected/650e4f0d-82ff-49e9-b369-fe9f2fea9d28-kube-api-access-8zrf8\") pod \"whisker-5b45cff48f-9mtxg\" (UID: \"650e4f0d-82ff-49e9-b369-fe9f2fea9d28\") " pod="calico-system/whisker-5b45cff48f-9mtxg" Jul 11 00:13:46.941201 kubelet[2724]: I0711 00:13:46.941181 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/650e4f0d-82ff-49e9-b369-fe9f2fea9d28-whisker-ca-bundle\") pod \"whisker-5b45cff48f-9mtxg\" (UID: \"650e4f0d-82ff-49e9-b369-fe9f2fea9d28\") " pod="calico-system/whisker-5b45cff48f-9mtxg" Jul 11 00:13:47.118963 containerd[1532]: time="2025-07-11T00:13:47.118929199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b45cff48f-9mtxg,Uid:650e4f0d-82ff-49e9-b369-fe9f2fea9d28,Namespace:calico-system,Attempt:0,}" Jul 11 00:13:47.422028 kernel: bpftool[4150]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 11 00:13:47.454773 kubelet[2724]: I0711 00:13:47.454749 2724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f62dc6db-62cf-4983-8e32-b30eb8f76c1b" path="/var/lib/kubelet/pods/f62dc6db-62cf-4983-8e32-b30eb8f76c1b/volumes" Jul 11 00:13:47.636358 systemd-networkd[1443]: cali9367e7d5b6e: Link UP Jul 11 00:13:47.638520 systemd-networkd[1443]: cali9367e7d5b6e: Gained carrier Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.212 [INFO][4123] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.225 [INFO][4123] cni-plugin/plugin.go 340: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5b45cff48f--9mtxg-eth0 whisker-5b45cff48f- calico-system 650e4f0d-82ff-49e9-b369-fe9f2fea9d28 865 0 2025-07-11 00:13:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b45cff48f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5b45cff48f-9mtxg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9367e7d5b6e [] [] }} ContainerID="9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" Namespace="calico-system" Pod="whisker-5b45cff48f-9mtxg" WorkloadEndpoint="localhost-k8s-whisker--5b45cff48f--9mtxg-" Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.225 [INFO][4123] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" Namespace="calico-system" Pod="whisker-5b45cff48f-9mtxg" WorkloadEndpoint="localhost-k8s-whisker--5b45cff48f--9mtxg-eth0" Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.267 [INFO][4139] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" HandleID="k8s-pod-network.9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" Workload="localhost-k8s-whisker--5b45cff48f--9mtxg-eth0" Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.267 [INFO][4139] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" HandleID="k8s-pod-network.9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" Workload="localhost-k8s-whisker--5b45cff48f--9mtxg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5b45cff48f-9mtxg", "timestamp":"2025-07-11 00:13:47.267422728 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.267 [INFO][4139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.267 [INFO][4139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.267 [INFO][4139] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.285 [INFO][4139] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" host="localhost" Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.420 [INFO][4139] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.429 [INFO][4139] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.431 [INFO][4139] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.432 [INFO][4139] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.432 [INFO][4139] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" host="localhost" Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.433 [INFO][4139] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59 Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.438 [INFO][4139] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" host="localhost" Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.444 [INFO][4139] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" host="localhost" Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.444 [INFO][4139] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" host="localhost" Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.444 [INFO][4139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:13:47.650289 containerd[1532]: 2025-07-11 00:13:47.444 [INFO][4139] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" HandleID="k8s-pod-network.9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" Workload="localhost-k8s-whisker--5b45cff48f--9mtxg-eth0" Jul 11 00:13:47.653221 containerd[1532]: 2025-07-11 00:13:47.446 [INFO][4123] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" Namespace="calico-system" Pod="whisker-5b45cff48f-9mtxg" WorkloadEndpoint="localhost-k8s-whisker--5b45cff48f--9mtxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b45cff48f--9mtxg-eth0", GenerateName:"whisker-5b45cff48f-", Namespace:"calico-system", SelfLink:"", UID:"650e4f0d-82ff-49e9-b369-fe9f2fea9d28", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b45cff48f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5b45cff48f-9mtxg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9367e7d5b6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:47.653221 containerd[1532]: 2025-07-11 00:13:47.446 [INFO][4123] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" Namespace="calico-system" Pod="whisker-5b45cff48f-9mtxg" WorkloadEndpoint="localhost-k8s-whisker--5b45cff48f--9mtxg-eth0" Jul 11 00:13:47.653221 containerd[1532]: 2025-07-11 00:13:47.446 [INFO][4123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9367e7d5b6e ContainerID="9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" Namespace="calico-system" Pod="whisker-5b45cff48f-9mtxg" WorkloadEndpoint="localhost-k8s-whisker--5b45cff48f--9mtxg-eth0" Jul 11 00:13:47.653221 containerd[1532]: 2025-07-11 00:13:47.625 [INFO][4123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" Namespace="calico-system" Pod="whisker-5b45cff48f-9mtxg" WorkloadEndpoint="localhost-k8s-whisker--5b45cff48f--9mtxg-eth0" Jul 11 00:13:47.653221 containerd[1532]: 2025-07-11 00:13:47.626 [INFO][4123] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" Namespace="calico-system" Pod="whisker-5b45cff48f-9mtxg" WorkloadEndpoint="localhost-k8s-whisker--5b45cff48f--9mtxg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b45cff48f--9mtxg-eth0", GenerateName:"whisker-5b45cff48f-", Namespace:"calico-system", SelfLink:"", UID:"650e4f0d-82ff-49e9-b369-fe9f2fea9d28", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b45cff48f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59", Pod:"whisker-5b45cff48f-9mtxg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9367e7d5b6e", MAC:"ce:2e:9e:f4:f4:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:47.653221 containerd[1532]: 2025-07-11 00:13:47.641 [INFO][4123] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59" Namespace="calico-system" Pod="whisker-5b45cff48f-9mtxg" WorkloadEndpoint="localhost-k8s-whisker--5b45cff48f--9mtxg-eth0" Jul 11 00:13:47.699954 containerd[1532]: time="2025-07-11T00:13:47.697839328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:47.699954 containerd[1532]: time="2025-07-11T00:13:47.699044391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:47.699954 containerd[1532]: time="2025-07-11T00:13:47.699057021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:47.699954 containerd[1532]: time="2025-07-11T00:13:47.699858708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:47.723156 systemd[1]: Started cri-containerd-9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59.scope - libcontainer container 9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59. 
Jul 11 00:13:47.747575 systemd-resolved[1444]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:13:47.783664 containerd[1532]: time="2025-07-11T00:13:47.783040744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b45cff48f-9mtxg,Uid:650e4f0d-82ff-49e9-b369-fe9f2fea9d28,Namespace:calico-system,Attempt:0,} returns sandbox id \"9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59\"" Jul 11 00:13:47.788037 containerd[1532]: time="2025-07-11T00:13:47.787778390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 00:13:47.818976 systemd-networkd[1443]: vxlan.calico: Link UP Jul 11 00:13:47.818981 systemd-networkd[1443]: vxlan.calico: Gained carrier Jul 11 00:13:48.677142 systemd-networkd[1443]: cali9367e7d5b6e: Gained IPv6LL Jul 11 00:13:48.997092 systemd-networkd[1443]: vxlan.calico: Gained IPv6LL Jul 11 00:13:49.403657 containerd[1532]: time="2025-07-11T00:13:49.403574762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:49.404659 containerd[1532]: time="2025-07-11T00:13:49.404535391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 11 00:13:49.406356 containerd[1532]: time="2025-07-11T00:13:49.406318257Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:49.408676 containerd[1532]: time="2025-07-11T00:13:49.408600906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.620793903s" Jul 11 00:13:49.408676 containerd[1532]: time="2025-07-11T00:13:49.408623288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 11 00:13:49.409087 containerd[1532]: time="2025-07-11T00:13:49.408740781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:49.410833 containerd[1532]: time="2025-07-11T00:13:49.410812315Z" level=info msg="CreateContainer within sandbox \"9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 00:13:49.416868 containerd[1532]: time="2025-07-11T00:13:49.416838446Z" level=info msg="CreateContainer within sandbox \"9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c0390889d358ac6fe27fa56978e5ffd23471fe7092cfb796497c29cb32fe6d29\"" Jul 11 00:13:49.417491 containerd[1532]: time="2025-07-11T00:13:49.417249452Z" level=info msg="StartContainer for \"c0390889d358ac6fe27fa56978e5ffd23471fe7092cfb796497c29cb32fe6d29\"" Jul 11 00:13:49.442127 systemd[1]: Started cri-containerd-c0390889d358ac6fe27fa56978e5ffd23471fe7092cfb796497c29cb32fe6d29.scope - libcontainer container 
c0390889d358ac6fe27fa56978e5ffd23471fe7092cfb796497c29cb32fe6d29. Jul 11 00:13:49.514468 containerd[1532]: time="2025-07-11T00:13:49.514420815Z" level=info msg="StartContainer for \"c0390889d358ac6fe27fa56978e5ffd23471fe7092cfb796497c29cb32fe6d29\" returns successfully" Jul 11 00:13:49.515977 containerd[1532]: time="2025-07-11T00:13:49.515961977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 00:13:51.402131 containerd[1532]: time="2025-07-11T00:13:51.402087477Z" level=info msg="StopPodSandbox for \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\"" Jul 11 00:13:51.403568 containerd[1532]: time="2025-07-11T00:13:51.402795588Z" level=info msg="StopPodSandbox for \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\"" Jul 11 00:13:51.526345 containerd[1532]: 2025-07-11 00:13:51.453 [INFO][4368] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Jul 11 00:13:51.526345 containerd[1532]: 2025-07-11 00:13:51.454 [INFO][4368] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" iface="eth0" netns="/var/run/netns/cni-2080c3f8-7198-5ce1-5618-35583c04b133" Jul 11 00:13:51.526345 containerd[1532]: 2025-07-11 00:13:51.456 [INFO][4368] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" iface="eth0" netns="/var/run/netns/cni-2080c3f8-7198-5ce1-5618-35583c04b133" Jul 11 00:13:51.526345 containerd[1532]: 2025-07-11 00:13:51.459 [INFO][4368] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" iface="eth0" netns="/var/run/netns/cni-2080c3f8-7198-5ce1-5618-35583c04b133" Jul 11 00:13:51.526345 containerd[1532]: 2025-07-11 00:13:51.459 [INFO][4368] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Jul 11 00:13:51.526345 containerd[1532]: 2025-07-11 00:13:51.459 [INFO][4368] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Jul 11 00:13:51.526345 containerd[1532]: 2025-07-11 00:13:51.502 [INFO][4382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" HandleID="k8s-pod-network.69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Workload="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:13:51.526345 containerd[1532]: 2025-07-11 00:13:51.502 [INFO][4382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:51.526345 containerd[1532]: 2025-07-11 00:13:51.502 [INFO][4382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:13:51.526345 containerd[1532]: 2025-07-11 00:13:51.519 [WARNING][4382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" HandleID="k8s-pod-network.69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Workload="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:13:51.526345 containerd[1532]: 2025-07-11 00:13:51.519 [INFO][4382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" HandleID="k8s-pod-network.69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Workload="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:13:51.526345 containerd[1532]: 2025-07-11 00:13:51.520 [INFO][4382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:13:51.526345 containerd[1532]: 2025-07-11 00:13:51.522 [INFO][4368] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Jul 11 00:13:51.531311 containerd[1532]: time="2025-07-11T00:13:51.526937308Z" level=info msg="TearDown network for sandbox \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\" successfully" Jul 11 00:13:51.531311 containerd[1532]: time="2025-07-11T00:13:51.526956852Z" level=info msg="StopPodSandbox for \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\" returns successfully" Jul 11 00:13:51.531311 containerd[1532]: time="2025-07-11T00:13:51.530137903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sx2g9,Uid:3f974f17-07d9-43c5-843d-8f77256391bc,Namespace:kube-system,Attempt:1,}" Jul 11 00:13:51.529817 systemd[1]: run-netns-cni\x2d2080c3f8\x2d7198\x2d5ce1\x2d5618\x2d35583c04b133.mount: Deactivated successfully. Jul 11 00:13:51.544355 containerd[1532]: 2025-07-11 00:13:51.459 [INFO][4369] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Jul 11 00:13:51.544355 containerd[1532]: 2025-07-11 00:13:51.460 [INFO][4369] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" iface="eth0" netns="/var/run/netns/cni-f387ba52-e243-c4f9-514d-945172b3a285" Jul 11 00:13:51.544355 containerd[1532]: 2025-07-11 00:13:51.460 [INFO][4369] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" iface="eth0" netns="/var/run/netns/cni-f387ba52-e243-c4f9-514d-945172b3a285" Jul 11 00:13:51.544355 containerd[1532]: 2025-07-11 00:13:51.461 [INFO][4369] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" iface="eth0" netns="/var/run/netns/cni-f387ba52-e243-c4f9-514d-945172b3a285" Jul 11 00:13:51.544355 containerd[1532]: 2025-07-11 00:13:51.461 [INFO][4369] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Jul 11 00:13:51.544355 containerd[1532]: 2025-07-11 00:13:51.461 [INFO][4369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Jul 11 00:13:51.544355 containerd[1532]: 2025-07-11 00:13:51.511 [INFO][4384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" HandleID="k8s-pod-network.63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Workload="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:13:51.544355 containerd[1532]: 2025-07-11 00:13:51.511 [INFO][4384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:51.544355 containerd[1532]: 2025-07-11 00:13:51.520 [INFO][4384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:13:51.544355 containerd[1532]: 2025-07-11 00:13:51.526 [WARNING][4384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" HandleID="k8s-pod-network.63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Workload="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:13:51.544355 containerd[1532]: 2025-07-11 00:13:51.526 [INFO][4384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" HandleID="k8s-pod-network.63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Workload="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:13:51.544355 containerd[1532]: 2025-07-11 00:13:51.532 [INFO][4384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:13:51.544355 containerd[1532]: 2025-07-11 00:13:51.536 [INFO][4369] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Jul 11 00:13:51.544355 containerd[1532]: time="2025-07-11T00:13:51.544088324Z" level=info msg="TearDown network for sandbox \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\" successfully" Jul 11 00:13:51.544355 containerd[1532]: time="2025-07-11T00:13:51.544104885Z" level=info msg="StopPodSandbox for \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\" returns successfully" Jul 11 00:13:51.549539 containerd[1532]: time="2025-07-11T00:13:51.547960353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc67c5569-2wqmp,Uid:7cae885b-c99b-4b29-a6a7-210ea001e884,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:13:51.549144 systemd[1]: run-netns-cni\x2df387ba52\x2de243\x2dc4f9\x2d514d\x2d945172b3a285.mount: Deactivated successfully. 
Jul 11 00:13:51.677060 systemd-networkd[1443]: calib660e69b0a3: Link UP Jul 11 00:13:51.679456 systemd-networkd[1443]: calib660e69b0a3: Gained carrier Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.585 [INFO][4396] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0 coredns-668d6bf9bc- kube-system 3f974f17-07d9-43c5-843d-8f77256391bc 884 0 2025-07-11 00:13:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-sx2g9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib660e69b0a3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" Namespace="kube-system" Pod="coredns-668d6bf9bc-sx2g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sx2g9-" Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.585 [INFO][4396] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" Namespace="kube-system" Pod="coredns-668d6bf9bc-sx2g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.621 [INFO][4417] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" HandleID="k8s-pod-network.fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" Workload="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.621 [INFO][4417] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" HandleID="k8s-pod-network.fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" Workload="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-sx2g9", "timestamp":"2025-07-11 00:13:51.619911508 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.621 [INFO][4417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.621 [INFO][4417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.621 [INFO][4417] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.634 [INFO][4417] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" host="localhost" Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.649 [INFO][4417] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.653 [INFO][4417] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.654 [INFO][4417] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.655 [INFO][4417] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.655 [INFO][4417] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" host="localhost" Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.657 [INFO][4417] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509 Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.662 [INFO][4417] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" host="localhost" Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.667 [INFO][4417] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" host="localhost" Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.667 [INFO][4417] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" host="localhost" Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.667 [INFO][4417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:13:51.693466 containerd[1532]: 2025-07-11 00:13:51.667 [INFO][4417] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" HandleID="k8s-pod-network.fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" Workload="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:13:51.694284 containerd[1532]: 2025-07-11 00:13:51.671 [INFO][4396] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" Namespace="kube-system" Pod="coredns-668d6bf9bc-sx2g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3f974f17-07d9-43c5-843d-8f77256391bc", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-sx2g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib660e69b0a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:51.694284 containerd[1532]: 2025-07-11 00:13:51.671 [INFO][4396] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" Namespace="kube-system" Pod="coredns-668d6bf9bc-sx2g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:13:51.694284 containerd[1532]: 2025-07-11 00:13:51.671 [INFO][4396] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib660e69b0a3 ContainerID="fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" Namespace="kube-system" Pod="coredns-668d6bf9bc-sx2g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:13:51.694284 containerd[1532]: 2025-07-11 00:13:51.678 [INFO][4396] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" Namespace="kube-system" Pod="coredns-668d6bf9bc-sx2g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:13:51.694284 
containerd[1532]: 2025-07-11 00:13:51.679 [INFO][4396] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" Namespace="kube-system" Pod="coredns-668d6bf9bc-sx2g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3f974f17-07d9-43c5-843d-8f77256391bc", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509", Pod:"coredns-668d6bf9bc-sx2g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib660e69b0a3", MAC:"a6:47:4c:85:86:d7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:51.694284 containerd[1532]: 2025-07-11 00:13:51.691 [INFO][4396] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509" Namespace="kube-system" Pod="coredns-668d6bf9bc-sx2g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:13:51.714398 containerd[1532]: time="2025-07-11T00:13:51.714131816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:51.714798 containerd[1532]: time="2025-07-11T00:13:51.714505442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:51.714798 containerd[1532]: time="2025-07-11T00:13:51.714519637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:51.714798 containerd[1532]: time="2025-07-11T00:13:51.714563990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:51.750150 systemd[1]: Started cri-containerd-fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509.scope - libcontainer container fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509. Jul 11 00:13:51.767166 systemd-resolved[1444]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:13:51.804593 systemd-networkd[1443]: cali2afa3b17ce5: Link UP Jul 11 00:13:51.805639 systemd-networkd[1443]: cali2afa3b17ce5: Gained carrier Jul 11 00:13:51.817529 containerd[1532]: time="2025-07-11T00:13:51.817500939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sx2g9,Uid:3f974f17-07d9-43c5-843d-8f77256391bc,Namespace:kube-system,Attempt:1,} returns sandbox id \"fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509\"" Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.604 [INFO][4406] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0 calico-apiserver-dc67c5569- calico-apiserver 7cae885b-c99b-4b29-a6a7-210ea001e884 885 0 2025-07-11 00:13:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dc67c5569 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-dc67c5569-2wqmp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2afa3b17ce5 [] [] }} ContainerID="60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-2wqmp" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-" Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.604 [INFO][4406] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-2wqmp" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.634 [INFO][4425] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" HandleID="k8s-pod-network.60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" Workload="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.634 [INFO][4425] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" HandleID="k8s-pod-network.60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" Workload="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-dc67c5569-2wqmp", "timestamp":"2025-07-11 00:13:51.634134553 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.634 [INFO][4425] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.667 [INFO][4425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.667 [INFO][4425] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.734 [INFO][4425] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" host="localhost" Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.750 [INFO][4425] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.758 [INFO][4425] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.762 [INFO][4425] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.766 [INFO][4425] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.767 [INFO][4425] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" host="localhost" Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.770 [INFO][4425] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404 Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.786 [INFO][4425] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" host="localhost" Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.793 [INFO][4425] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" host="localhost" Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.793 [INFO][4425] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" host="localhost" Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.793 [INFO][4425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
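The second IPAM walk ([INFO][4425], for calico-apiserver-dc67c5569-2wqmp) logs "About to acquire host-wide IPAM lock" at 00:13:51.634 but only acquires it at 00:13:51.667, the same instant request [4417] releases it: per-node address assignment is serialized behind one host-wide lock. The sketch below shows that serialization with an in-process mutex; the real CNI plugin runs as separate short-lived processes, so its actual lock has to work across processes, which this toy version does not attempt.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// hostWideIPAMLock stands in for the lock behind the log's
// "About to acquire" / "Acquired" / "Released" lines.
var hostWideIPAMLock sync.Mutex

func assignAddress(req string, work time.Duration) {
	fmt.Println(req, "about to acquire host-wide IPAM lock")
	hostWideIPAMLock.Lock()
	fmt.Println(req, "acquired host-wide IPAM lock")
	time.Sleep(work) // affinity lookup, block load, claim, block write happen here
	hostWideIPAMLock.Unlock()
	fmt.Println(req, "released host-wide IPAM lock")
}

func main() {
	var wg sync.WaitGroup
	for _, req := range []string{"coredns-sx2g9", "calico-apiserver-2wqmp"} {
		wg.Add(1)
		go func(r string) {
			defer wg.Done()
			assignAddress(r, 30*time.Millisecond)
		}(req)
	}
	wg.Wait()
}
```

Whichever request acquires the lock second prints its "acquired" line only after the first prints "released", which is exactly the interleaving visible in the timestamps above.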
Jul 11 00:13:51.828911 containerd[1532]: 2025-07-11 00:13:51.794 [INFO][4425] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" HandleID="k8s-pod-network.60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" Workload="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:13:51.829443 containerd[1532]: 2025-07-11 00:13:51.796 [INFO][4406] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-2wqmp" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0", GenerateName:"calico-apiserver-dc67c5569-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cae885b-c99b-4b29-a6a7-210ea001e884", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc67c5569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-dc67c5569-2wqmp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2afa3b17ce5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:51.829443 containerd[1532]: 2025-07-11 00:13:51.796 [INFO][4406] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-2wqmp" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:13:51.829443 containerd[1532]: 2025-07-11 00:13:51.796 [INFO][4406] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2afa3b17ce5 ContainerID="60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-2wqmp" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:13:51.829443 containerd[1532]: 2025-07-11 00:13:51.805 [INFO][4406] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-2wqmp" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:13:51.829443 containerd[1532]: 2025-07-11 00:13:51.807 [INFO][4406] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-2wqmp" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0", GenerateName:"calico-apiserver-dc67c5569-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cae885b-c99b-4b29-a6a7-210ea001e884", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc67c5569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404", Pod:"calico-apiserver-dc67c5569-2wqmp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2afa3b17ce5", MAC:"ca:61:96:2a:e3:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:51.829443 containerd[1532]: 2025-07-11 00:13:51.826 [INFO][4406] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-2wqmp" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:13:51.851510 containerd[1532]: time="2025-07-11T00:13:51.851397243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:51.851678 containerd[1532]: time="2025-07-11T00:13:51.851430926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:51.851678 containerd[1532]: time="2025-07-11T00:13:51.851449220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:51.851678 containerd[1532]: time="2025-07-11T00:13:51.851498494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:51.865105 systemd[1]: Started cri-containerd-60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404.scope - libcontainer container 60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404. 
Jul 11 00:13:51.877042 containerd[1532]: time="2025-07-11T00:13:51.877018663Z" level=info msg="CreateContainer within sandbox \"fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:13:51.878571 systemd-resolved[1444]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:13:51.914485 containerd[1532]: time="2025-07-11T00:13:51.914375840Z" level=info msg="CreateContainer within sandbox \"fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eed982248c7d1f5c9f08c6d68159aa3294d13567914d17f7076f3a9d72fcef7f\"" Jul 11 00:13:51.915342 containerd[1532]: time="2025-07-11T00:13:51.915320052Z" level=info msg="StartContainer for \"eed982248c7d1f5c9f08c6d68159aa3294d13567914d17f7076f3a9d72fcef7f\"" Jul 11 00:13:51.921479 containerd[1532]: time="2025-07-11T00:13:51.921450650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc67c5569-2wqmp,Uid:7cae885b-c99b-4b29-a6a7-210ea001e884,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404\"" Jul 11 00:13:51.948213 systemd[1]: Started cri-containerd-eed982248c7d1f5c9f08c6d68159aa3294d13567914d17f7076f3a9d72fcef7f.scope - libcontainer container eed982248c7d1f5c9f08c6d68159aa3294d13567914d17f7076f3a9d72fcef7f. Jul 11 00:13:52.086849 containerd[1532]: time="2025-07-11T00:13:52.086816895Z" level=info msg="StartContainer for \"eed982248c7d1f5c9f08c6d68159aa3294d13567914d17f7076f3a9d72fcef7f\" returns successfully" Jul 11 00:13:52.178617 containerd[1532]: time="2025-07-11T00:13:52.178577041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:52.186473 containerd[1532]: time="2025-07-11T00:13:52.186354493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 11 00:13:52.193384 containerd[1532]: time="2025-07-11T00:13:52.193340259Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:52.201296 containerd[1532]: time="2025-07-11T00:13:52.201204538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:52.202115 containerd[1532]: time="2025-07-11T00:13:52.202002687Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.685903953s" Jul 11 00:13:52.202115 containerd[1532]: time="2025-07-11T00:13:52.202040481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 11 00:13:52.202969 containerd[1532]: time="2025-07-11T00:13:52.202844275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 
00:13:52.204031 containerd[1532]: time="2025-07-11T00:13:52.203939557Z" level=info msg="CreateContainer within sandbox \"9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 00:13:52.281671 containerd[1532]: time="2025-07-11T00:13:52.281636954Z" level=info msg="CreateContainer within sandbox \"9c7134974092156921dd2f0b3793bfa91431e07c73dcfbd312cceb0e975f6e59\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fcc68982414a18062dd83e4e79fc9ef5a5055c1bfd0effb9dbf3129f9ba44da0\"" Jul 11 00:13:52.282333 containerd[1532]: time="2025-07-11T00:13:52.282078372Z" level=info msg="StartContainer for \"fcc68982414a18062dd83e4e79fc9ef5a5055c1bfd0effb9dbf3129f9ba44da0\"" Jul 11 00:13:52.304132 systemd[1]: Started cri-containerd-fcc68982414a18062dd83e4e79fc9ef5a5055c1bfd0effb9dbf3129f9ba44da0.scope - libcontainer container fcc68982414a18062dd83e4e79fc9ef5a5055c1bfd0effb9dbf3129f9ba44da0. Jul 11 00:13:52.315716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3316203433.mount: Deactivated successfully. Jul 11 00:13:52.351997 containerd[1532]: time="2025-07-11T00:13:52.351942782Z" level=info msg="StartContainer for \"fcc68982414a18062dd83e4e79fc9ef5a5055c1bfd0effb9dbf3129f9ba44da0\" returns successfully" Jul 11 00:13:52.399748 containerd[1532]: time="2025-07-11T00:13:52.399518587Z" level=info msg="StopPodSandbox for \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\"" Jul 11 00:13:52.469593 containerd[1532]: 2025-07-11 00:13:52.434 [INFO][4622] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Jul 11 00:13:52.469593 containerd[1532]: 2025-07-11 00:13:52.434 [INFO][4622] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" iface="eth0" netns="/var/run/netns/cni-7d990eb7-6bd6-361d-0ab7-8ec4855328f4" Jul 11 00:13:52.469593 containerd[1532]: 2025-07-11 00:13:52.435 [INFO][4622] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" iface="eth0" netns="/var/run/netns/cni-7d990eb7-6bd6-361d-0ab7-8ec4855328f4" Jul 11 00:13:52.469593 containerd[1532]: 2025-07-11 00:13:52.435 [INFO][4622] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" iface="eth0" netns="/var/run/netns/cni-7d990eb7-6bd6-361d-0ab7-8ec4855328f4" Jul 11 00:13:52.469593 containerd[1532]: 2025-07-11 00:13:52.435 [INFO][4622] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Jul 11 00:13:52.469593 containerd[1532]: 2025-07-11 00:13:52.435 [INFO][4622] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Jul 11 00:13:52.469593 containerd[1532]: 2025-07-11 00:13:52.456 [INFO][4629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" HandleID="k8s-pod-network.85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Workload="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:13:52.469593 containerd[1532]: 2025-07-11 00:13:52.456 [INFO][4629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:52.469593 containerd[1532]: 2025-07-11 00:13:52.456 [INFO][4629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:13:52.469593 containerd[1532]: 2025-07-11 00:13:52.461 [WARNING][4629] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" HandleID="k8s-pod-network.85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Workload="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:13:52.469593 containerd[1532]: 2025-07-11 00:13:52.461 [INFO][4629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" HandleID="k8s-pod-network.85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Workload="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:13:52.469593 containerd[1532]: 2025-07-11 00:13:52.463 [INFO][4629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:13:52.469593 containerd[1532]: 2025-07-11 00:13:52.466 [INFO][4622] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Jul 11 00:13:52.469593 containerd[1532]: time="2025-07-11T00:13:52.469274169Z" level=info msg="TearDown network for sandbox \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\" successfully" Jul 11 00:13:52.471310 containerd[1532]: time="2025-07-11T00:13:52.471123104Z" level=info msg="StopPodSandbox for \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\" returns successfully" Jul 11 00:13:52.472979 systemd[1]: run-netns-cni\x2d7d990eb7\x2d6bd6\x2d361d\x2d0ab7\x2d8ec4855328f4.mount: Deactivated successfully. 
Jul 11 00:13:52.483734 containerd[1532]: time="2025-07-11T00:13:52.483491976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9c6d6,Uid:65805cee-bfb6-4749-bbfc-8e9405f90c70,Namespace:calico-system,Attempt:1,}" Jul 11 00:13:52.588165 systemd-networkd[1443]: calie1b6715502f: Link UP Jul 11 00:13:52.589245 systemd-networkd[1443]: calie1b6715502f: Gained carrier Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.531 [INFO][4639] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0 goldmane-768f4c5c69- calico-system 65805cee-bfb6-4749-bbfc-8e9405f90c70 903 0 2025-07-11 00:13:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-9c6d6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie1b6715502f [] [] }} ContainerID="a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" Namespace="calico-system" Pod="goldmane-768f4c5c69-9c6d6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9c6d6-" Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.534 [INFO][4639] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" Namespace="calico-system" Pod="goldmane-768f4c5c69-9c6d6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.556 [INFO][4647] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" HandleID="k8s-pod-network.a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" Workload="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.556 [INFO][4647] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" HandleID="k8s-pod-network.a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" Workload="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f240), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-9c6d6", "timestamp":"2025-07-11 00:13:52.556214504 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.556 [INFO][4647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.556 [INFO][4647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.556 [INFO][4647] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.560 [INFO][4647] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" host="localhost" Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.563 [INFO][4647] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.566 [INFO][4647] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.567 [INFO][4647] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.568 [INFO][4647] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.568 [INFO][4647] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" host="localhost" Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.569 [INFO][4647] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.571 [INFO][4647] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" host="localhost" Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.578 [INFO][4647] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" host="localhost" Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.578 [INFO][4647] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" host="localhost" Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.578 [INFO][4647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:13:52.607906 containerd[1532]: 2025-07-11 00:13:52.578 [INFO][4647] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" HandleID="k8s-pod-network.a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" Workload="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:13:52.621188 containerd[1532]: 2025-07-11 00:13:52.584 [INFO][4639] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" Namespace="calico-system" Pod="goldmane-768f4c5c69-9c6d6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"65805cee-bfb6-4749-bbfc-8e9405f90c70", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-9c6d6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie1b6715502f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:52.621188 containerd[1532]: 2025-07-11 00:13:52.584 [INFO][4639] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" Namespace="calico-system" Pod="goldmane-768f4c5c69-9c6d6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:13:52.621188 containerd[1532]: 2025-07-11 00:13:52.584 [INFO][4639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1b6715502f ContainerID="a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" Namespace="calico-system" Pod="goldmane-768f4c5c69-9c6d6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:13:52.621188 containerd[1532]: 2025-07-11 00:13:52.589 [INFO][4639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" Namespace="calico-system" Pod="goldmane-768f4c5c69-9c6d6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:13:52.621188 containerd[1532]: 2025-07-11 00:13:52.589 [INFO][4639] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" Namespace="calico-system" Pod="goldmane-768f4c5c69-9c6d6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"65805cee-bfb6-4749-bbfc-8e9405f90c70", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af", Pod:"goldmane-768f4c5c69-9c6d6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie1b6715502f", MAC:"46:28:82:b5:bf:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:52.621188 containerd[1532]: 2025-07-11 00:13:52.604 [INFO][4639] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af" Namespace="calico-system" Pod="goldmane-768f4c5c69-9c6d6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:13:52.651365 containerd[1532]: time="2025-07-11T00:13:52.651112925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:52.651365 containerd[1532]: time="2025-07-11T00:13:52.651145197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:52.651365 containerd[1532]: time="2025-07-11T00:13:52.651155149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:52.651715 containerd[1532]: time="2025-07-11T00:13:52.651654901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:52.680163 systemd[1]: Started cri-containerd-a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af.scope - libcontainer container a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af. 
Jul 11 00:13:52.691044 systemd-resolved[1444]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:13:52.717542 containerd[1532]: time="2025-07-11T00:13:52.717489168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9c6d6,Uid:65805cee-bfb6-4749-bbfc-8e9405f90c70,Namespace:calico-system,Attempt:1,} returns sandbox id \"a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af\"" Jul 11 00:13:52.830886 kubelet[2724]: I0711 00:13:52.830680 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sx2g9" podStartSLOduration=34.83066584 podStartE2EDuration="34.83066584s" podCreationTimestamp="2025-07-11 00:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:13:52.808452925 +0000 UTC m=+39.605865806" watchObservedRunningTime="2025-07-11 00:13:52.83066584 +0000 UTC m=+39.628078725" Jul 11 00:13:53.093393 systemd-networkd[1443]: calib660e69b0a3: Gained IPv6LL Jul 11 00:13:53.402276 containerd[1532]: time="2025-07-11T00:13:53.401724030Z" level=info msg="StopPodSandbox for \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\"" Jul 11 00:13:53.403470 containerd[1532]: time="2025-07-11T00:13:53.402722081Z" level=info msg="StopPodSandbox for \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\"" Jul 11 00:13:53.461104 kubelet[2724]: I0711 00:13:53.460874 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5b45cff48f-9mtxg" podStartSLOduration=3.042215073 podStartE2EDuration="7.460858792s" podCreationTimestamp="2025-07-11 00:13:46 +0000 UTC" firstStartedPulling="2025-07-11 00:13:47.783981963 +0000 UTC m=+34.581394839" lastFinishedPulling="2025-07-11 00:13:52.202625677 +0000 UTC m=+39.000038558" observedRunningTime="2025-07-11 00:13:52.832072232 +0000 UTC m=+39.629485122" watchObservedRunningTime="2025-07-11 00:13:53.460858792 +0000 UTC m=+40.258271679" Jul 11 00:13:53.503713 containerd[1532]: 2025-07-11 00:13:53.463 [INFO][4737] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Jul 11 00:13:53.503713 containerd[1532]: 2025-07-11 00:13:53.463 [INFO][4737] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" iface="eth0" netns="/var/run/netns/cni-67049671-2fd4-abaa-534a-4a066e2a2247" Jul 11 00:13:53.503713 containerd[1532]: 2025-07-11 00:13:53.463 [INFO][4737] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" iface="eth0" netns="/var/run/netns/cni-67049671-2fd4-abaa-534a-4a066e2a2247" Jul 11 00:13:53.503713 containerd[1532]: 2025-07-11 00:13:53.463 [INFO][4737] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" iface="eth0" netns="/var/run/netns/cni-67049671-2fd4-abaa-534a-4a066e2a2247" Jul 11 00:13:53.503713 containerd[1532]: 2025-07-11 00:13:53.463 [INFO][4737] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Jul 11 00:13:53.503713 containerd[1532]: 2025-07-11 00:13:53.463 [INFO][4737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Jul 11 00:13:53.503713 containerd[1532]: 2025-07-11 00:13:53.490 [INFO][4752] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" HandleID="k8s-pod-network.e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Workload="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:13:53.503713 containerd[1532]: 2025-07-11 00:13:53.490 [INFO][4752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:53.503713 containerd[1532]: 2025-07-11 00:13:53.490 [INFO][4752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:13:53.503713 containerd[1532]: 2025-07-11 00:13:53.497 [WARNING][4752] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" HandleID="k8s-pod-network.e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Workload="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:13:53.503713 containerd[1532]: 2025-07-11 00:13:53.497 [INFO][4752] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" HandleID="k8s-pod-network.e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Workload="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:13:53.503713 containerd[1532]: 2025-07-11 00:13:53.498 [INFO][4752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:13:53.503713 containerd[1532]: 2025-07-11 00:13:53.500 [INFO][4737] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Jul 11 00:13:53.515330 containerd[1532]: time="2025-07-11T00:13:53.505634307Z" level=info msg="TearDown network for sandbox \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\" successfully" Jul 11 00:13:53.515330 containerd[1532]: time="2025-07-11T00:13:53.505776021Z" level=info msg="StopPodSandbox for \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\" returns successfully" Jul 11 00:13:53.515330 containerd[1532]: time="2025-07-11T00:13:53.506334118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc67c5569-crf45,Uid:f28665de-a757-4ccc-8a19-96a88f8187af,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:13:53.506205 systemd[1]: run-netns-cni\x2d67049671\x2d2fd4\x2dabaa\x2d534a\x2d4a066e2a2247.mount: Deactivated successfully. Jul 11 00:13:53.531410 containerd[1532]: 2025-07-11 00:13:53.459 [INFO][4738] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Jul 11 00:13:53.531410 containerd[1532]: 2025-07-11 00:13:53.459 [INFO][4738] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" iface="eth0" netns="/var/run/netns/cni-c3f5ce55-cec5-33a7-9e7d-650360bd2f99" Jul 11 00:13:53.531410 containerd[1532]: 2025-07-11 00:13:53.459 [INFO][4738] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" iface="eth0" netns="/var/run/netns/cni-c3f5ce55-cec5-33a7-9e7d-650360bd2f99" Jul 11 00:13:53.531410 containerd[1532]: 2025-07-11 00:13:53.461 [INFO][4738] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" iface="eth0" netns="/var/run/netns/cni-c3f5ce55-cec5-33a7-9e7d-650360bd2f99" Jul 11 00:13:53.531410 containerd[1532]: 2025-07-11 00:13:53.462 [INFO][4738] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Jul 11 00:13:53.531410 containerd[1532]: 2025-07-11 00:13:53.462 [INFO][4738] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Jul 11 00:13:53.531410 containerd[1532]: 2025-07-11 00:13:53.508 [INFO][4750] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" HandleID="k8s-pod-network.d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Workload="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:13:53.531410 containerd[1532]: 2025-07-11 00:13:53.508 [INFO][4750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:53.531410 containerd[1532]: 2025-07-11 00:13:53.508 [INFO][4750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:13:53.531410 containerd[1532]: 2025-07-11 00:13:53.524 [WARNING][4750] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" HandleID="k8s-pod-network.d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Workload="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:13:53.531410 containerd[1532]: 2025-07-11 00:13:53.525 [INFO][4750] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" HandleID="k8s-pod-network.d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Workload="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:13:53.531410 containerd[1532]: 2025-07-11 00:13:53.526 [INFO][4750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:13:53.531410 containerd[1532]: 2025-07-11 00:13:53.528 [INFO][4738] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Jul 11 00:13:53.531410 containerd[1532]: time="2025-07-11T00:13:53.529625174Z" level=info msg="TearDown network for sandbox \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\" successfully" Jul 11 00:13:53.531410 containerd[1532]: time="2025-07-11T00:13:53.529654481Z" level=info msg="StopPodSandbox for \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\" returns successfully" Jul 11 00:13:53.531410 containerd[1532]: time="2025-07-11T00:13:53.530105442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m6474,Uid:c569219a-53af-4571-883f-9b7bfe060437,Namespace:kube-system,Attempt:1,}" Jul 11 00:13:53.532201 systemd[1]: run-netns-cni\x2dc3f5ce55\x2dcec5\x2d33a7\x2d9e7d\x2d650360bd2f99.mount: Deactivated successfully. Jul 11 00:13:53.759970 systemd-networkd[1443]: cali2522ea36ebf: Link UP Jul 11 00:13:53.760133 systemd-networkd[1443]: cali2522ea36ebf: Gained carrier Jul 11 00:13:53.797281 systemd-networkd[1443]: cali2afa3b17ce5: Gained IPv6LL Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.667 [INFO][4764] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0 calico-apiserver-dc67c5569- calico-apiserver f28665de-a757-4ccc-8a19-96a88f8187af 922 0 2025-07-11 00:13:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dc67c5569 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-dc67c5569-crf45 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2522ea36ebf [] [] }} ContainerID="eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-crf45" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--crf45-" Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.667 [INFO][4764] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-crf45" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.713 [INFO][4782] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" HandleID="k8s-pod-network.eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" Workload="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.713 [INFO][4782] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" HandleID="k8s-pod-network.eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" Workload="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-dc67c5569-crf45", "timestamp":"2025-07-11 00:13:53.713237592 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.713 [INFO][4782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.713 [INFO][4782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.713 [INFO][4782] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.719 [INFO][4782] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" host="localhost" Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.722 [INFO][4782] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.724 [INFO][4782] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.724 [INFO][4782] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.725 [INFO][4782] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.725 [INFO][4782] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" host="localhost" Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.726 [INFO][4782] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3 Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.739 [INFO][4782] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" host="localhost" Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.752 [INFO][4782] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" host="localhost" Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.752 [INFO][4782] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" host="localhost" Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.752 [INFO][4782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
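Every address claimed in this section so far (.130 through .133) comes out of the single block 192.168.88.128/26 for which this host holds the affinity. A quick standard-library check of the containment and block size:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // covers .128 through .191
	for _, s := range []string{
		"192.168.88.130", // coredns-668d6bf9bc-sx2g9
		"192.168.88.131", // calico-apiserver-dc67c5569-2wqmp
		"192.168.88.132", // goldmane-768f4c5c69-9c6d6
		"192.168.88.133", // calico-apiserver-dc67c5569-crf45
	} {
		fmt.Println(s, "in", block, "->", block.Contains(netip.MustParseAddr(s)))
	}
	fmt.Println("addresses in the prefix:", 1<<(32-block.Bits())) // 64
}
```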
Jul 11 00:13:53.834835 containerd[1532]: 2025-07-11 00:13:53.752 [INFO][4782] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" HandleID="k8s-pod-network.eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" Workload="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:13:53.846871 containerd[1532]: 2025-07-11 00:13:53.756 [INFO][4764] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-crf45" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0", GenerateName:"calico-apiserver-dc67c5569-", Namespace:"calico-apiserver", SelfLink:"", UID:"f28665de-a757-4ccc-8a19-96a88f8187af", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc67c5569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-dc67c5569-crf45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2522ea36ebf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:53.846871 containerd[1532]: 2025-07-11 00:13:53.756 [INFO][4764] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-crf45" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:13:53.846871 containerd[1532]: 2025-07-11 00:13:53.756 [INFO][4764] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2522ea36ebf ContainerID="eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-crf45" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:13:53.846871 containerd[1532]: 2025-07-11 00:13:53.760 [INFO][4764] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-crf45" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:13:53.846871 containerd[1532]: 2025-07-11 00:13:53.761 [INFO][4764] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-crf45" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0", GenerateName:"calico-apiserver-dc67c5569-", Namespace:"calico-apiserver", SelfLink:"", UID:"f28665de-a757-4ccc-8a19-96a88f8187af", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc67c5569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3", Pod:"calico-apiserver-dc67c5569-crf45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2522ea36ebf", MAC:"96:5c:93:dc:52:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:53.846871 containerd[1532]: 2025-07-11 00:13:53.833 [INFO][4764] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3" Namespace="calico-apiserver" Pod="calico-apiserver-dc67c5569-crf45" WorkloadEndpoint="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:13:53.995542 systemd-networkd[1443]: calib70b5dec66a: Link UP Jul 11 00:13:53.999119 systemd-networkd[1443]: calib70b5dec66a: Gained carrier Jul 11 00:13:54.026466 containerd[1532]: time="2025-07-11T00:13:54.026250975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:54.027644 containerd[1532]: time="2025-07-11T00:13:54.027268393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:54.027720 containerd[1532]: time="2025-07-11T00:13:54.027706898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:54.027889 containerd[1532]: time="2025-07-11T00:13:54.027812137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.755 [INFO][4768] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--m6474-eth0 coredns-668d6bf9bc- kube-system c569219a-53af-4571-883f-9b7bfe060437 921 0 2025-07-11 00:13:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-m6474 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib70b5dec66a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6474" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6474-" Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.755 [INFO][4768] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6474" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.788 [INFO][4796] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" HandleID="k8s-pod-network.fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" Workload="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.788 [INFO][4796] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" HandleID="k8s-pod-network.fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" Workload="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5030), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-m6474", "timestamp":"2025-07-11 00:13:53.788575975 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.788 [INFO][4796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.788 [INFO][4796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
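
The entries above show the CNI plugin handing Calico's IPAM plugin an ipam.AutoAssignArgs request for coredns-668d6bf9bc-m6474 (one IPv4 address, no IPv6, a handle ID derived from the container ID, plus namespace/node/pod attributes) and then taking the host-wide IPAM lock before allocating. A minimal Go sketch of what that request carries, using a local stand-in struct rather than the real Calico libipam types:

    package main

    import "fmt"

    // autoAssignArgs is a trimmed-down, hypothetical mirror of the
    // ipam.AutoAssignArgs value dumped in the log above. Field names follow
    // the dump, but this is not the Calico library type.
    type autoAssignArgs struct {
        Num4, Num6  int               // how many IPv4 / IPv6 addresses to assign
        HandleID    string            // handle used later to release the address
        Attrs       map[string]string // namespace, node, pod
        Hostname    string            // "localhost" in this log
        IntendedUse string            // "Workload"
    }

    func main() {
        // Values copied from the logged request for coredns-668d6bf9bc-m6474.
        req := autoAssignArgs{
            Num4:     1,
            Num6:     0,
            HandleID: "k8s-pod-network.fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af",
            Attrs: map[string]string{
                "namespace": "kube-system",
                "node":      "localhost",
                "pod":       "coredns-668d6bf9bc-m6474",
            },
            Hostname:    "localhost",
            IntendedUse: "Workload",
        }
        fmt.Printf("requesting %d IPv4 and %d IPv6 address(es) for %s on %s\n",
            req.Num4, req.Num6, req.Attrs["pod"], req.Hostname)
    }

The lock messages that bracket each allocation ("About to acquire" / "Acquired" / "Released host-wide IPAM lock") indicate that assignments on this node are serialized, which is consistent with the allocations in this log running back to back rather than interleaving.
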
Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.788 [INFO][4796] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.859 [INFO][4796] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" host="localhost" Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.879 [INFO][4796] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.896 [INFO][4796] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.901 [INFO][4796] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.946 [INFO][4796] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.946 [INFO][4796] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" host="localhost" Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.956 [INFO][4796] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.966 [INFO][4796] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" host="localhost" Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.987 [INFO][4796] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" host="localhost" Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.987 [INFO][4796] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" host="localhost" Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.987 [INFO][4796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
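
The walk above finds an existing affinity for block 192.168.88.128/26 on host "localhost", loads the block, and claims 192.168.88.134/26 from it for coredns. The /26 suffix on the claimed address refers to the block; the endpoint itself is recorded with a /32 (192.168.88.134/32) in the WorkloadEndpoint dump below. A quick containment check with Go's standard net/netip package, using the addresses copied from the log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block with a confirmed affinity for host "localhost", per the log.
        block := netip.MustParsePrefix("192.168.88.128/26")
        // Address claimed for coredns-668d6bf9bc-m6474.
        claimed := netip.MustParseAddr("192.168.88.134")

        fmt.Printf("block %s contains %s: %v\n", block, claimed, block.Contains(claimed))
        fmt.Printf("a /26 block spans %d addresses\n", 1<<(32-block.Bits())) // 64
    }
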
Jul 11 00:13:54.042392 containerd[1532]: 2025-07-11 00:13:53.987 [INFO][4796] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" HandleID="k8s-pod-network.fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" Workload="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:13:54.047290 containerd[1532]: 2025-07-11 00:13:53.989 [INFO][4768] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6474" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m6474-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c569219a-53af-4571-883f-9b7bfe060437", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-m6474", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib70b5dec66a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:54.047290 containerd[1532]: 2025-07-11 00:13:53.989 [INFO][4768] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6474" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:13:54.047290 containerd[1532]: 2025-07-11 00:13:53.989 [INFO][4768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib70b5dec66a ContainerID="fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6474" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:13:54.047290 containerd[1532]: 2025-07-11 00:13:53.997 [INFO][4768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6474" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:13:54.047290 
containerd[1532]: 2025-07-11 00:13:54.003 [INFO][4768] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6474" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m6474-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c569219a-53af-4571-883f-9b7bfe060437", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af", Pod:"coredns-668d6bf9bc-m6474", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib70b5dec66a", MAC:"26:45:46:9e:b8:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:54.047290 containerd[1532]: 2025-07-11 00:13:54.039 [INFO][4768] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6474" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:13:54.044155 systemd[1]: Started cri-containerd-eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3.scope - libcontainer container eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3. Jul 11 00:13:54.063523 systemd-resolved[1444]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:13:54.072565 containerd[1532]: time="2025-07-11T00:13:54.072420074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:54.072565 containerd[1532]: time="2025-07-11T00:13:54.072452015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:54.072565 containerd[1532]: time="2025-07-11T00:13:54.072461697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:54.072565 containerd[1532]: time="2025-07-11T00:13:54.072512051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:54.086132 systemd[1]: Started cri-containerd-fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af.scope - libcontainer container fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af. Jul 11 00:13:54.101069 containerd[1532]: time="2025-07-11T00:13:54.101041793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc67c5569-crf45,Uid:f28665de-a757-4ccc-8a19-96a88f8187af,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3\"" Jul 11 00:13:54.111594 systemd-resolved[1444]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:13:54.134391 containerd[1532]: time="2025-07-11T00:13:54.134195858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m6474,Uid:c569219a-53af-4571-883f-9b7bfe060437,Namespace:kube-system,Attempt:1,} returns sandbox id \"fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af\"" Jul 11 00:13:54.142969 containerd[1532]: time="2025-07-11T00:13:54.142861881Z" level=info msg="CreateContainer within sandbox \"fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:13:54.398909 containerd[1532]: time="2025-07-11T00:13:54.398873995Z" level=info msg="CreateContainer within sandbox \"fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"051c0a1a4107757c84c45fa0dac95f680c77d489cd9f66ba8bb8676037ce3d53\"" Jul 11 00:13:54.399915 containerd[1532]: time="2025-07-11T00:13:54.399876852Z" level=info msg="StartContainer for \"051c0a1a4107757c84c45fa0dac95f680c77d489cd9f66ba8bb8676037ce3d53\"" Jul 11 00:13:54.400122 containerd[1532]: time="2025-07-11T00:13:54.400091208Z" level=info msg="StopPodSandbox for \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\"" Jul 11 00:13:54.413493 containerd[1532]: time="2025-07-11T00:13:54.412806480Z" level=info msg="StopPodSandbox for \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\"" Jul 11 00:13:54.426115 systemd[1]: Started cri-containerd-051c0a1a4107757c84c45fa0dac95f680c77d489cd9f66ba8bb8676037ce3d53.scope - libcontainer container 051c0a1a4107757c84c45fa0dac95f680c77d489cd9f66ba8bb8676037ce3d53. Jul 11 00:13:54.438692 systemd-networkd[1443]: calie1b6715502f: Gained IPv6LL Jul 11 00:13:54.451476 containerd[1532]: time="2025-07-11T00:13:54.451446845Z" level=info msg="StartContainer for \"051c0a1a4107757c84c45fa0dac95f680c77d489cd9f66ba8bb8676037ce3d53\" returns successfully" Jul 11 00:13:54.624819 containerd[1532]: 2025-07-11 00:13:54.542 [INFO][4952] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Jul 11 00:13:54.624819 containerd[1532]: 2025-07-11 00:13:54.542 [INFO][4952] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" iface="eth0" netns="/var/run/netns/cni-42dbb5ce-714f-fe79-d954-5484bda5b6ef" Jul 11 00:13:54.624819 containerd[1532]: 2025-07-11 00:13:54.543 [INFO][4952] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" iface="eth0" netns="/var/run/netns/cni-42dbb5ce-714f-fe79-d954-5484bda5b6ef" Jul 11 00:13:54.624819 containerd[1532]: 2025-07-11 00:13:54.543 [INFO][4952] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" iface="eth0" netns="/var/run/netns/cni-42dbb5ce-714f-fe79-d954-5484bda5b6ef" Jul 11 00:13:54.624819 containerd[1532]: 2025-07-11 00:13:54.543 [INFO][4952] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Jul 11 00:13:54.624819 containerd[1532]: 2025-07-11 00:13:54.544 [INFO][4952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Jul 11 00:13:54.624819 containerd[1532]: 2025-07-11 00:13:54.615 [INFO][4978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" HandleID="k8s-pod-network.ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Workload="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:13:54.624819 containerd[1532]: 2025-07-11 00:13:54.615 [INFO][4978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:54.624819 containerd[1532]: 2025-07-11 00:13:54.615 [INFO][4978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:13:54.624819 containerd[1532]: 2025-07-11 00:13:54.619 [WARNING][4978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" HandleID="k8s-pod-network.ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Workload="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:13:54.624819 containerd[1532]: 2025-07-11 00:13:54.619 [INFO][4978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" HandleID="k8s-pod-network.ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Workload="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:13:54.624819 containerd[1532]: 2025-07-11 00:13:54.620 [INFO][4978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:13:54.624819 containerd[1532]: 2025-07-11 00:13:54.621 [INFO][4952] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Jul 11 00:13:54.635262 containerd[1532]: time="2025-07-11T00:13:54.625858357Z" level=info msg="TearDown network for sandbox \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\" successfully" Jul 11 00:13:54.635262 containerd[1532]: time="2025-07-11T00:13:54.625880497Z" level=info msg="StopPodSandbox for \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\" returns successfully" Jul 11 00:13:54.635262 containerd[1532]: time="2025-07-11T00:13:54.626526464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b7988bb64-5mrdt,Uid:7e0575ec-5d0d-46a3-9f3a-19d2440f8d60,Namespace:calico-system,Attempt:1,}" Jul 11 00:13:54.627363 systemd[1]: run-netns-cni\x2d42dbb5ce\x2d714f\x2dfe79\x2dd954\x2d5484bda5b6ef.mount: Deactivated successfully. Jul 11 00:13:54.649891 containerd[1532]: 2025-07-11 00:13:54.528 [INFO][4932] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Jul 11 00:13:54.649891 containerd[1532]: 2025-07-11 00:13:54.528 [INFO][4932] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" iface="eth0" netns="/var/run/netns/cni-c137c88b-5f9a-2638-2dd3-bd0816f71760" Jul 11 00:13:54.649891 containerd[1532]: 2025-07-11 00:13:54.528 [INFO][4932] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" iface="eth0" netns="/var/run/netns/cni-c137c88b-5f9a-2638-2dd3-bd0816f71760" Jul 11 00:13:54.649891 containerd[1532]: 2025-07-11 00:13:54.529 [INFO][4932] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" iface="eth0" netns="/var/run/netns/cni-c137c88b-5f9a-2638-2dd3-bd0816f71760" Jul 11 00:13:54.649891 containerd[1532]: 2025-07-11 00:13:54.529 [INFO][4932] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Jul 11 00:13:54.649891 containerd[1532]: 2025-07-11 00:13:54.529 [INFO][4932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Jul 11 00:13:54.649891 containerd[1532]: 2025-07-11 00:13:54.615 [INFO][4972] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" HandleID="k8s-pod-network.3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Workload="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 00:13:54.649891 containerd[1532]: 2025-07-11 00:13:54.615 [INFO][4972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:54.649891 containerd[1532]: 2025-07-11 00:13:54.620 [INFO][4972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:13:54.649891 containerd[1532]: 2025-07-11 00:13:54.646 [WARNING][4972] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" HandleID="k8s-pod-network.3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Workload="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 00:13:54.649891 containerd[1532]: 2025-07-11 00:13:54.646 [INFO][4972] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" HandleID="k8s-pod-network.3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Workload="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 00:13:54.649891 containerd[1532]: 2025-07-11 00:13:54.647 [INFO][4972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:13:54.649891 containerd[1532]: 2025-07-11 00:13:54.648 [INFO][4932] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Jul 11 00:13:54.650662 containerd[1532]: time="2025-07-11T00:13:54.650574893Z" level=info msg="TearDown network for sandbox \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\" successfully" Jul 11 00:13:54.650662 containerd[1532]: time="2025-07-11T00:13:54.650602015Z" level=info msg="StopPodSandbox for \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\" returns successfully" Jul 11 00:13:54.651351 containerd[1532]: time="2025-07-11T00:13:54.651307867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p695n,Uid:a987efe9-25c5-4a4f-8880-f0e8c56f315d,Namespace:calico-system,Attempt:1,}" Jul 11 00:13:54.652486 systemd[1]: run-netns-cni\x2dc137c88b\x2d5f9a\x2d2638\x2d2dd3\x2dbd0816f71760.mount: Deactivated successfully. Jul 11 00:13:54.768164 systemd-networkd[1443]: cali383893d59b4: Link UP Jul 11 00:13:54.769242 systemd-networkd[1443]: cali383893d59b4: Gained carrier Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.712 [INFO][4999] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--p695n-eth0 csi-node-driver- calico-system a987efe9-25c5-4a4f-8880-f0e8c56f315d 942 0 2025-07-11 00:13:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-p695n eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali383893d59b4 [] [] }} ContainerID="49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" Namespace="calico-system" Pod="csi-node-driver-p695n" WorkloadEndpoint="localhost-k8s-csi--node--driver--p695n-" Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.712 [INFO][4999] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" Namespace="calico-system" Pod="csi-node-driver-p695n" WorkloadEndpoint="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.734 [INFO][5020] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" HandleID="k8s-pod-network.49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" Workload="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 
00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.734 [INFO][5020] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" HandleID="k8s-pod-network.49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" Workload="localhost-k8s-csi--node--driver--p695n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-p695n", "timestamp":"2025-07-11 00:13:54.733991094 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.734 [INFO][5020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.734 [INFO][5020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.734 [INFO][5020] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.738 [INFO][5020] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" host="localhost" Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.741 [INFO][5020] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.745 [INFO][5020] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.746 [INFO][5020] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.747 [INFO][5020] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.747 [INFO][5020] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" host="localhost" Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.755 [INFO][5020] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507 Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.757 [INFO][5020] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" host="localhost" Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.761 [INFO][5020] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" host="localhost" Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.761 [INFO][5020] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" host="localhost" Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.761 [INFO][5020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
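
This second allocation claims 192.168.88.135/26 for csi-node-driver-p695n from the same affine block, immediately after .133 and .134 were handed out above; the kube-controllers allocation further below receives .136. The toy loop below only illustrates that first-free ordering within a /26 and is nothing like Calico's datastore-backed allocator with affinities and handles; it also assumes the addresses below .133 were already taken earlier, which this excerpt does not show.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree returns the first address in the block that is not marked used.
    // It ignores affinities, reservations and the datastore; it only mirrors
    // the ordering visible in the log (.133, .134, .135, .136, ...).
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        used := map[netip.Addr]bool{}
        // Assumption: everything up to .134 is already assigned by the time this
        // request runs (the log shows .133 and .134 claimed just above; lower
        // addresses are presumed used before the excerpt starts).
        for a := block.Addr(); a.Compare(netip.MustParseAddr("192.168.88.135")) < 0; a = a.Next() {
            used[a] = true
        }
        if a, ok := nextFree(block, used); ok {
            fmt.Println("next free address:", a) // 192.168.88.135, as in the log
        }
    }
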
Jul 11 00:13:54.788864 containerd[1532]: 2025-07-11 00:13:54.761 [INFO][5020] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" HandleID="k8s-pod-network.49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" Workload="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 00:13:54.793303 containerd[1532]: 2025-07-11 00:13:54.764 [INFO][4999] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" Namespace="calico-system" Pod="csi-node-driver-p695n" WorkloadEndpoint="localhost-k8s-csi--node--driver--p695n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p695n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a987efe9-25c5-4a4f-8880-f0e8c56f315d", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-p695n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali383893d59b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:54.793303 containerd[1532]: 2025-07-11 00:13:54.764 [INFO][4999] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" Namespace="calico-system" Pod="csi-node-driver-p695n" WorkloadEndpoint="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 00:13:54.793303 containerd[1532]: 2025-07-11 00:13:54.764 [INFO][4999] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali383893d59b4 ContainerID="49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" Namespace="calico-system" Pod="csi-node-driver-p695n" WorkloadEndpoint="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 00:13:54.793303 containerd[1532]: 2025-07-11 00:13:54.774 [INFO][4999] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" Namespace="calico-system" Pod="csi-node-driver-p695n" WorkloadEndpoint="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 00:13:54.793303 containerd[1532]: 2025-07-11 00:13:54.775 [INFO][4999] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" Namespace="calico-system" Pod="csi-node-driver-p695n" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--p695n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p695n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a987efe9-25c5-4a4f-8880-f0e8c56f315d", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507", Pod:"csi-node-driver-p695n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali383893d59b4", MAC:"42:3c:07:32:4f:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:54.793303 containerd[1532]: 2025-07-11 00:13:54.785 [INFO][4999] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507" Namespace="calico-system" Pod="csi-node-driver-p695n" WorkloadEndpoint="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 00:13:54.827567 containerd[1532]: time="2025-07-11T00:13:54.827485956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:54.827567 containerd[1532]: time="2025-07-11T00:13:54.827560080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:54.827764 containerd[1532]: time="2025-07-11T00:13:54.827584702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:54.827764 containerd[1532]: time="2025-07-11T00:13:54.827663012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:54.833242 kubelet[2724]: I0711 00:13:54.832955 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m6474" podStartSLOduration=36.832935485 podStartE2EDuration="36.832935485s" podCreationTimestamp="2025-07-11 00:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:13:54.831841231 +0000 UTC m=+41.629254116" watchObservedRunningTime="2025-07-11 00:13:54.832935485 +0000 UTC m=+41.630348365" Jul 11 00:13:54.849185 systemd[1]: Started cri-containerd-49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507.scope - libcontainer container 49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507. Jul 11 00:13:54.862430 systemd-resolved[1444]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:13:54.879743 containerd[1532]: time="2025-07-11T00:13:54.879182046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p695n,Uid:a987efe9-25c5-4a4f-8880-f0e8c56f315d,Namespace:calico-system,Attempt:1,} returns sandbox id \"49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507\"" Jul 11 00:13:54.899767 systemd-networkd[1443]: cali88c4251ac9f: Link UP Jul 11 00:13:54.902967 systemd-networkd[1443]: cali88c4251ac9f: Gained carrier Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.699 [INFO][4989] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0 calico-kube-controllers-b7988bb64- calico-system 7e0575ec-5d0d-46a3-9f3a-19d2440f8d60 943 0 2025-07-11 00:13:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b7988bb64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-b7988bb64-5mrdt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali88c4251ac9f [] [] }} ContainerID="4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" Namespace="calico-system" Pod="calico-kube-controllers-b7988bb64-5mrdt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-" Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.699 [INFO][4989] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" Namespace="calico-system" Pod="calico-kube-controllers-b7988bb64-5mrdt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.740 [INFO][5013] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" HandleID="k8s-pod-network.4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" Workload="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.740 [INFO][5013] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" HandleID="k8s-pod-network.4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" 
Workload="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-b7988bb64-5mrdt", "timestamp":"2025-07-11 00:13:54.740323999 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.740 [INFO][5013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.761 [INFO][5013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.761 [INFO][5013] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.840 [INFO][5013] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" host="localhost" Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.847 [INFO][5013] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.860 [INFO][5013] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.872 [INFO][5013] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.876 [INFO][5013] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.876 [INFO][5013] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" host="localhost" Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.881 [INFO][5013] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9 Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.888 [INFO][5013] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" host="localhost" Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.894 [INFO][5013] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" host="localhost" Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.894 [INFO][5013] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" host="localhost" Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.894 [INFO][5013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:13:54.920262 containerd[1532]: 2025-07-11 00:13:54.894 [INFO][5013] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" HandleID="k8s-pod-network.4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" Workload="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:13:54.920799 containerd[1532]: 2025-07-11 00:13:54.895 [INFO][4989] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" Namespace="calico-system" Pod="calico-kube-controllers-b7988bb64-5mrdt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0", GenerateName:"calico-kube-controllers-b7988bb64-", Namespace:"calico-system", SelfLink:"", UID:"7e0575ec-5d0d-46a3-9f3a-19d2440f8d60", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b7988bb64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-b7988bb64-5mrdt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali88c4251ac9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:54.920799 containerd[1532]: 2025-07-11 00:13:54.895 [INFO][4989] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" Namespace="calico-system" Pod="calico-kube-controllers-b7988bb64-5mrdt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:13:54.920799 containerd[1532]: 2025-07-11 00:13:54.896 [INFO][4989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88c4251ac9f ContainerID="4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" Namespace="calico-system" Pod="calico-kube-controllers-b7988bb64-5mrdt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:13:54.920799 containerd[1532]: 2025-07-11 00:13:54.904 [INFO][4989] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" Namespace="calico-system" Pod="calico-kube-controllers-b7988bb64-5mrdt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:13:54.920799 containerd[1532]: 2025-07-11 00:13:54.904 [INFO][4989] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" Namespace="calico-system" Pod="calico-kube-controllers-b7988bb64-5mrdt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0", GenerateName:"calico-kube-controllers-b7988bb64-", Namespace:"calico-system", SelfLink:"", UID:"7e0575ec-5d0d-46a3-9f3a-19d2440f8d60", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b7988bb64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9", Pod:"calico-kube-controllers-b7988bb64-5mrdt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali88c4251ac9f", MAC:"62:e1:a3:8f:5d:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:54.920799 containerd[1532]: 2025-07-11 00:13:54.918 [INFO][4989] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9" Namespace="calico-system" Pod="calico-kube-controllers-b7988bb64-5mrdt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:13:54.939514 containerd[1532]: time="2025-07-11T00:13:54.939377721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:54.939694 containerd[1532]: time="2025-07-11T00:13:54.939466669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:54.939744 containerd[1532]: time="2025-07-11T00:13:54.939682597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:54.939888 containerd[1532]: time="2025-07-11T00:13:54.939843828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:54.957189 systemd[1]: Started cri-containerd-4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9.scope - libcontainer container 4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9. 
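
Each of the four "Added Mac, interface name, and active container ID" dumps in this log carries a MAC chosen for the pod's interface: 96:5c:93:dc:52:a2, 26:45:46:9e:b8:1a, 42:3c:07:32:4f:06 and 62:e1:a3:8f:5d:9e. A small check over those values (copied verbatim from the log) shows they are all unicast, locally administered addresses, which is what you would expect for software-generated veth MACs; this is an observation about the logged values, not a claim about how Calico derives them.

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // MACs copied from the WorkloadEndpoint dumps in this log.
        macs := []string{
            "96:5c:93:dc:52:a2", // calico-apiserver-dc67c5569-crf45
            "26:45:46:9e:b8:1a", // coredns-668d6bf9bc-m6474
            "42:3c:07:32:4f:06", // csi-node-driver-p695n
            "62:e1:a3:8f:5d:9e", // calico-kube-controllers-b7988bb64-5mrdt
        }
        for _, s := range macs {
            hw, err := net.ParseMAC(s)
            if err != nil {
                fmt.Println(s, "->", err)
                continue
            }
            unicast := hw[0]&0x01 == 0 // I/G bit clear => unicast
            local := hw[0]&0x02 != 0   // U/L bit set   => locally administered
            fmt.Printf("%s unicast=%v locally-administered=%v\n", hw, unicast, local)
        }
    }
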
Jul 11 00:13:54.968046 systemd-resolved[1444]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:13:54.986717 containerd[1532]: time="2025-07-11T00:13:54.986632807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b7988bb64-5mrdt,Uid:7e0575ec-5d0d-46a3-9f3a-19d2440f8d60,Namespace:calico-system,Attempt:1,} returns sandbox id \"4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9\"" Jul 11 00:13:55.461254 systemd-networkd[1443]: cali2522ea36ebf: Gained IPv6LL Jul 11 00:13:55.973798 systemd-networkd[1443]: calib70b5dec66a: Gained IPv6LL Jul 11 00:13:56.037232 systemd-networkd[1443]: cali383893d59b4: Gained IPv6LL Jul 11 00:13:56.805157 systemd-networkd[1443]: cali88c4251ac9f: Gained IPv6LL Jul 11 00:13:57.766941 containerd[1532]: time="2025-07-11T00:13:57.766700862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 11 00:13:57.785290 containerd[1532]: time="2025-07-11T00:13:57.784898314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:57.785512 containerd[1532]: time="2025-07-11T00:13:57.785469310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.581220808s" Jul 11 00:13:57.785512 containerd[1532]: time="2025-07-11T00:13:57.785491926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 00:13:57.798907 containerd[1532]: time="2025-07-11T00:13:57.798886513Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:57.799329 containerd[1532]: time="2025-07-11T00:13:57.799312575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:57.805248 containerd[1532]: time="2025-07-11T00:13:57.805211832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 00:13:57.823020 containerd[1532]: time="2025-07-11T00:13:57.821646309Z" level=info msg="CreateContainer within sandbox \"60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:13:57.861446 containerd[1532]: time="2025-07-11T00:13:57.861203303Z" level=info msg="CreateContainer within sandbox \"60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"120272477ff6ad0f49d51bb012c4ae870abd6c7eb25f96cd445a3016906410e5\"" Jul 11 00:13:57.862033 containerd[1532]: time="2025-07-11T00:13:57.861635930Z" level=info msg="StartContainer for \"120272477ff6ad0f49d51bb012c4ae870abd6c7eb25f96cd445a3016906410e5\"" Jul 11 00:13:57.897832 systemd[1]: Started cri-containerd-120272477ff6ad0f49d51bb012c4ae870abd6c7eb25f96cd445a3016906410e5.scope - libcontainer 
container 120272477ff6ad0f49d51bb012c4ae870abd6c7eb25f96cd445a3016906410e5. Jul 11 00:13:57.946369 containerd[1532]: time="2025-07-11T00:13:57.946288597Z" level=info msg="StartContainer for \"120272477ff6ad0f49d51bb012c4ae870abd6c7eb25f96cd445a3016906410e5\" returns successfully" Jul 11 00:13:59.094030 kubelet[2724]: I0711 00:13:59.093961 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dc67c5569-2wqmp" podStartSLOduration=26.216398412 podStartE2EDuration="32.092784332s" podCreationTimestamp="2025-07-11 00:13:27 +0000 UTC" firstStartedPulling="2025-07-11 00:13:51.922839953 +0000 UTC m=+38.720252833" lastFinishedPulling="2025-07-11 00:13:57.799225873 +0000 UTC m=+44.596638753" observedRunningTime="2025-07-11 00:13:59.077955439 +0000 UTC m=+45.875368325" watchObservedRunningTime="2025-07-11 00:13:59.092784332 +0000 UTC m=+45.890197217" Jul 11 00:14:02.703548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2216463487.mount: Deactivated successfully. Jul 11 00:14:03.539523 containerd[1532]: time="2025-07-11T00:14:03.539245362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:03.614332 containerd[1532]: time="2025-07-11T00:14:03.576928653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 11 00:14:03.641904 containerd[1532]: time="2025-07-11T00:14:03.641880826Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:03.653381 containerd[1532]: time="2025-07-11T00:14:03.653271772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:03.653906 containerd[1532]: time="2025-07-11T00:14:03.653578510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.848345549s" Jul 11 00:14:03.653906 containerd[1532]: time="2025-07-11T00:14:03.653599902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 11 00:14:03.766206 containerd[1532]: time="2025-07-11T00:14:03.766116755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:14:03.820911 containerd[1532]: time="2025-07-11T00:14:03.820667009Z" level=info msg="CreateContainer within sandbox \"a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 00:14:03.925942 containerd[1532]: time="2025-07-11T00:14:03.925883778Z" level=info msg="CreateContainer within sandbox \"a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f726df4dc4ccc1b8ff8d9b77f42f19330459c38b4659cd55885109c889bf1ec7\"" Jul 11 00:14:03.955695 containerd[1532]: time="2025-07-11T00:14:03.955658582Z" level=info msg="StartContainer for 
\"f726df4dc4ccc1b8ff8d9b77f42f19330459c38b4659cd55885109c889bf1ec7\"" Jul 11 00:14:04.071110 systemd[1]: Started cri-containerd-f726df4dc4ccc1b8ff8d9b77f42f19330459c38b4659cd55885109c889bf1ec7.scope - libcontainer container f726df4dc4ccc1b8ff8d9b77f42f19330459c38b4659cd55885109c889bf1ec7. Jul 11 00:14:04.123545 containerd[1532]: time="2025-07-11T00:14:04.123523181Z" level=info msg="StartContainer for \"f726df4dc4ccc1b8ff8d9b77f42f19330459c38b4659cd55885109c889bf1ec7\" returns successfully" Jul 11 00:14:04.189795 containerd[1532]: time="2025-07-11T00:14:04.189036521Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:04.191287 containerd[1532]: time="2025-07-11T00:14:04.190561044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 11 00:14:04.195581 containerd[1532]: time="2025-07-11T00:14:04.195449943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 429.305397ms" Jul 11 00:14:04.195581 containerd[1532]: time="2025-07-11T00:14:04.195515170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 00:14:04.216672 containerd[1532]: time="2025-07-11T00:14:04.216572849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 00:14:04.222738 containerd[1532]: time="2025-07-11T00:14:04.222697120Z" level=info msg="CreateContainer within sandbox \"eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:14:04.247277 containerd[1532]: time="2025-07-11T00:14:04.247224107Z" level=info msg="CreateContainer within sandbox \"eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eaba2fec73653c30c94671a5c6cedc32928b92f60aeb987c529e7ba9fd0b4435\"" Jul 11 00:14:04.251737 containerd[1532]: time="2025-07-11T00:14:04.251692100Z" level=info msg="StartContainer for \"eaba2fec73653c30c94671a5c6cedc32928b92f60aeb987c529e7ba9fd0b4435\"" Jul 11 00:14:04.287161 systemd[1]: Started cri-containerd-eaba2fec73653c30c94671a5c6cedc32928b92f60aeb987c529e7ba9fd0b4435.scope - libcontainer container eaba2fec73653c30c94671a5c6cedc32928b92f60aeb987c529e7ba9fd0b4435. 
Jul 11 00:14:04.326077 containerd[1532]: time="2025-07-11T00:14:04.325999331Z" level=info msg="StartContainer for \"eaba2fec73653c30c94671a5c6cedc32928b92f60aeb987c529e7ba9fd0b4435\" returns successfully" Jul 11 00:14:04.456367 kubelet[2724]: I0711 00:14:04.429606 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-9c6d6" podStartSLOduration=24.462119798 podStartE2EDuration="35.393001917s" podCreationTimestamp="2025-07-11 00:13:29 +0000 UTC" firstStartedPulling="2025-07-11 00:13:52.809319558 +0000 UTC m=+39.606732438" lastFinishedPulling="2025-07-11 00:14:03.740201679 +0000 UTC m=+50.537614557" observedRunningTime="2025-07-11 00:14:04.389531153 +0000 UTC m=+51.186944039" watchObservedRunningTime="2025-07-11 00:14:04.393001917 +0000 UTC m=+51.190414798" Jul 11 00:14:04.907082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3441843315.mount: Deactivated successfully. Jul 11 00:14:05.286191 kubelet[2724]: I0711 00:14:05.286058 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:14:05.323530 kubelet[2724]: I0711 00:14:05.313418 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dc67c5569-crf45" podStartSLOduration=28.204531315 podStartE2EDuration="38.313404289s" podCreationTimestamp="2025-07-11 00:13:27 +0000 UTC" firstStartedPulling="2025-07-11 00:13:54.105530854 +0000 UTC m=+40.902943737" lastFinishedPulling="2025-07-11 00:14:04.214403829 +0000 UTC m=+51.011816711" observedRunningTime="2025-07-11 00:14:05.31320276 +0000 UTC m=+52.110615646" watchObservedRunningTime="2025-07-11 00:14:05.313404289 +0000 UTC m=+52.110817169" Jul 11 00:14:06.243997 kubelet[2724]: I0711 00:14:06.243962 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:14:06.256852 kubelet[2724]: I0711 00:14:06.256836 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:14:06.387316 containerd[1532]: time="2025-07-11T00:14:06.387229061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:06.388869 containerd[1532]: time="2025-07-11T00:14:06.388794828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 11 00:14:06.390070 containerd[1532]: time="2025-07-11T00:14:06.389420637Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:06.391920 containerd[1532]: time="2025-07-11T00:14:06.390724732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:06.391920 containerd[1532]: time="2025-07-11T00:14:06.390999018Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.174400017s" Jul 11 00:14:06.391920 containerd[1532]: time="2025-07-11T00:14:06.391027909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference 
\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 11 00:14:06.394114 containerd[1532]: time="2025-07-11T00:14:06.394098819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 11 00:14:06.405901 containerd[1532]: time="2025-07-11T00:14:06.405872485Z" level=info msg="CreateContainer within sandbox \"49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 00:14:06.427036 containerd[1532]: time="2025-07-11T00:14:06.426938933Z" level=info msg="CreateContainer within sandbox \"49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"878127ec628c1570e94c51f3f3d5f1633d6e9bcb7dee0632d5c88bfd4554b0e5\"" Jul 11 00:14:06.430747 containerd[1532]: time="2025-07-11T00:14:06.430712732Z" level=info msg="StartContainer for \"878127ec628c1570e94c51f3f3d5f1633d6e9bcb7dee0632d5c88bfd4554b0e5\"" Jul 11 00:14:06.466893 systemd[1]: Started cri-containerd-878127ec628c1570e94c51f3f3d5f1633d6e9bcb7dee0632d5c88bfd4554b0e5.scope - libcontainer container 878127ec628c1570e94c51f3f3d5f1633d6e9bcb7dee0632d5c88bfd4554b0e5. Jul 11 00:14:06.507311 containerd[1532]: time="2025-07-11T00:14:06.507163164Z" level=info msg="StartContainer for \"878127ec628c1570e94c51f3f3d5f1633d6e9bcb7dee0632d5c88bfd4554b0e5\" returns successfully" Jul 11 00:14:09.025244 containerd[1532]: time="2025-07-11T00:14:09.025203156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:09.059225 containerd[1532]: time="2025-07-11T00:14:09.027282619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 11 00:14:09.072510 containerd[1532]: time="2025-07-11T00:14:09.072459449Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:09.073730 containerd[1532]: time="2025-07-11T00:14:09.073676223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:09.074322 containerd[1532]: time="2025-07-11T00:14:09.074080794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.679962875s" Jul 11 00:14:09.078065 containerd[1532]: time="2025-07-11T00:14:09.078038004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 11 00:14:09.078767 containerd[1532]: time="2025-07-11T00:14:09.078752020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 00:14:09.864802 containerd[1532]: time="2025-07-11T00:14:09.864735595Z" level=info msg="CreateContainer within sandbox \"4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 00:14:10.190595 containerd[1532]: time="2025-07-11T00:14:10.190479954Z" level=info msg="CreateContainer within sandbox \"4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f17ff9c0c223d3c5c2719f5cc5c93e5e48084262eddb7300ad737e398b4c1271\"" Jul 11 00:14:10.191147 containerd[1532]: time="2025-07-11T00:14:10.191128996Z" level=info msg="StartContainer for \"f17ff9c0c223d3c5c2719f5cc5c93e5e48084262eddb7300ad737e398b4c1271\"" Jul 11 00:14:10.346107 systemd[1]: Started cri-containerd-f17ff9c0c223d3c5c2719f5cc5c93e5e48084262eddb7300ad737e398b4c1271.scope - libcontainer container f17ff9c0c223d3c5c2719f5cc5c93e5e48084262eddb7300ad737e398b4c1271. Jul 11 00:14:10.379164 containerd[1532]: time="2025-07-11T00:14:10.379138464Z" level=info msg="StartContainer for \"f17ff9c0c223d3c5c2719f5cc5c93e5e48084262eddb7300ad737e398b4c1271\" returns successfully" Jul 11 00:14:10.815117 containerd[1532]: time="2025-07-11T00:14:10.815076682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:10.815617 containerd[1532]: time="2025-07-11T00:14:10.815485779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 11 00:14:10.817534 containerd[1532]: time="2025-07-11T00:14:10.815790076Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:10.817534 containerd[1532]: time="2025-07-11T00:14:10.816974849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:10.817534 containerd[1532]: time="2025-07-11T00:14:10.817403389Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.738551576s" Jul 11 00:14:10.817534 containerd[1532]: time="2025-07-11T00:14:10.817421946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 11 00:14:10.850175 containerd[1532]: time="2025-07-11T00:14:10.850148146Z" level=info msg="CreateContainer within sandbox \"49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 11 00:14:10.857164 containerd[1532]: time="2025-07-11T00:14:10.857142222Z" level=info msg="CreateContainer within sandbox \"49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c8b3878991338352211079fc9ba1f353c7671a63add6df274c8b38fd06557eb0\"" Jul 11 00:14:10.857776 containerd[1532]: time="2025-07-11T00:14:10.857737999Z" level=info msg="StartContainer for 
\"c8b3878991338352211079fc9ba1f353c7671a63add6df274c8b38fd06557eb0\"" Jul 11 00:14:10.889121 systemd[1]: Started cri-containerd-c8b3878991338352211079fc9ba1f353c7671a63add6df274c8b38fd06557eb0.scope - libcontainer container c8b3878991338352211079fc9ba1f353c7671a63add6df274c8b38fd06557eb0. Jul 11 00:14:10.915725 containerd[1532]: time="2025-07-11T00:14:10.911596588Z" level=info msg="StartContainer for \"c8b3878991338352211079fc9ba1f353c7671a63add6df274c8b38fd06557eb0\" returns successfully" Jul 11 00:14:11.531398 kubelet[2724]: I0711 00:14:11.521903 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b7988bb64-5mrdt" podStartSLOduration=28.417599524 podStartE2EDuration="42.511868784s" podCreationTimestamp="2025-07-11 00:13:29 +0000 UTC" firstStartedPulling="2025-07-11 00:13:54.987746471 +0000 UTC m=+41.785159350" lastFinishedPulling="2025-07-11 00:14:09.082015725 +0000 UTC m=+55.879428610" observedRunningTime="2025-07-11 00:14:11.487963997 +0000 UTC m=+58.285376878" watchObservedRunningTime="2025-07-11 00:14:11.511868784 +0000 UTC m=+58.309281665" Jul 11 00:14:11.543407 kubelet[2724]: I0711 00:14:11.543368 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-p695n" podStartSLOduration=26.605206252 podStartE2EDuration="42.543343314s" podCreationTimestamp="2025-07-11 00:13:29 +0000 UTC" firstStartedPulling="2025-07-11 00:13:54.88173989 +0000 UTC m=+41.679152772" lastFinishedPulling="2025-07-11 00:14:10.81987696 +0000 UTC m=+57.617289834" observedRunningTime="2025-07-11 00:14:11.531688357 +0000 UTC m=+58.329101235" watchObservedRunningTime="2025-07-11 00:14:11.543343314 +0000 UTC m=+58.340756194" Jul 11 00:14:11.791453 kubelet[2724]: I0711 00:14:11.789756 2724 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 11 00:14:11.795546 kubelet[2724]: I0711 00:14:11.795528 2724 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 11 00:14:13.438204 containerd[1532]: time="2025-07-11T00:14:13.438094599Z" level=info msg="StopPodSandbox for \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\"" Jul 11 00:14:14.356819 containerd[1532]: 2025-07-11 00:14:14.041 [WARNING][5506] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m6474-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c569219a-53af-4571-883f-9b7bfe060437", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af", Pod:"coredns-668d6bf9bc-m6474", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib70b5dec66a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:14.356819 containerd[1532]: 2025-07-11 00:14:14.045 [INFO][5506] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Jul 11 00:14:14.356819 containerd[1532]: 2025-07-11 00:14:14.045 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" iface="eth0" netns="" Jul 11 00:14:14.356819 containerd[1532]: 2025-07-11 00:14:14.045 [INFO][5506] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Jul 11 00:14:14.356819 containerd[1532]: 2025-07-11 00:14:14.045 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Jul 11 00:14:14.356819 containerd[1532]: 2025-07-11 00:14:14.329 [INFO][5513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" HandleID="k8s-pod-network.d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Workload="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:14:14.356819 containerd[1532]: 2025-07-11 00:14:14.333 [INFO][5513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:14.356819 containerd[1532]: 2025-07-11 00:14:14.334 [INFO][5513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:14:14.356819 containerd[1532]: 2025-07-11 00:14:14.343 [WARNING][5513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" HandleID="k8s-pod-network.d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Workload="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:14:14.356819 containerd[1532]: 2025-07-11 00:14:14.344 [INFO][5513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" HandleID="k8s-pod-network.d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Workload="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:14:14.356819 containerd[1532]: 2025-07-11 00:14:14.354 [INFO][5513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:14.356819 containerd[1532]: 2025-07-11 00:14:14.355 [INFO][5506] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Jul 11 00:14:14.381742 containerd[1532]: time="2025-07-11T00:14:14.356851348Z" level=info msg="TearDown network for sandbox \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\" successfully" Jul 11 00:14:14.381742 containerd[1532]: time="2025-07-11T00:14:14.356874544Z" level=info msg="StopPodSandbox for \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\" returns successfully" Jul 11 00:14:14.443583 containerd[1532]: time="2025-07-11T00:14:14.443545877Z" level=info msg="RemovePodSandbox for \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\"" Jul 11 00:14:14.445373 containerd[1532]: time="2025-07-11T00:14:14.445348965Z" level=info msg="Forcibly stopping sandbox \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\"" Jul 11 00:14:14.505275 containerd[1532]: 2025-07-11 00:14:14.471 [WARNING][5527] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m6474-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c569219a-53af-4571-883f-9b7bfe060437", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fc361d48b783a75af49a9876356c2cdec8beb78735cc1ea2d07021f0767005af", Pod:"coredns-668d6bf9bc-m6474", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib70b5dec66a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:14.505275 containerd[1532]: 2025-07-11 00:14:14.472 [INFO][5527] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Jul 11 00:14:14.505275 containerd[1532]: 2025-07-11 00:14:14.472 [INFO][5527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" iface="eth0" netns="" Jul 11 00:14:14.505275 containerd[1532]: 2025-07-11 00:14:14.472 [INFO][5527] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Jul 11 00:14:14.505275 containerd[1532]: 2025-07-11 00:14:14.472 [INFO][5527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Jul 11 00:14:14.505275 containerd[1532]: 2025-07-11 00:14:14.490 [INFO][5534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" HandleID="k8s-pod-network.d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Workload="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:14:14.505275 containerd[1532]: 2025-07-11 00:14:14.490 [INFO][5534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:14.505275 containerd[1532]: 2025-07-11 00:14:14.490 [INFO][5534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:14:14.505275 containerd[1532]: 2025-07-11 00:14:14.498 [WARNING][5534] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" HandleID="k8s-pod-network.d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Workload="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:14:14.505275 containerd[1532]: 2025-07-11 00:14:14.498 [INFO][5534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" HandleID="k8s-pod-network.d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Workload="localhost-k8s-coredns--668d6bf9bc--m6474-eth0" Jul 11 00:14:14.505275 containerd[1532]: 2025-07-11 00:14:14.501 [INFO][5534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:14.505275 containerd[1532]: 2025-07-11 00:14:14.503 [INFO][5527] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a" Jul 11 00:14:14.507887 containerd[1532]: time="2025-07-11T00:14:14.505300238Z" level=info msg="TearDown network for sandbox \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\" successfully" Jul 11 00:14:14.522525 containerd[1532]: time="2025-07-11T00:14:14.522057839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:14.531114 containerd[1532]: time="2025-07-11T00:14:14.531079989Z" level=info msg="RemovePodSandbox \"d0ed437dd0c861b5e87957eff6438ef106eb797d6ec6e22035bf52088f08480a\" returns successfully" Jul 11 00:14:14.537366 containerd[1532]: time="2025-07-11T00:14:14.537157435Z" level=info msg="StopPodSandbox for \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\"" Jul 11 00:14:14.591781 containerd[1532]: 2025-07-11 00:14:14.566 [WARNING][5548] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3f974f17-07d9-43c5-843d-8f77256391bc", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509", Pod:"coredns-668d6bf9bc-sx2g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib660e69b0a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:14.591781 containerd[1532]: 2025-07-11 00:14:14.567 [INFO][5548] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Jul 11 00:14:14.591781 containerd[1532]: 2025-07-11 00:14:14.567 [INFO][5548] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" iface="eth0" netns="" Jul 11 00:14:14.591781 containerd[1532]: 2025-07-11 00:14:14.567 [INFO][5548] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Jul 11 00:14:14.591781 containerd[1532]: 2025-07-11 00:14:14.567 [INFO][5548] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Jul 11 00:14:14.591781 containerd[1532]: 2025-07-11 00:14:14.584 [INFO][5555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" HandleID="k8s-pod-network.69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Workload="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:14:14.591781 containerd[1532]: 2025-07-11 00:14:14.584 [INFO][5555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:14.591781 containerd[1532]: 2025-07-11 00:14:14.584 [INFO][5555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:14:14.591781 containerd[1532]: 2025-07-11 00:14:14.588 [WARNING][5555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" HandleID="k8s-pod-network.69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Workload="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:14:14.591781 containerd[1532]: 2025-07-11 00:14:14.588 [INFO][5555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" HandleID="k8s-pod-network.69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Workload="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:14:14.591781 containerd[1532]: 2025-07-11 00:14:14.589 [INFO][5555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:14.591781 containerd[1532]: 2025-07-11 00:14:14.590 [INFO][5548] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Jul 11 00:14:14.593807 containerd[1532]: time="2025-07-11T00:14:14.592058501Z" level=info msg="TearDown network for sandbox \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\" successfully" Jul 11 00:14:14.593807 containerd[1532]: time="2025-07-11T00:14:14.592075271Z" level=info msg="StopPodSandbox for \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\" returns successfully" Jul 11 00:14:14.593807 containerd[1532]: time="2025-07-11T00:14:14.592459647Z" level=info msg="RemovePodSandbox for \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\"" Jul 11 00:14:14.593807 containerd[1532]: time="2025-07-11T00:14:14.592475618Z" level=info msg="Forcibly stopping sandbox \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\"" Jul 11 00:14:14.646232 containerd[1532]: 2025-07-11 00:14:14.621 [WARNING][5569] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3f974f17-07d9-43c5-843d-8f77256391bc", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe2e1ea97d893cf8ebcb812f2b9a13d0f7e70f34e7201a4ee21a4354e62c5509", Pod:"coredns-668d6bf9bc-sx2g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib660e69b0a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:14.646232 containerd[1532]: 2025-07-11 00:14:14.621 [INFO][5569] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Jul 11 00:14:14.646232 containerd[1532]: 2025-07-11 00:14:14.621 [INFO][5569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" iface="eth0" netns="" Jul 11 00:14:14.646232 containerd[1532]: 2025-07-11 00:14:14.621 [INFO][5569] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Jul 11 00:14:14.646232 containerd[1532]: 2025-07-11 00:14:14.621 [INFO][5569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Jul 11 00:14:14.646232 containerd[1532]: 2025-07-11 00:14:14.638 [INFO][5576] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" HandleID="k8s-pod-network.69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Workload="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:14:14.646232 containerd[1532]: 2025-07-11 00:14:14.638 [INFO][5576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:14.646232 containerd[1532]: 2025-07-11 00:14:14.638 [INFO][5576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:14:14.646232 containerd[1532]: 2025-07-11 00:14:14.642 [WARNING][5576] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" HandleID="k8s-pod-network.69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Workload="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:14:14.646232 containerd[1532]: 2025-07-11 00:14:14.642 [INFO][5576] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" HandleID="k8s-pod-network.69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Workload="localhost-k8s-coredns--668d6bf9bc--sx2g9-eth0" Jul 11 00:14:14.646232 containerd[1532]: 2025-07-11 00:14:14.643 [INFO][5576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:14.646232 containerd[1532]: 2025-07-11 00:14:14.645 [INFO][5569] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9" Jul 11 00:14:14.647914 containerd[1532]: time="2025-07-11T00:14:14.646509764Z" level=info msg="TearDown network for sandbox \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\" successfully" Jul 11 00:14:14.667853 containerd[1532]: time="2025-07-11T00:14:14.667736507Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:14.667853 containerd[1532]: time="2025-07-11T00:14:14.667785713Z" level=info msg="RemovePodSandbox \"69e9110948f6211e84d244eeb40a7fd632aa3db12687d73a47317a3dde2bb3d9\" returns successfully" Jul 11 00:14:14.675845 containerd[1532]: time="2025-07-11T00:14:14.675820565Z" level=info msg="StopPodSandbox for \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\"" Jul 11 00:14:14.725912 containerd[1532]: 2025-07-11 00:14:14.701 [WARNING][5590] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0", GenerateName:"calico-apiserver-dc67c5569-", Namespace:"calico-apiserver", SelfLink:"", UID:"f28665de-a757-4ccc-8a19-96a88f8187af", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc67c5569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3", Pod:"calico-apiserver-dc67c5569-crf45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2522ea36ebf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:14.725912 containerd[1532]: 2025-07-11 00:14:14.701 [INFO][5590] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Jul 11 00:14:14.725912 containerd[1532]: 2025-07-11 00:14:14.701 [INFO][5590] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" iface="eth0" netns="" Jul 11 00:14:14.725912 containerd[1532]: 2025-07-11 00:14:14.701 [INFO][5590] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Jul 11 00:14:14.725912 containerd[1532]: 2025-07-11 00:14:14.701 [INFO][5590] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Jul 11 00:14:14.725912 containerd[1532]: 2025-07-11 00:14:14.715 [INFO][5597] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" HandleID="k8s-pod-network.e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Workload="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:14:14.725912 containerd[1532]: 2025-07-11 00:14:14.715 [INFO][5597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:14.725912 containerd[1532]: 2025-07-11 00:14:14.716 [INFO][5597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:14.725912 containerd[1532]: 2025-07-11 00:14:14.720 [WARNING][5597] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" HandleID="k8s-pod-network.e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Workload="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:14:14.725912 containerd[1532]: 2025-07-11 00:14:14.720 [INFO][5597] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" HandleID="k8s-pod-network.e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Workload="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:14:14.725912 containerd[1532]: 2025-07-11 00:14:14.723 [INFO][5597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:14.725912 containerd[1532]: 2025-07-11 00:14:14.724 [INFO][5590] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Jul 11 00:14:14.725912 containerd[1532]: time="2025-07-11T00:14:14.725879721Z" level=info msg="TearDown network for sandbox \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\" successfully" Jul 11 00:14:14.725912 containerd[1532]: time="2025-07-11T00:14:14.725893575Z" level=info msg="StopPodSandbox for \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\" returns successfully" Jul 11 00:14:14.728141 containerd[1532]: time="2025-07-11T00:14:14.726433613Z" level=info msg="RemovePodSandbox for \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\"" Jul 11 00:14:14.728141 containerd[1532]: time="2025-07-11T00:14:14.726448453Z" level=info msg="Forcibly stopping sandbox \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\"" Jul 11 00:14:14.784473 containerd[1532]: 2025-07-11 00:14:14.759 [WARNING][5611] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0", GenerateName:"calico-apiserver-dc67c5569-", Namespace:"calico-apiserver", SelfLink:"", UID:"f28665de-a757-4ccc-8a19-96a88f8187af", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc67c5569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eef2f697aaf23db7cf5655431bc90aa044a5ddb6c2f48d0d6f2348d03e6cc5b3", Pod:"calico-apiserver-dc67c5569-crf45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2522ea36ebf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:14.784473 containerd[1532]: 2025-07-11 00:14:14.759 [INFO][5611] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Jul 11 00:14:14.784473 containerd[1532]: 2025-07-11 00:14:14.759 [INFO][5611] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" iface="eth0" netns="" Jul 11 00:14:14.784473 containerd[1532]: 2025-07-11 00:14:14.759 [INFO][5611] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Jul 11 00:14:14.784473 containerd[1532]: 2025-07-11 00:14:14.759 [INFO][5611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Jul 11 00:14:14.784473 containerd[1532]: 2025-07-11 00:14:14.777 [INFO][5619] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" HandleID="k8s-pod-network.e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Workload="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:14:14.784473 containerd[1532]: 2025-07-11 00:14:14.777 [INFO][5619] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:14.784473 containerd[1532]: 2025-07-11 00:14:14.777 [INFO][5619] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:14.784473 containerd[1532]: 2025-07-11 00:14:14.781 [WARNING][5619] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" HandleID="k8s-pod-network.e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Workload="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:14:14.784473 containerd[1532]: 2025-07-11 00:14:14.781 [INFO][5619] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" HandleID="k8s-pod-network.e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Workload="localhost-k8s-calico--apiserver--dc67c5569--crf45-eth0" Jul 11 00:14:14.784473 containerd[1532]: 2025-07-11 00:14:14.782 [INFO][5619] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:14.784473 containerd[1532]: 2025-07-11 00:14:14.783 [INFO][5611] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7" Jul 11 00:14:14.784763 containerd[1532]: time="2025-07-11T00:14:14.784495944Z" level=info msg="TearDown network for sandbox \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\" successfully" Jul 11 00:14:14.787089 containerd[1532]: time="2025-07-11T00:14:14.786640234Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:14.787235 containerd[1532]: time="2025-07-11T00:14:14.787164474Z" level=info msg="RemovePodSandbox \"e645550908924781556682590e4d6f29f600f8b5d5f9b1ff0ecf11c36d5e54d7\" returns successfully" Jul 11 00:14:14.795049 containerd[1532]: time="2025-07-11T00:14:14.794858749Z" level=info msg="StopPodSandbox for \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\"" Jul 11 00:14:14.847901 containerd[1532]: 2025-07-11 00:14:14.817 [WARNING][5633] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0", GenerateName:"calico-kube-controllers-b7988bb64-", Namespace:"calico-system", SelfLink:"", UID:"7e0575ec-5d0d-46a3-9f3a-19d2440f8d60", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b7988bb64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9", Pod:"calico-kube-controllers-b7988bb64-5mrdt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali88c4251ac9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:14.847901 containerd[1532]: 2025-07-11 00:14:14.817 [INFO][5633] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Jul 11 00:14:14.847901 containerd[1532]: 2025-07-11 00:14:14.817 [INFO][5633] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" iface="eth0" netns="" Jul 11 00:14:14.847901 containerd[1532]: 2025-07-11 00:14:14.817 [INFO][5633] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Jul 11 00:14:14.847901 containerd[1532]: 2025-07-11 00:14:14.817 [INFO][5633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Jul 11 00:14:14.847901 containerd[1532]: 2025-07-11 00:14:14.838 [INFO][5640] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" HandleID="k8s-pod-network.ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Workload="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:14:14.847901 containerd[1532]: 2025-07-11 00:14:14.838 [INFO][5640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:14.847901 containerd[1532]: 2025-07-11 00:14:14.838 [INFO][5640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:14.847901 containerd[1532]: 2025-07-11 00:14:14.844 [WARNING][5640] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" HandleID="k8s-pod-network.ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Workload="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:14:14.847901 containerd[1532]: 2025-07-11 00:14:14.844 [INFO][5640] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" HandleID="k8s-pod-network.ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Workload="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:14:14.847901 containerd[1532]: 2025-07-11 00:14:14.845 [INFO][5640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:14.847901 containerd[1532]: 2025-07-11 00:14:14.846 [INFO][5633] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Jul 11 00:14:14.848326 containerd[1532]: time="2025-07-11T00:14:14.847982395Z" level=info msg="TearDown network for sandbox \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\" successfully" Jul 11 00:14:14.848326 containerd[1532]: time="2025-07-11T00:14:14.848001308Z" level=info msg="StopPodSandbox for \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\" returns successfully" Jul 11 00:14:14.849574 containerd[1532]: time="2025-07-11T00:14:14.848686342Z" level=info msg="RemovePodSandbox for \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\"" Jul 11 00:14:14.849574 containerd[1532]: time="2025-07-11T00:14:14.848704874Z" level=info msg="Forcibly stopping sandbox \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\"" Jul 11 00:14:14.888233 containerd[1532]: 2025-07-11 00:14:14.869 [WARNING][5654] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0", GenerateName:"calico-kube-controllers-b7988bb64-", Namespace:"calico-system", SelfLink:"", UID:"7e0575ec-5d0d-46a3-9f3a-19d2440f8d60", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b7988bb64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4103efc7183a0af7b57841ce08404451da7d0a237bc38b9f2ca41123c18e5db9", Pod:"calico-kube-controllers-b7988bb64-5mrdt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali88c4251ac9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:14.888233 containerd[1532]: 2025-07-11 00:14:14.869 [INFO][5654] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Jul 11 00:14:14.888233 containerd[1532]: 2025-07-11 00:14:14.869 [INFO][5654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" iface="eth0" netns="" Jul 11 00:14:14.888233 containerd[1532]: 2025-07-11 00:14:14.869 [INFO][5654] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Jul 11 00:14:14.888233 containerd[1532]: 2025-07-11 00:14:14.869 [INFO][5654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Jul 11 00:14:14.888233 containerd[1532]: 2025-07-11 00:14:14.881 [INFO][5661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" HandleID="k8s-pod-network.ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Workload="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:14:14.888233 containerd[1532]: 2025-07-11 00:14:14.882 [INFO][5661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:14.888233 containerd[1532]: 2025-07-11 00:14:14.882 [INFO][5661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:14.888233 containerd[1532]: 2025-07-11 00:14:14.885 [WARNING][5661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" HandleID="k8s-pod-network.ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Workload="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:14:14.888233 containerd[1532]: 2025-07-11 00:14:14.885 [INFO][5661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" HandleID="k8s-pod-network.ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Workload="localhost-k8s-calico--kube--controllers--b7988bb64--5mrdt-eth0" Jul 11 00:14:14.888233 containerd[1532]: 2025-07-11 00:14:14.886 [INFO][5661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:14.888233 containerd[1532]: 2025-07-11 00:14:14.887 [INFO][5654] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325" Jul 11 00:14:14.890136 containerd[1532]: time="2025-07-11T00:14:14.888310632Z" level=info msg="TearDown network for sandbox \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\" successfully" Jul 11 00:14:14.891123 containerd[1532]: time="2025-07-11T00:14:14.890844595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:14.891123 containerd[1532]: time="2025-07-11T00:14:14.890879875Z" level=info msg="RemovePodSandbox \"ffe8bca6b3519ae14a3baf36ccb47ec825c5311ff9cdaf8c272287de2147e325\" returns successfully" Jul 11 00:14:14.891397 containerd[1532]: time="2025-07-11T00:14:14.891255647Z" level=info msg="StopPodSandbox for \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\"" Jul 11 00:14:14.936690 containerd[1532]: 2025-07-11 00:14:14.915 [WARNING][5675] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0", GenerateName:"calico-apiserver-dc67c5569-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cae885b-c99b-4b29-a6a7-210ea001e884", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc67c5569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404", Pod:"calico-apiserver-dc67c5569-2wqmp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2afa3b17ce5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:14.936690 containerd[1532]: 2025-07-11 00:14:14.915 [INFO][5675] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Jul 11 00:14:14.936690 containerd[1532]: 2025-07-11 00:14:14.915 [INFO][5675] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" iface="eth0" netns="" Jul 11 00:14:14.936690 containerd[1532]: 2025-07-11 00:14:14.915 [INFO][5675] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Jul 11 00:14:14.936690 containerd[1532]: 2025-07-11 00:14:14.915 [INFO][5675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Jul 11 00:14:14.936690 containerd[1532]: 2025-07-11 00:14:14.928 [INFO][5682] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" HandleID="k8s-pod-network.63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Workload="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:14:14.936690 containerd[1532]: 2025-07-11 00:14:14.929 [INFO][5682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:14.936690 containerd[1532]: 2025-07-11 00:14:14.929 [INFO][5682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:14.936690 containerd[1532]: 2025-07-11 00:14:14.932 [WARNING][5682] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" HandleID="k8s-pod-network.63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Workload="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:14:14.936690 containerd[1532]: 2025-07-11 00:14:14.932 [INFO][5682] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" HandleID="k8s-pod-network.63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Workload="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:14:14.936690 containerd[1532]: 2025-07-11 00:14:14.934 [INFO][5682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:14.936690 containerd[1532]: 2025-07-11 00:14:14.935 [INFO][5675] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Jul 11 00:14:14.936690 containerd[1532]: time="2025-07-11T00:14:14.936679922Z" level=info msg="TearDown network for sandbox \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\" successfully" Jul 11 00:14:14.938840 containerd[1532]: time="2025-07-11T00:14:14.936696618Z" level=info msg="StopPodSandbox for \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\" returns successfully" Jul 11 00:14:14.938840 containerd[1532]: time="2025-07-11T00:14:14.937666971Z" level=info msg="RemovePodSandbox for \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\"" Jul 11 00:14:14.938840 containerd[1532]: time="2025-07-11T00:14:14.937684271Z" level=info msg="Forcibly stopping sandbox \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\"" Jul 11 00:14:14.979043 containerd[1532]: 2025-07-11 00:14:14.958 [WARNING][5696] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0", GenerateName:"calico-apiserver-dc67c5569-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cae885b-c99b-4b29-a6a7-210ea001e884", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc67c5569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60a621243b28e7d51e4292d0a14f05a52f6eedc187de2635438816e5181d3404", Pod:"calico-apiserver-dc67c5569-2wqmp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2afa3b17ce5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:14.979043 containerd[1532]: 2025-07-11 00:14:14.958 [INFO][5696] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Jul 11 00:14:14.979043 containerd[1532]: 2025-07-11 00:14:14.958 [INFO][5696] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" iface="eth0" netns="" Jul 11 00:14:14.979043 containerd[1532]: 2025-07-11 00:14:14.958 [INFO][5696] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Jul 11 00:14:14.979043 containerd[1532]: 2025-07-11 00:14:14.958 [INFO][5696] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Jul 11 00:14:14.979043 containerd[1532]: 2025-07-11 00:14:14.972 [INFO][5703] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" HandleID="k8s-pod-network.63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Workload="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:14:14.979043 containerd[1532]: 2025-07-11 00:14:14.972 [INFO][5703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:14.979043 containerd[1532]: 2025-07-11 00:14:14.972 [INFO][5703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:14.979043 containerd[1532]: 2025-07-11 00:14:14.976 [WARNING][5703] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" HandleID="k8s-pod-network.63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Workload="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:14:14.979043 containerd[1532]: 2025-07-11 00:14:14.976 [INFO][5703] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" HandleID="k8s-pod-network.63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Workload="localhost-k8s-calico--apiserver--dc67c5569--2wqmp-eth0" Jul 11 00:14:14.979043 containerd[1532]: 2025-07-11 00:14:14.976 [INFO][5703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:14.979043 containerd[1532]: 2025-07-11 00:14:14.977 [INFO][5696] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff" Jul 11 00:14:14.980395 containerd[1532]: time="2025-07-11T00:14:14.979065811Z" level=info msg="TearDown network for sandbox \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\" successfully" Jul 11 00:14:14.981521 containerd[1532]: time="2025-07-11T00:14:14.981503340Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:14.981573 containerd[1532]: time="2025-07-11T00:14:14.981544494Z" level=info msg="RemovePodSandbox \"63350b31d8141f2d63681b9df12eba1ebd908c7610cc90ce8cf21df22cc1a0ff\" returns successfully" Jul 11 00:14:14.981823 containerd[1532]: time="2025-07-11T00:14:14.981807786Z" level=info msg="StopPodSandbox for \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\"" Jul 11 00:14:15.037791 containerd[1532]: 2025-07-11 00:14:15.007 [WARNING][5717] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" WorkloadEndpoint="localhost-k8s-whisker--647954949f--2pjvw-eth0" Jul 11 00:14:15.037791 containerd[1532]: 2025-07-11 00:14:15.007 [INFO][5717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Jul 11 00:14:15.037791 containerd[1532]: 2025-07-11 00:14:15.007 [INFO][5717] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" iface="eth0" netns="" Jul 11 00:14:15.037791 containerd[1532]: 2025-07-11 00:14:15.007 [INFO][5717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Jul 11 00:14:15.037791 containerd[1532]: 2025-07-11 00:14:15.007 [INFO][5717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Jul 11 00:14:15.037791 containerd[1532]: 2025-07-11 00:14:15.026 [INFO][5726] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" HandleID="k8s-pod-network.0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Workload="localhost-k8s-whisker--647954949f--2pjvw-eth0" Jul 11 00:14:15.037791 containerd[1532]: 2025-07-11 00:14:15.026 [INFO][5726] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:15.037791 containerd[1532]: 2025-07-11 00:14:15.026 [INFO][5726] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:15.037791 containerd[1532]: 2025-07-11 00:14:15.031 [WARNING][5726] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" HandleID="k8s-pod-network.0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Workload="localhost-k8s-whisker--647954949f--2pjvw-eth0" Jul 11 00:14:15.037791 containerd[1532]: 2025-07-11 00:14:15.031 [INFO][5726] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" HandleID="k8s-pod-network.0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Workload="localhost-k8s-whisker--647954949f--2pjvw-eth0" Jul 11 00:14:15.037791 containerd[1532]: 2025-07-11 00:14:15.033 [INFO][5726] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:15.037791 containerd[1532]: 2025-07-11 00:14:15.035 [INFO][5717] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Jul 11 00:14:15.038198 containerd[1532]: time="2025-07-11T00:14:15.037823993Z" level=info msg="TearDown network for sandbox \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\" successfully" Jul 11 00:14:15.038198 containerd[1532]: time="2025-07-11T00:14:15.037840636Z" level=info msg="StopPodSandbox for \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\" returns successfully" Jul 11 00:14:15.038198 containerd[1532]: time="2025-07-11T00:14:15.038166362Z" level=info msg="RemovePodSandbox for \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\"" Jul 11 00:14:15.038198 containerd[1532]: time="2025-07-11T00:14:15.038183631Z" level=info msg="Forcibly stopping sandbox \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\"" Jul 11 00:14:15.085058 containerd[1532]: 2025-07-11 00:14:15.061 [WARNING][5742] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" WorkloadEndpoint="localhost-k8s-whisker--647954949f--2pjvw-eth0" Jul 11 00:14:15.085058 containerd[1532]: 2025-07-11 00:14:15.061 [INFO][5742] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Jul 11 00:14:15.085058 containerd[1532]: 2025-07-11 00:14:15.061 [INFO][5742] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" iface="eth0" netns="" Jul 11 00:14:15.085058 containerd[1532]: 2025-07-11 00:14:15.061 [INFO][5742] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Jul 11 00:14:15.085058 containerd[1532]: 2025-07-11 00:14:15.061 [INFO][5742] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Jul 11 00:14:15.085058 containerd[1532]: 2025-07-11 00:14:15.076 [INFO][5749] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" HandleID="k8s-pod-network.0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Workload="localhost-k8s-whisker--647954949f--2pjvw-eth0" Jul 11 00:14:15.085058 containerd[1532]: 2025-07-11 00:14:15.077 [INFO][5749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:15.085058 containerd[1532]: 2025-07-11 00:14:15.077 [INFO][5749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:15.085058 containerd[1532]: 2025-07-11 00:14:15.081 [WARNING][5749] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" HandleID="k8s-pod-network.0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Workload="localhost-k8s-whisker--647954949f--2pjvw-eth0" Jul 11 00:14:15.085058 containerd[1532]: 2025-07-11 00:14:15.081 [INFO][5749] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" HandleID="k8s-pod-network.0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Workload="localhost-k8s-whisker--647954949f--2pjvw-eth0" Jul 11 00:14:15.085058 containerd[1532]: 2025-07-11 00:14:15.082 [INFO][5749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:15.085058 containerd[1532]: 2025-07-11 00:14:15.083 [INFO][5742] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf" Jul 11 00:14:15.085058 containerd[1532]: time="2025-07-11T00:14:15.084571897Z" level=info msg="TearDown network for sandbox \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\" successfully" Jul 11 00:14:15.086341 containerd[1532]: time="2025-07-11T00:14:15.086324648Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:15.086400 containerd[1532]: time="2025-07-11T00:14:15.086359005Z" level=info msg="RemovePodSandbox \"0289d14af8edcf860bb017b8856f4650ef7bed1d8076c34fdc9d90808b74d2cf\" returns successfully" Jul 11 00:14:15.086705 containerd[1532]: time="2025-07-11T00:14:15.086686836Z" level=info msg="StopPodSandbox for \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\"" Jul 11 00:14:15.129626 containerd[1532]: 2025-07-11 00:14:15.107 [WARNING][5763] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"65805cee-bfb6-4749-bbfc-8e9405f90c70", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af", Pod:"goldmane-768f4c5c69-9c6d6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie1b6715502f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:15.129626 containerd[1532]: 2025-07-11 00:14:15.107 [INFO][5763] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Jul 11 00:14:15.129626 containerd[1532]: 2025-07-11 00:14:15.107 [INFO][5763] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" iface="eth0" netns="" Jul 11 00:14:15.129626 containerd[1532]: 2025-07-11 00:14:15.108 [INFO][5763] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Jul 11 00:14:15.129626 containerd[1532]: 2025-07-11 00:14:15.108 [INFO][5763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Jul 11 00:14:15.129626 containerd[1532]: 2025-07-11 00:14:15.122 [INFO][5770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" HandleID="k8s-pod-network.85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Workload="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:14:15.129626 containerd[1532]: 2025-07-11 00:14:15.122 [INFO][5770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:15.129626 containerd[1532]: 2025-07-11 00:14:15.122 [INFO][5770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:15.129626 containerd[1532]: 2025-07-11 00:14:15.126 [WARNING][5770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" HandleID="k8s-pod-network.85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Workload="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:14:15.129626 containerd[1532]: 2025-07-11 00:14:15.126 [INFO][5770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" HandleID="k8s-pod-network.85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Workload="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:14:15.129626 containerd[1532]: 2025-07-11 00:14:15.127 [INFO][5770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:15.129626 containerd[1532]: 2025-07-11 00:14:15.128 [INFO][5763] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Jul 11 00:14:15.129626 containerd[1532]: time="2025-07-11T00:14:15.129586037Z" level=info msg="TearDown network for sandbox \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\" successfully" Jul 11 00:14:15.129626 containerd[1532]: time="2025-07-11T00:14:15.129602029Z" level=info msg="StopPodSandbox for \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\" returns successfully" Jul 11 00:14:15.130575 containerd[1532]: time="2025-07-11T00:14:15.130286088Z" level=info msg="RemovePodSandbox for \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\"" Jul 11 00:14:15.130575 containerd[1532]: time="2025-07-11T00:14:15.130302232Z" level=info msg="Forcibly stopping sandbox \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\"" Jul 11 00:14:15.174998 containerd[1532]: 2025-07-11 00:14:15.152 [WARNING][5784] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"65805cee-bfb6-4749-bbfc-8e9405f90c70", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8bbfc618ef0bb473f4ace34bb8be14c5e46763ea7a137eb3e2eee6cfe9296af", Pod:"goldmane-768f4c5c69-9c6d6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie1b6715502f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:15.174998 containerd[1532]: 2025-07-11 00:14:15.152 [INFO][5784] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Jul 11 00:14:15.174998 containerd[1532]: 2025-07-11 00:14:15.152 [INFO][5784] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" iface="eth0" netns="" Jul 11 00:14:15.174998 containerd[1532]: 2025-07-11 00:14:15.152 [INFO][5784] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Jul 11 00:14:15.174998 containerd[1532]: 2025-07-11 00:14:15.152 [INFO][5784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Jul 11 00:14:15.174998 containerd[1532]: 2025-07-11 00:14:15.166 [INFO][5791] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" HandleID="k8s-pod-network.85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Workload="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:14:15.174998 containerd[1532]: 2025-07-11 00:14:15.166 [INFO][5791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:15.174998 containerd[1532]: 2025-07-11 00:14:15.166 [INFO][5791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:15.174998 containerd[1532]: 2025-07-11 00:14:15.171 [WARNING][5791] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" HandleID="k8s-pod-network.85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Workload="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:14:15.174998 containerd[1532]: 2025-07-11 00:14:15.171 [INFO][5791] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" HandleID="k8s-pod-network.85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Workload="localhost-k8s-goldmane--768f4c5c69--9c6d6-eth0" Jul 11 00:14:15.174998 containerd[1532]: 2025-07-11 00:14:15.172 [INFO][5791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:15.174998 containerd[1532]: 2025-07-11 00:14:15.173 [INFO][5784] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f" Jul 11 00:14:15.175457 containerd[1532]: time="2025-07-11T00:14:15.175043488Z" level=info msg="TearDown network for sandbox \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\" successfully" Jul 11 00:14:15.177040 containerd[1532]: time="2025-07-11T00:14:15.176986808Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:15.177096 containerd[1532]: time="2025-07-11T00:14:15.177071250Z" level=info msg="RemovePodSandbox \"85d63562f472e99d465275d0336cec1641b3e21967a45808d33ef932163b110f\" returns successfully" Jul 11 00:14:15.177646 containerd[1532]: time="2025-07-11T00:14:15.177468539Z" level=info msg="StopPodSandbox for \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\"" Jul 11 00:14:15.243524 containerd[1532]: 2025-07-11 00:14:15.203 [WARNING][5805] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p695n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a987efe9-25c5-4a4f-8880-f0e8c56f315d", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507", Pod:"csi-node-driver-p695n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali383893d59b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:15.243524 containerd[1532]: 2025-07-11 00:14:15.203 [INFO][5805] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Jul 11 00:14:15.243524 containerd[1532]: 2025-07-11 00:14:15.203 [INFO][5805] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" iface="eth0" netns="" Jul 11 00:14:15.243524 containerd[1532]: 2025-07-11 00:14:15.203 [INFO][5805] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Jul 11 00:14:15.243524 containerd[1532]: 2025-07-11 00:14:15.204 [INFO][5805] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Jul 11 00:14:15.243524 containerd[1532]: 2025-07-11 00:14:15.235 [INFO][5812] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" HandleID="k8s-pod-network.3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Workload="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 00:14:15.243524 containerd[1532]: 2025-07-11 00:14:15.235 [INFO][5812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:15.243524 containerd[1532]: 2025-07-11 00:14:15.235 [INFO][5812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:15.243524 containerd[1532]: 2025-07-11 00:14:15.240 [WARNING][5812] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" HandleID="k8s-pod-network.3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Workload="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 00:14:15.243524 containerd[1532]: 2025-07-11 00:14:15.240 [INFO][5812] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" HandleID="k8s-pod-network.3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Workload="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 00:14:15.243524 containerd[1532]: 2025-07-11 00:14:15.241 [INFO][5812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:15.243524 containerd[1532]: 2025-07-11 00:14:15.242 [INFO][5805] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Jul 11 00:14:15.243932 containerd[1532]: time="2025-07-11T00:14:15.243857395Z" level=info msg="TearDown network for sandbox \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\" successfully" Jul 11 00:14:15.243932 containerd[1532]: time="2025-07-11T00:14:15.243878096Z" level=info msg="StopPodSandbox for \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\" returns successfully" Jul 11 00:14:15.249436 containerd[1532]: time="2025-07-11T00:14:15.249369211Z" level=info msg="RemovePodSandbox for \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\"" Jul 11 00:14:15.249436 containerd[1532]: time="2025-07-11T00:14:15.249387707Z" level=info msg="Forcibly stopping sandbox \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\"" Jul 11 00:14:15.408661 containerd[1532]: 2025-07-11 00:14:15.378 [WARNING][5826] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p695n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a987efe9-25c5-4a4f-8880-f0e8c56f315d", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49ca4057090013f0428f5d2d40c2606bea60bea9d9e4e3e62f4cb4b26f3c8507", Pod:"csi-node-driver-p695n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali383893d59b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:15.408661 containerd[1532]: 2025-07-11 00:14:15.378 [INFO][5826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Jul 11 00:14:15.408661 containerd[1532]: 2025-07-11 00:14:15.378 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" iface="eth0" netns="" Jul 11 00:14:15.408661 containerd[1532]: 2025-07-11 00:14:15.378 [INFO][5826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Jul 11 00:14:15.408661 containerd[1532]: 2025-07-11 00:14:15.378 [INFO][5826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Jul 11 00:14:15.408661 containerd[1532]: 2025-07-11 00:14:15.396 [INFO][5833] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" HandleID="k8s-pod-network.3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Workload="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 00:14:15.408661 containerd[1532]: 2025-07-11 00:14:15.396 [INFO][5833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:15.408661 containerd[1532]: 2025-07-11 00:14:15.396 [INFO][5833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:15.408661 containerd[1532]: 2025-07-11 00:14:15.401 [WARNING][5833] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" HandleID="k8s-pod-network.3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Workload="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 00:14:15.408661 containerd[1532]: 2025-07-11 00:14:15.401 [INFO][5833] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" HandleID="k8s-pod-network.3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Workload="localhost-k8s-csi--node--driver--p695n-eth0" Jul 11 00:14:15.408661 containerd[1532]: 2025-07-11 00:14:15.406 [INFO][5833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:15.408661 containerd[1532]: 2025-07-11 00:14:15.407 [INFO][5826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e" Jul 11 00:14:15.413997 containerd[1532]: time="2025-07-11T00:14:15.408638522Z" level=info msg="TearDown network for sandbox \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\" successfully" Jul 11 00:14:15.423687 containerd[1532]: time="2025-07-11T00:14:15.423672371Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:15.423763 containerd[1532]: time="2025-07-11T00:14:15.423753483Z" level=info msg="RemovePodSandbox \"3631bb335a8863d75b69513745c0f935cb3f7f93003f67c7f0a13091f975789e\" returns successfully" Jul 11 00:14:19.946746 kubelet[2724]: I0711 00:14:19.942618 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:14:23.032473 systemd[1]: Started sshd@7-139.178.70.105:22-139.178.68.195:46030.service - OpenSSH per-connection server daemon (139.178.68.195:46030). Jul 11 00:14:23.132995 sshd[5909]: Accepted publickey for core from 139.178.68.195 port 46030 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:14:23.136278 sshd[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:23.148959 systemd-logind[1514]: New session 10 of user core. Jul 11 00:14:23.153150 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 00:14:23.836864 sshd[5909]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:23.848726 systemd[1]: sshd@7-139.178.70.105:22-139.178.68.195:46030.service: Deactivated successfully. Jul 11 00:14:23.850558 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:14:23.853204 systemd-logind[1514]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:14:23.855201 systemd-logind[1514]: Removed session 10. Jul 11 00:14:28.848503 systemd[1]: Started sshd@8-139.178.70.105:22-139.178.68.195:40860.service - OpenSSH per-connection server daemon (139.178.68.195:40860). Jul 11 00:14:28.957865 sshd[5961]: Accepted publickey for core from 139.178.68.195 port 40860 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:14:28.959943 sshd[5961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:28.963617 systemd-logind[1514]: New session 11 of user core. Jul 11 00:14:28.967217 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 11 00:14:29.364283 sshd[5961]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:29.367581 systemd-logind[1514]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:14:29.368709 systemd[1]: sshd@8-139.178.70.105:22-139.178.68.195:40860.service: Deactivated successfully. Jul 11 00:14:29.371838 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:14:29.373279 systemd-logind[1514]: Removed session 11. Jul 11 00:14:34.378222 systemd[1]: Started sshd@9-139.178.70.105:22-139.178.68.195:40866.service - OpenSSH per-connection server daemon (139.178.68.195:40866). Jul 11 00:14:34.469956 sshd[5975]: Accepted publickey for core from 139.178.68.195 port 40866 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:14:34.472324 sshd[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:34.478396 systemd-logind[1514]: New session 12 of user core. Jul 11 00:14:34.483203 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 00:14:34.860201 sshd[5975]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:34.866974 systemd[1]: sshd@9-139.178.70.105:22-139.178.68.195:40866.service: Deactivated successfully. Jul 11 00:14:34.868254 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:14:34.870657 systemd-logind[1514]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:14:34.877304 systemd[1]: Started sshd@10-139.178.70.105:22-139.178.68.195:40874.service - OpenSSH per-connection server daemon (139.178.68.195:40874). Jul 11 00:14:34.878505 systemd-logind[1514]: Removed session 12. Jul 11 00:14:34.935851 sshd[5989]: Accepted publickey for core from 139.178.68.195 port 40874 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:14:34.936971 sshd[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:34.941985 systemd-logind[1514]: New session 13 of user core. Jul 11 00:14:34.945092 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 11 00:14:35.130984 sshd[5989]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:35.137380 systemd[1]: Started sshd@11-139.178.70.105:22-139.178.68.195:40880.service - OpenSSH per-connection server daemon (139.178.68.195:40880). Jul 11 00:14:35.141527 systemd[1]: sshd@10-139.178.70.105:22-139.178.68.195:40874.service: Deactivated successfully. Jul 11 00:14:35.143775 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:14:35.145102 systemd-logind[1514]: Session 13 logged out. Waiting for processes to exit. Jul 11 00:14:35.145678 systemd-logind[1514]: Removed session 13. Jul 11 00:14:35.239941 sshd[6000]: Accepted publickey for core from 139.178.68.195 port 40880 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:14:35.241475 sshd[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:35.247483 systemd-logind[1514]: New session 14 of user core. Jul 11 00:14:35.253382 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 11 00:14:35.399586 sshd[6000]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:35.406551 systemd[1]: sshd@11-139.178.70.105:22-139.178.68.195:40880.service: Deactivated successfully. Jul 11 00:14:35.407642 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:14:35.408066 systemd-logind[1514]: Session 14 logged out. Waiting for processes to exit. 
Jul 11 00:14:35.408733 systemd-logind[1514]: Removed session 14. Jul 11 00:14:36.586045 systemd[1]: run-containerd-runc-k8s.io-f726df4dc4ccc1b8ff8d9b77f42f19330459c38b4659cd55885109c889bf1ec7-runc.RSxW6M.mount: Deactivated successfully. Jul 11 00:14:40.469116 systemd[1]: Started sshd@12-139.178.70.105:22-139.178.68.195:53536.service - OpenSSH per-connection server daemon (139.178.68.195:53536). Jul 11 00:14:40.582065 sshd[6069]: Accepted publickey for core from 139.178.68.195 port 53536 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:14:40.583735 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:40.587107 systemd-logind[1514]: New session 15 of user core. Jul 11 00:14:40.595114 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 11 00:14:41.162150 sshd[6069]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:41.166220 systemd[1]: sshd@12-139.178.70.105:22-139.178.68.195:53536.service: Deactivated successfully. Jul 11 00:14:41.167300 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:14:41.170121 systemd-logind[1514]: Session 15 logged out. Waiting for processes to exit. Jul 11 00:14:41.170739 systemd-logind[1514]: Removed session 15. Jul 11 00:14:46.170174 systemd[1]: Started sshd@13-139.178.70.105:22-139.178.68.195:53542.service - OpenSSH per-connection server daemon (139.178.68.195:53542). Jul 11 00:14:46.564601 sshd[6103]: Accepted publickey for core from 139.178.68.195 port 53542 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:14:46.603814 sshd[6103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:46.618945 systemd-logind[1514]: New session 16 of user core. Jul 11 00:14:46.624186 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 11 00:14:47.416852 sshd[6103]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:47.425910 systemd[1]: sshd@13-139.178.70.105:22-139.178.68.195:53542.service: Deactivated successfully. Jul 11 00:14:47.427468 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 00:14:47.429982 systemd-logind[1514]: Session 16 logged out. Waiting for processes to exit. Jul 11 00:14:47.444373 systemd-logind[1514]: Removed session 16. Jul 11 00:14:52.630468 systemd[1]: Started sshd@14-139.178.70.105:22-139.178.68.195:43606.service - OpenSSH per-connection server daemon (139.178.68.195:43606). Jul 11 00:14:52.826891 sshd[6143]: Accepted publickey for core from 139.178.68.195 port 43606 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:14:52.850692 sshd[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:52.865815 systemd-logind[1514]: New session 17 of user core. Jul 11 00:14:52.875156 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 11 00:14:53.476761 sshd[6143]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:53.479833 systemd[1]: sshd@14-139.178.70.105:22-139.178.68.195:43606.service: Deactivated successfully. Jul 11 00:14:53.481534 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:14:53.491581 systemd-logind[1514]: Session 17 logged out. Waiting for processes to exit. Jul 11 00:14:53.492368 systemd-logind[1514]: Removed session 17. Jul 11 00:14:58.487440 systemd[1]: Started sshd@15-139.178.70.105:22-139.178.68.195:35436.service - OpenSSH per-connection server daemon (139.178.68.195:35436). 
Jul 11 00:14:58.626332 sshd[6156]: Accepted publickey for core from 139.178.68.195 port 35436 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:14:58.627565 sshd[6156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:58.633036 systemd-logind[1514]: New session 18 of user core. Jul 11 00:14:58.636447 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 11 00:14:59.560705 sshd[6156]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:59.566802 systemd[1]: Started sshd@16-139.178.70.105:22-139.178.68.195:35442.service - OpenSSH per-connection server daemon (139.178.68.195:35442). Jul 11 00:14:59.573452 systemd[1]: sshd@15-139.178.70.105:22-139.178.68.195:35436.service: Deactivated successfully. Jul 11 00:14:59.574075 systemd-logind[1514]: Session 18 logged out. Waiting for processes to exit. Jul 11 00:14:59.577643 systemd[1]: session-18.scope: Deactivated successfully. Jul 11 00:14:59.579515 systemd-logind[1514]: Removed session 18. Jul 11 00:14:59.796211 sshd[6167]: Accepted publickey for core from 139.178.68.195 port 35442 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:14:59.804938 sshd[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:59.810299 systemd-logind[1514]: New session 19 of user core. Jul 11 00:14:59.815133 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 11 00:15:00.712916 sshd[6167]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:00.724270 systemd[1]: sshd@16-139.178.70.105:22-139.178.68.195:35442.service: Deactivated successfully. Jul 11 00:15:00.725399 systemd[1]: session-19.scope: Deactivated successfully. Jul 11 00:15:00.727106 systemd-logind[1514]: Session 19 logged out. Waiting for processes to exit. Jul 11 00:15:00.731409 systemd[1]: Started sshd@17-139.178.70.105:22-139.178.68.195:35448.service - OpenSSH per-connection server daemon (139.178.68.195:35448). Jul 11 00:15:00.732364 systemd-logind[1514]: Removed session 19. Jul 11 00:15:00.830066 sshd[6180]: Accepted publickey for core from 139.178.68.195 port 35448 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:15:00.831339 sshd[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:00.834495 systemd-logind[1514]: New session 20 of user core. Jul 11 00:15:00.841101 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 11 00:15:02.194598 sshd[6180]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:02.223802 systemd[1]: sshd@17-139.178.70.105:22-139.178.68.195:35448.service: Deactivated successfully. Jul 11 00:15:02.226464 systemd[1]: session-20.scope: Deactivated successfully. Jul 11 00:15:02.227045 systemd-logind[1514]: Session 20 logged out. Waiting for processes to exit. Jul 11 00:15:02.241383 systemd[1]: Started sshd@18-139.178.70.105:22-139.178.68.195:35452.service - OpenSSH per-connection server daemon (139.178.68.195:35452). Jul 11 00:15:02.242995 systemd-logind[1514]: Removed session 20. Jul 11 00:15:02.505474 sshd[6197]: Accepted publickey for core from 139.178.68.195 port 35452 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:15:02.511440 sshd[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:02.517602 systemd-logind[1514]: New session 21 of user core. Jul 11 00:15:02.525216 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 11 00:15:04.973178 kubelet[2724]: E0711 00:15:04.871107 2724 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.353s" Jul 11 00:15:06.403995 sshd[6197]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:06.469227 systemd[1]: Started sshd@19-139.178.70.105:22-139.178.68.195:35454.service - OpenSSH per-connection server daemon (139.178.68.195:35454). Jul 11 00:15:06.469595 systemd[1]: sshd@18-139.178.70.105:22-139.178.68.195:35452.service: Deactivated successfully. Jul 11 00:15:06.471691 systemd[1]: session-21.scope: Deactivated successfully. Jul 11 00:15:06.473218 systemd-logind[1514]: Session 21 logged out. Waiting for processes to exit. Jul 11 00:15:06.475143 systemd-logind[1514]: Removed session 21. Jul 11 00:15:07.287387 sshd[6210]: Accepted publickey for core from 139.178.68.195 port 35454 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:15:07.302642 sshd[6210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:07.319983 systemd-logind[1514]: New session 22 of user core. Jul 11 00:15:07.323118 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 11 00:15:10.592629 sshd[6210]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:10.663139 systemd[1]: sshd@19-139.178.70.105:22-139.178.68.195:35454.service: Deactivated successfully. Jul 11 00:15:10.664849 systemd[1]: session-22.scope: Deactivated successfully. Jul 11 00:15:10.666225 systemd-logind[1514]: Session 22 logged out. Waiting for processes to exit. Jul 11 00:15:10.670597 systemd-logind[1514]: Removed session 22. Jul 11 00:15:15.661238 systemd[1]: Started sshd@20-139.178.70.105:22-139.178.68.195:52692.service - OpenSSH per-connection server daemon (139.178.68.195:52692). Jul 11 00:15:15.849372 sshd[6281]: Accepted publickey for core from 139.178.68.195 port 52692 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:15:15.867204 sshd[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:15.873257 systemd-logind[1514]: New session 23 of user core. Jul 11 00:15:15.879138 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 11 00:15:18.083327 sshd[6281]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:18.184545 systemd[1]: sshd@20-139.178.70.105:22-139.178.68.195:52692.service: Deactivated successfully. Jul 11 00:15:18.186070 systemd[1]: session-23.scope: Deactivated successfully. Jul 11 00:15:18.187417 systemd-logind[1514]: Session 23 logged out. Waiting for processes to exit. Jul 11 00:15:18.188357 systemd-logind[1514]: Removed session 23. Jul 11 00:15:23.307247 systemd[1]: Started sshd@21-139.178.70.105:22-139.178.68.195:48252.service - OpenSSH per-connection server daemon (139.178.68.195:48252). Jul 11 00:15:23.620252 sshd[6371]: Accepted publickey for core from 139.178.68.195 port 48252 ssh2: RSA SHA256:qznOWapQhaq5ZLJONcMT9WQpHg2LEhVZQ4jktRwI5fg Jul 11 00:15:23.640590 sshd[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:23.660674 systemd-logind[1514]: New session 24 of user core. Jul 11 00:15:23.666213 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 11 00:15:24.689203 sshd[6371]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:24.691788 systemd-logind[1514]: Session 24 logged out. Waiting for processes to exit. 
Jul 11 00:15:24.692293 systemd[1]: sshd@21-139.178.70.105:22-139.178.68.195:48252.service: Deactivated successfully. Jul 11 00:15:24.693509 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 00:15:24.694368 systemd-logind[1514]: Removed session 24.