Sep 12 17:39:22.737130 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:39:22.737146 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:39:22.737153 kernel: Disabled fast string operations
Sep 12 17:39:22.737157 kernel: BIOS-provided physical RAM map:
Sep 12 17:39:22.737161 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Sep 12 17:39:22.737165 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Sep 12 17:39:22.737171 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Sep 12 17:39:22.737175 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Sep 12 17:39:22.737179 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Sep 12 17:39:22.737183 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Sep 12 17:39:22.737187 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Sep 12 17:39:22.737192 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Sep 12 17:39:22.737196 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Sep 12 17:39:22.737200 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 12 17:39:22.737206 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Sep 12 17:39:22.737211 kernel: NX (Execute Disable) protection: active
Sep 12 17:39:22.737216 kernel: APIC: Static calls initialized
Sep 12 17:39:22.737220 kernel: SMBIOS 2.7 present.
Sep 12 17:39:22.737225 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Sep 12 17:39:22.737230 kernel: vmware: hypercall mode: 0x00
Sep 12 17:39:22.737235 kernel: Hypervisor detected: VMware
Sep 12 17:39:22.737240 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Sep 12 17:39:22.737245 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Sep 12 17:39:22.737250 kernel: vmware: using clock offset of 2747830112 ns
Sep 12 17:39:22.737255 kernel: tsc: Detected 3408.000 MHz processor
Sep 12 17:39:22.737630 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:39:22.737639 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:39:22.737645 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Sep 12 17:39:22.737650 kernel: total RAM covered: 3072M
Sep 12 17:39:22.737654 kernel: Found optimal setting for mtrr clean up
Sep 12 17:39:22.737660 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Sep 12 17:39:22.737668 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Sep 12 17:39:22.737673 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:39:22.737677 kernel: Using GB pages for direct mapping
Sep 12 17:39:22.737683 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:39:22.737687 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Sep 12 17:39:22.737692 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Sep 12 17:39:22.737697 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Sep 12 17:39:22.737702 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Sep 12 17:39:22.737707 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Sep 12 17:39:22.737715 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Sep 12 17:39:22.737720 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Sep 12 17:39:22.737725 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Sep 12 17:39:22.737730 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Sep 12 17:39:22.737735 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Sep 12 17:39:22.737741 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Sep 12 17:39:22.737746 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Sep 12 17:39:22.737752 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Sep 12 17:39:22.737757 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Sep 12 17:39:22.737762 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Sep 12 17:39:22.737767 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Sep 12 17:39:22.737772 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Sep 12 17:39:22.737777 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Sep 12 17:39:22.737782 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Sep 12 17:39:22.737787 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Sep 12 17:39:22.737793 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Sep 12 17:39:22.737798 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Sep 12 17:39:22.737803 kernel: system APIC only can use physical flat
Sep 12 17:39:22.737808 kernel: APIC: Switched APIC routing to: physical flat
Sep 12 17:39:22.737814 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 17:39:22.737819 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Sep 12 17:39:22.737824 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Sep 12 17:39:22.737829 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Sep 12 17:39:22.737834 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Sep 12 17:39:22.737840 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Sep 12 17:39:22.737845 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Sep 12 17:39:22.737850 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Sep 12 17:39:22.737855 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Sep 12 17:39:22.737860 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Sep 12 17:39:22.737865 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Sep 12 17:39:22.737870 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Sep 12 17:39:22.737875 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Sep 12 17:39:22.737880 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Sep 12 17:39:22.737885 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Sep 12 17:39:22.737891 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Sep 12 17:39:22.737896 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Sep 12 17:39:22.737901 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Sep 12 17:39:22.737906 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Sep 12 17:39:22.737911 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Sep 12 17:39:22.737916 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Sep 12 17:39:22.737920 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Sep 12 17:39:22.737925 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Sep 12 17:39:22.737931 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Sep 12 17:39:22.737936 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Sep 12 17:39:22.737941 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Sep 12 17:39:22.737947 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Sep 12 17:39:22.737952 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Sep 12 17:39:22.737957 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Sep 12 17:39:22.737962 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Sep 12 17:39:22.737967 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Sep 12 17:39:22.737972 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Sep 12 17:39:22.737977 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Sep 12 17:39:22.737982 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Sep 12 17:39:22.737987 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Sep 12 17:39:22.737992 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Sep 12 17:39:22.737998 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Sep 12 17:39:22.738003 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Sep 12 17:39:22.738008 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Sep 12 17:39:22.738013 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Sep 12 17:39:22.738018 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Sep 12 17:39:22.738023 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Sep 12 17:39:22.738028 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Sep 12 17:39:22.738033 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Sep 12 17:39:22.738038 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Sep 12 17:39:22.738043 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Sep 12 17:39:22.738049 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Sep 12 17:39:22.738054 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Sep 12 17:39:22.738059 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Sep 12 17:39:22.738064 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Sep 12 17:39:22.738069 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Sep 12 17:39:22.738074 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Sep 12 17:39:22.738079 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Sep 12 17:39:22.738084 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Sep 12 17:39:22.738089 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Sep 12 17:39:22.738093 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Sep 12 17:39:22.738100 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Sep 12 17:39:22.738104 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Sep 12 17:39:22.738110 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Sep 12 17:39:22.738119 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Sep 12 17:39:22.738124 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Sep 12 17:39:22.738130 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Sep 12 17:39:22.738135 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Sep 12 17:39:22.738140 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Sep 12 17:39:22.738145 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Sep 12 17:39:22.738152 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Sep 12 17:39:22.738157 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Sep 12 17:39:22.738163 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Sep 12 17:39:22.738168 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Sep 12 17:39:22.738173 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Sep 12 17:39:22.738178 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Sep 12 17:39:22.738184 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Sep 12 17:39:22.738189 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Sep 12 17:39:22.738194 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Sep 12 17:39:22.738199 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Sep 12 17:39:22.738206 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Sep 12 17:39:22.738211 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Sep 12 17:39:22.738216 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Sep 12 17:39:22.738222 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Sep 12 17:39:22.738227 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Sep 12 17:39:22.738232 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Sep 12 17:39:22.738238 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Sep 12 17:39:22.738243 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Sep 12 17:39:22.738248 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Sep 12 17:39:22.738253 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Sep 12 17:39:22.738299 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Sep 12 17:39:22.738305 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Sep 12 17:39:22.738310 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Sep 12 17:39:22.738316 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Sep 12 17:39:22.738321 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Sep 12 17:39:22.738326 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Sep 12 17:39:22.738331 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Sep 12 17:39:22.738337 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Sep 12 17:39:22.738342 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Sep 12 17:39:22.738347 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Sep 12 17:39:22.738355 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Sep 12 17:39:22.738360 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Sep 12 17:39:22.738366 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Sep 12 17:39:22.738371 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Sep 12 17:39:22.738376 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Sep 12 17:39:22.738381 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Sep 12 17:39:22.738387 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Sep 12 17:39:22.738392 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Sep 12 17:39:22.738397 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Sep 12 17:39:22.738402 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Sep 12 17:39:22.738408 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Sep 12 17:39:22.738415 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Sep 12 17:39:22.738420 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Sep 12 17:39:22.738425 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Sep 12 17:39:22.738431 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Sep 12 17:39:22.738436 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Sep 12 17:39:22.738441 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Sep 12 17:39:22.738446 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Sep 12 17:39:22.738452 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Sep 12 17:39:22.738457 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Sep 12 17:39:22.738462 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Sep 12 17:39:22.738469 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Sep 12 17:39:22.738474 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Sep 12 17:39:22.738479 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Sep 12 17:39:22.738484 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Sep 12 17:39:22.738490 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Sep 12 17:39:22.738495 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Sep 12 17:39:22.738500 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Sep 12 17:39:22.738505 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Sep 12 17:39:22.738511 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Sep 12 17:39:22.738516 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Sep 12 17:39:22.738523 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Sep 12 17:39:22.738528 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Sep 12 17:39:22.738534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 12 17:39:22.738539 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 12 17:39:22.738545 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Sep 12 17:39:22.738550 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Sep 12 17:39:22.738556 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Sep 12 17:39:22.738562 kernel: Zone ranges:
Sep 12 17:39:22.738567 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:39:22.738574 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Sep 12 17:39:22.738579 kernel: Normal empty
Sep 12 17:39:22.738585 kernel: Movable zone start for each node
Sep 12 17:39:22.738590 kernel: Early memory node ranges
Sep 12 17:39:22.738596 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Sep 12 17:39:22.738601 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Sep 12 17:39:22.738607 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Sep 12 17:39:22.738612 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Sep 12 17:39:22.738617 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:39:22.738623 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Sep 12 17:39:22.738629 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Sep 12 17:39:22.738635 kernel: ACPI: PM-Timer IO Port: 0x1008
Sep 12 17:39:22.738640 kernel: system APIC only can use physical flat
Sep 12 17:39:22.738646 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Sep 12 17:39:22.738651 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 12 17:39:22.738657 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 12 17:39:22.738662 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 12 17:39:22.738667 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 12 17:39:22.738672 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 12 17:39:22.738679 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 12 17:39:22.738684 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 12 17:39:22.738690 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 12 17:39:22.738695 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 12 17:39:22.738700 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 12 17:39:22.738706 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 12 17:39:22.738711 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 12 17:39:22.738716 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 12 17:39:22.738722 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 12 17:39:22.738727 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 12 17:39:22.738733 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 12 17:39:22.738739 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Sep 12 17:39:22.738744 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Sep 12 17:39:22.738749 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Sep 12 17:39:22.738755 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Sep 12 17:39:22.738760 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Sep 12 17:39:22.738765 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Sep 12 17:39:22.738771 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Sep 12 17:39:22.738776 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Sep 12 17:39:22.738782 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Sep 12 17:39:22.738788 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Sep 12 17:39:22.738793 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Sep 12 17:39:22.738799 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Sep 12 17:39:22.738804 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Sep 12 17:39:22.738809 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Sep 12 17:39:22.738815 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Sep 12 17:39:22.738820 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Sep 12 17:39:22.738825 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Sep 12 17:39:22.738830 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Sep 12 17:39:22.738837 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Sep 12 17:39:22.738842 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Sep 12 17:39:22.738848 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Sep 12 17:39:22.738853 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Sep 12 17:39:22.738858 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Sep 12 17:39:22.738864 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Sep 12 17:39:22.738869 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Sep 12 17:39:22.738874 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Sep 12 17:39:22.738884 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Sep 12 17:39:22.738892 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Sep 12 17:39:22.738899 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Sep 12 17:39:22.738905 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Sep 12 17:39:22.738910 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Sep 12 17:39:22.738915 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Sep 12 17:39:22.738921 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Sep 12 17:39:22.738926 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Sep 12 17:39:22.738932 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Sep 12 17:39:22.738937 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Sep 12 17:39:22.738942 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Sep 12 17:39:22.738948 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Sep 12 17:39:22.738954 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Sep 12 17:39:22.738960 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Sep 12 17:39:22.738965 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Sep 12 17:39:22.738971 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Sep 12 17:39:22.738976 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Sep 12 17:39:22.738981 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Sep 12 17:39:22.738987 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Sep 12 17:39:22.738992 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Sep 12 17:39:22.738997 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Sep 12 17:39:22.739004 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Sep 12 17:39:22.739009 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Sep 12 17:39:22.739014 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Sep 12 17:39:22.739020 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Sep 12 17:39:22.739025 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Sep 12 17:39:22.739030 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Sep 12 17:39:22.739036 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Sep 12 17:39:22.739041 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Sep 12 17:39:22.739047 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Sep 12 17:39:22.739052 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Sep 12 17:39:22.739059 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Sep 12 17:39:22.739064 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Sep 12 17:39:22.739069 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Sep 12 17:39:22.739074 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Sep 12 17:39:22.739080 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Sep 12 17:39:22.739085 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Sep 12 17:39:22.739090 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Sep 12 17:39:22.739095 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Sep 12 17:39:22.739101 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Sep 12 17:39:22.739106 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Sep 12 17:39:22.739112 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Sep 12 17:39:22.739118 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Sep 12 17:39:22.739123 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Sep 12 17:39:22.739128 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Sep 12 17:39:22.739134 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Sep 12 17:39:22.739139 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Sep 12 17:39:22.739144 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Sep 12 17:39:22.739149 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Sep 12 17:39:22.739155 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Sep 12 17:39:22.739161 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Sep 12 17:39:22.739167 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Sep 12 17:39:22.739172 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Sep 12 17:39:22.739177 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Sep 12 17:39:22.739182 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Sep 12 17:39:22.739188 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Sep 12 17:39:22.739193 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Sep 12 17:39:22.739198 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Sep 12 17:39:22.739204 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Sep 12 17:39:22.739209 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Sep 12 17:39:22.739215 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Sep 12 17:39:22.739221 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Sep 12 17:39:22.739226 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Sep 12 17:39:22.739231 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Sep 12 17:39:22.739237 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Sep 12 17:39:22.739242 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Sep 12 17:39:22.739247 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Sep 12 17:39:22.739253 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Sep 12 17:39:22.739264 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Sep 12 17:39:22.739278 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Sep 12 17:39:22.739287 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Sep 12 17:39:22.739292 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Sep 12 17:39:22.739298 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Sep 12 17:39:22.739303 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Sep 12 17:39:22.739308 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Sep 12 17:39:22.739314 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Sep 12 17:39:22.739319 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Sep 12 17:39:22.739324 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Sep 12 17:39:22.739330 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Sep 12 17:39:22.739336 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Sep 12 17:39:22.739342 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Sep 12 17:39:22.739347 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Sep 12 17:39:22.739352 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Sep 12 17:39:22.739358 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Sep 12 17:39:22.739363 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Sep 12 17:39:22.739368 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:39:22.739374 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Sep 12 17:39:22.739379 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:39:22.739385 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Sep 12 17:39:22.739391 kernel: TSC deadline timer available
Sep 12 17:39:22.739397 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Sep 12 17:39:22.739402 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Sep 12 17:39:22.739408 kernel: Booting paravirtualized kernel on VMware hypervisor
Sep 12 17:39:22.739414 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:39:22.739419 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Sep 12 17:39:22.739425 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144
Sep 12 17:39:22.739430 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152
Sep 12 17:39:22.739435 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Sep 12 17:39:22.739442 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Sep 12 17:39:22.739448 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Sep 12 17:39:22.739453 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Sep 12 17:39:22.739458 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Sep 12 17:39:22.739471 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Sep 12 17:39:22.739477 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Sep 12 17:39:22.739483 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Sep 12 17:39:22.739489 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Sep 12 17:39:22.739494 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Sep 12 17:39:22.739501 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Sep 12 17:39:22.739506 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Sep 12 17:39:22.739512 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Sep 12 17:39:22.739518 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Sep 12 17:39:22.739523 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Sep 12 17:39:22.739529 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Sep 12 17:39:22.739535 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:39:22.739542 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
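[Editor's note] The SRAT entries above map the even APIC IDs 0x00 through 0xfe to node 0, and the pcpu-alloc lines enumerate CPUs 000 through 127. A quick illustrative cross-check (not part of the log) that those two counts agree with the "smpboot: Allowing 128 CPUs" message:

```python
# Even APIC IDs 0x00..0xfe, as enumerated by the SRAT messages in this log.
apic_ids = list(range(0x00, 0xff, 2))

print(len(apic_ids))   # 128, matching "smpboot: Allowing 128 CPUs"
print(hex(max(apic_ids)))  # 0xfe, the last SRAT entry
```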
Sep 12 17:39:22.739548 kernel: random: crng init done
Sep 12 17:39:22.739554 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Sep 12 17:39:22.739560 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Sep 12 17:39:22.739565 kernel: printk: log_buf_len min size: 262144 bytes
Sep 12 17:39:22.739571 kernel: printk: log_buf_len: 1048576 bytes
Sep 12 17:39:22.739577 kernel: printk: early log buf free: 239648(91%)
Sep 12 17:39:22.739583 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:39:22.739588 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:39:22.739595 kernel: Fallback order for Node 0: 0
Sep 12 17:39:22.739601 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Sep 12 17:39:22.739607 kernel: Policy zone: DMA32
Sep 12 17:39:22.739613 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:39:22.739619 kernel: Memory: 1936396K/2096628K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 159972K reserved, 0K cma-reserved)
Sep 12 17:39:22.739625 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Sep 12 17:39:22.739632 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:39:22.739638 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:39:22.739644 kernel: Dynamic Preempt: voluntary
Sep 12 17:39:22.739650 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:39:22.739656 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:39:22.739662 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Sep 12 17:39:22.739667 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:39:22.739673 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:39:22.739679 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:39:22.739685 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
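[Editor's note] The "Memory: 1936396K/2096628K" total above can be cross-checked against the BIOS-e820 map at the top of the log: summing the three "usable" ranges gives about 2,096,635 KiB, and the kernel's 2,096,628 K figure follows once it reserves a few low pages (the e820 update/remove messages). A minimal sketch, assuming lines shaped like the BIOS-e820 messages in this log:

```python
import re

# The three "usable" ranges from the BIOS-e820 map earlier in this log.
E820 = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
"""

def usable_bytes(log: str) -> int:
    # e820 ranges are inclusive, so each size is end - start + 1.
    pat = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable")
    return sum(int(end, 16) - int(start, 16) + 1
               for start, end in pat.findall(log))

print(usable_bytes(E820) // 1024, "KiB")  # 2096635 KiB, vs the 2096628K reported
```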
Sep 12 17:39:22.739692 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Sep 12 17:39:22.739698 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Sep 12 17:39:22.739703 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Sep 12 17:39:22.739709 kernel: Console: colour VGA+ 80x25
Sep 12 17:39:22.739715 kernel: printk: console [tty0] enabled
Sep 12 17:39:22.739721 kernel: printk: console [ttyS0] enabled
Sep 12 17:39:22.739727 kernel: ACPI: Core revision 20230628
Sep 12 17:39:22.739732 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Sep 12 17:39:22.739738 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:39:22.739745 kernel: x2apic enabled
Sep 12 17:39:22.739751 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:39:22.739757 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:39:22.739764 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Sep 12 17:39:22.739770 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Sep 12 17:39:22.739776 kernel: Disabled fast string operations
Sep 12 17:39:22.739782 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 12 17:39:22.739788 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 12 17:39:22.739793 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:39:22.739800 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 12 17:39:22.739806 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 12 17:39:22.739812 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 12 17:39:22.739818 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 12 17:39:22.739824 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 12 17:39:22.739829 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:39:22.739835 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:39:22.739841 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:39:22.739847 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 12 17:39:22.739854 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 12 17:39:22.739860 kernel: active return thunk: its_return_thunk
Sep 12 17:39:22.739866 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:39:22.739872 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:39:22.739877 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:39:22.739888 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:39:22.739894 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:39:22.739900 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 17:39:22.739906 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:39:22.739913 kernel: pid_max: default: 131072 minimum: 1024
Sep 12 17:39:22.739919 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:39:22.739925 kernel: landlock: Up and running.
Sep 12 17:39:22.739930 kernel: SELinux:  Initializing.
Sep 12 17:39:22.739936 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:39:22.739942 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:39:22.739948 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 12 17:39:22.739954 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 12 17:39:22.739960 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 12 17:39:22.739967 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 12 17:39:22.739973 kernel: Performance Events: Skylake events, core PMU driver.
Sep 12 17:39:22.739978 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Sep 12 17:39:22.739984 kernel: core: CPUID marked event: 'instructions' unavailable
Sep 12 17:39:22.739990 kernel: core: CPUID marked event: 'bus cycles' unavailable
Sep 12 17:39:22.739995 kernel: core: CPUID marked event: 'cache references' unavailable
Sep 12 17:39:22.740001 kernel: core: CPUID marked event: 'cache misses' unavailable
Sep 12 17:39:22.740006 kernel: core: CPUID marked event: 'branch instructions' unavailable
Sep 12 17:39:22.740013 kernel: core: CPUID marked event: 'branch misses' unavailable
Sep 12 17:39:22.740019 kernel: ... version: 1
Sep 12 17:39:22.740024 kernel: ... bit width: 48
Sep 12 17:39:22.740030 kernel: ... generic registers: 4
Sep 12 17:39:22.740036 kernel: ... value mask: 0000ffffffffffff
Sep 12 17:39:22.740042 kernel: ... max period: 000000007fffffff
Sep 12 17:39:22.740047 kernel: ... fixed-purpose events: 0
Sep 12 17:39:22.740053 kernel: ... event mask: 000000000000000f
Sep 12 17:39:22.740059 kernel: signal: max sigframe size: 1776
Sep 12 17:39:22.740066 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:39:22.740072 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:39:22.740077 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:39:22.740083 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:39:22.740089 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:39:22.740094 kernel: .... node #0, CPUs: #1
Sep 12 17:39:22.740100 kernel: Disabled fast string operations
Sep 12 17:39:22.740106 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
Sep 12 17:39:22.740112 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Sep 12 17:39:22.740118 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:39:22.740125 kernel: smpboot: Max logical packages: 128
Sep 12 17:39:22.740131 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Sep 12 17:39:22.740137 kernel: devtmpfs: initialized
Sep 12 17:39:22.740142 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:39:22.740148 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Sep 12 17:39:22.740154 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:39:22.740160 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Sep 12 17:39:22.740166 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:39:22.740172 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:39:22.740179 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:39:22.740184 kernel: audit: type=2000 audit(1757698761.091:1): state=initialized audit_enabled=0 res=1
Sep 12 17:39:22.740190 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:39:22.740196 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:39:22.740202 kernel: cpuidle: using governor menu
Sep 12 17:39:22.740208 kernel: Simple Boot Flag at 0x36 set to 0x80
Sep 12 17:39:22.740213 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:39:22.740219 kernel: dca service started, version 1.12.1
Sep 12 17:39:22.740225 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Sep 12 17:39:22.740232 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:39:22.740238 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:39:22.740244 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:39:22.740250 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:39:22.740256 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:39:22.740284 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:39:22.740291 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:39:22.740297 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:39:22.740302 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:39:22.740311 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:39:22.740316 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Sep 12 17:39:22.740322 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:39:22.740328 kernel: ACPI: Interpreter enabled
Sep 12 17:39:22.740333 kernel: ACPI: PM: (supports S0 S1 S5)
Sep 12 17:39:22.740339 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:39:22.740345 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:39:22.740351 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:39:22.740357 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Sep 12 17:39:22.740364 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Sep 12 17:39:22.740449 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:39:22.740515 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Sep 12 17:39:22.740565 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Sep 12 17:39:22.740574 kernel: PCI host bridge to bus 0000:00
Sep 12 17:39:22.742500 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:39:22.742558 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Sep 12 17:39:22.742606 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 17:39:22.742651 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:39:22.742696 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Sep 12 17:39:22.742742 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Sep 12 17:39:22.742802 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
Sep 12 17:39:22.742859 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
Sep 12 17:39:22.742927 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
Sep 12 17:39:22.742984 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
Sep 12 17:39:22.743036 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f]
Sep 12 17:39:22.743087 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 12 17:39:22.743138 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 12 17:39:22.743189 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 12 17:39:22.743242 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 12 17:39:22.743308 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
Sep 12 17:39:22.743359 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Sep 12 17:39:22.743410 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Sep 12 17:39:22.743464 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
Sep 12 17:39:22.743516 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf]
Sep 12 17:39:22.743569 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
Sep 12 17:39:22.743624 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
Sep 12 17:39:22.743675 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f]
Sep 12 17:39:22.743725 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
Sep 12 17:39:22.743774 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
Sep 12 17:39:22.743827 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Sep 12 17:39:22.743877 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:39:22.743931 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Sep 12 17:39:22.743990 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.744042 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.744098 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.744149 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.744206 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.746312 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.746392 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.746449 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.746506 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.746559 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.746614 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.746665 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.746722 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.746784 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.746839 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.746892 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.746946 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.746998 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.747057 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.747109 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.747163 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.747214 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.747287 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.747345 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.747399 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.747450 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.747504 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.747555 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.747610 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.747661 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.747718 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.747769 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.747823 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.747873 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.747927 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.747978 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.748035 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.748086 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.748143 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.748193 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.748248 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.750334 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.750399 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.750453 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.750508 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.750561 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.750641 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.750705 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.750764 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.750815 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.750871 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.750923 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.750977 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.751028 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.751082 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.751136 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.751194 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.751245 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.753331 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.753391 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.753449 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.753506 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.753562 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
Sep 12 17:39:22.753614 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.753669 kernel: pci_bus 0000:01: extended config space not accessible
Sep 12 17:39:22.753723 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 12 17:39:22.753775 kernel: pci_bus 0000:02: extended config space not accessible
Sep 12 17:39:22.753787 kernel: acpiphp: Slot [32] registered
Sep 12 17:39:22.753793 kernel: acpiphp: Slot [33] registered
Sep 12 17:39:22.753800 kernel: acpiphp: Slot [34] registered
Sep 12 17:39:22.753805 kernel: acpiphp: Slot [35] registered
Sep 12 17:39:22.753811 kernel: acpiphp: Slot [36] registered
Sep 12 17:39:22.753817 kernel: acpiphp: Slot [37] registered
Sep 12 17:39:22.753823 kernel: acpiphp: Slot [38] registered
Sep 12 17:39:22.753829 kernel: acpiphp: Slot [39] registered
Sep 12 17:39:22.753835 kernel: acpiphp: Slot [40] registered
Sep 12 17:39:22.753842 kernel: acpiphp: Slot [41] registered
Sep 12 17:39:22.753848 kernel: acpiphp: Slot [42] registered
Sep 12 17:39:22.753853 kernel: acpiphp: Slot [43] registered
Sep 12 17:39:22.753859 kernel: acpiphp: Slot [44] registered
Sep 12 17:39:22.753865 kernel: acpiphp: Slot [45] registered
Sep 12 17:39:22.753871 kernel: acpiphp: Slot [46] registered
Sep 12 17:39:22.753877 kernel: acpiphp: Slot [47] registered
Sep 12 17:39:22.753883 kernel: acpiphp: Slot [48] registered
Sep 12 17:39:22.753888 kernel: acpiphp: Slot [49] registered
Sep 12 17:39:22.753894 kernel: acpiphp: Slot [50] registered
Sep 12 17:39:22.753901 kernel: acpiphp: Slot [51] registered
Sep 12 17:39:22.753907 kernel: acpiphp: Slot [52] registered
Sep 12 17:39:22.753912 kernel: acpiphp: Slot [53] registered
Sep 12 17:39:22.753918 kernel: acpiphp: Slot [54] registered
Sep 12 17:39:22.753924 kernel: acpiphp: Slot [55] registered
Sep 12 17:39:22.753930 kernel: acpiphp: Slot [56] registered
Sep 12 17:39:22.753935 kernel: acpiphp: Slot [57] registered
Sep 12 17:39:22.753941 kernel: acpiphp: Slot [58] registered
Sep 12 17:39:22.753947 kernel: acpiphp: Slot [59] registered
Sep 12 17:39:22.753953 kernel: acpiphp: Slot [60] registered
Sep 12 17:39:22.753959 kernel: acpiphp: Slot [61] registered
Sep 12 17:39:22.753965 kernel: acpiphp: Slot [62] registered
Sep 12 17:39:22.753971 kernel: acpiphp: Slot [63] registered
Sep 12 17:39:22.754022 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Sep 12 17:39:22.754072 kernel: pci 0000:00:11.0:   bridge window [io 0x2000-0x3fff]
Sep 12 17:39:22.754123 kernel: pci 0000:00:11.0:   bridge window [mem 0xfd600000-0xfdffffff]
Sep 12 17:39:22.754174 kernel: pci 0000:00:11.0:   bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Sep 12 17:39:22.754223 kernel: pci 0000:00:11.0:   bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Sep 12 17:39:22.756346 kernel: pci 0000:00:11.0:   bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
Sep 12 17:39:22.756401 kernel: pci 0000:00:11.0:   bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Sep 12 17:39:22.756451 kernel: pci 0000:00:11.0:   bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Sep 12 17:39:22.756501 kernel: pci 0000:00:11.0:   bridge window [io 0x0d00-0xfeff window] (subtractive decode)
Sep 12 17:39:22.756558 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
Sep 12 17:39:22.756610 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
Sep 12 17:39:22.756662 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
Sep 12 17:39:22.756717 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Sep 12 17:39:22.756768 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Sep 12 17:39:22.756819 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device.  You can enable it with 'pcie_aspm=force'
Sep 12 17:39:22.756871 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Sep 12 17:39:22.756922 kernel: pci 0000:00:15.0:   bridge window [io 0x4000-0x4fff]
Sep 12 17:39:22.756972 kernel: pci 0000:00:15.0:   bridge window [mem 0xfd500000-0xfd5fffff]
Sep 12 17:39:22.757024 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Sep 12 17:39:22.757077 kernel: pci 0000:00:15.1:   bridge window [io 0x8000-0x8fff]
Sep 12 17:39:22.757127 kernel: pci 0000:00:15.1:   bridge window [mem 0xfd100000-0xfd1fffff]
Sep 12 17:39:22.757177 kernel: pci 0000:00:15.1:   bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Sep 12 17:39:22.757229 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Sep 12 17:39:22.757288 kernel: pci 0000:00:15.2:   bridge window [io 0xc000-0xcfff]
Sep 12 17:39:22.757340 kernel: pci 0000:00:15.2:   bridge window [mem 0xfcd00000-0xfcdfffff]
Sep 12 17:39:22.757390 kernel: pci 0000:00:15.2:   bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Sep 12 17:39:22.757442 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Sep 12 17:39:22.757497 kernel: pci 0000:00:15.3:   bridge window [mem 0xfc900000-0xfc9fffff]
Sep 12 17:39:22.757548 kernel: pci 0000:00:15.3:   bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Sep 12 17:39:22.757600 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Sep 12 17:39:22.757651 kernel: pci 0000:00:15.4:   bridge window [mem 0xfc500000-0xfc5fffff]
Sep 12 17:39:22.757701 kernel: pci 0000:00:15.4:   bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Sep 12 17:39:22.757756 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Sep 12 17:39:22.757807 kernel: pci 0000:00:15.5:   bridge window [mem 0xfc100000-0xfc1fffff]
Sep 12 17:39:22.757857 kernel: pci 0000:00:15.5:   bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Sep 12 17:39:22.757907 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Sep 12 17:39:22.757958 kernel: pci 0000:00:15.6:   bridge window [mem 0xfbd00000-0xfbdfffff]
Sep 12 17:39:22.758008 kernel: pci 0000:00:15.6:   bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Sep 12 17:39:22.758060 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Sep 12 17:39:22.758113 kernel: pci 0000:00:15.7:   bridge window [mem 0xfb900000-0xfb9fffff]
Sep 12 17:39:22.758163 kernel: pci 0000:00:15.7:   bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Sep 12 17:39:22.758220 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
Sep 12 17:39:22.760297 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
Sep 12 17:39:22.760359 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
Sep 12 17:39:22.760414 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
Sep 12 17:39:22.760468 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
Sep 12 17:39:22.760521 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Sep 12 17:39:22.760577 kernel: pci 0000:0b:00.0: supports D1 D2
Sep 12 17:39:22.760629 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 17:39:22.760681 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device.  You can enable it with 'pcie_aspm=force'
Sep 12 17:39:22.760733 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Sep 12 17:39:22.760784 kernel: pci 0000:00:16.0:   bridge window [io 0x5000-0x5fff]
Sep 12 17:39:22.760835 kernel: pci 0000:00:16.0:   bridge window [mem 0xfd400000-0xfd4fffff]
Sep 12 17:39:22.760887 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Sep 12 17:39:22.760942 kernel: pci 0000:00:16.1:   bridge window [io 0x9000-0x9fff]
Sep 12 17:39:22.760993 kernel: pci 0000:00:16.1:   bridge window [mem 0xfd000000-0xfd0fffff]
Sep 12 17:39:22.761042 kernel: pci 0000:00:16.1:   bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Sep 12 17:39:22.761095 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Sep 12 17:39:22.761145 kernel: pci 0000:00:16.2:   bridge window [io 0xd000-0xdfff]
Sep 12 17:39:22.761195 kernel: pci 0000:00:16.2:   bridge window [mem 0xfcc00000-0xfccfffff]
Sep 12 17:39:22.761246 kernel: pci 0000:00:16.2:   bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Sep 12 17:39:22.763078 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Sep 12 17:39:22.763140 kernel: pci 0000:00:16.3:   bridge window [mem 0xfc800000-0xfc8fffff]
Sep 12 17:39:22.763193 kernel: pci 0000:00:16.3:   bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Sep 12 17:39:22.763246 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Sep 12 17:39:22.763312 kernel: pci 0000:00:16.4:   bridge window [mem 0xfc400000-0xfc4fffff]
Sep 12 17:39:22.763364 kernel: pci 0000:00:16.4:   bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Sep 12 17:39:22.763416 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Sep 12 17:39:22.763467 kernel: pci 0000:00:16.5:   bridge window [mem 0xfc000000-0xfc0fffff]
Sep 12 17:39:22.763516 kernel: pci 0000:00:16.5:   bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Sep 12 17:39:22.763571 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Sep 12 17:39:22.763622 kernel: pci 0000:00:16.6:   bridge window [mem 0xfbc00000-0xfbcfffff]
Sep 12 17:39:22.763671 kernel: pci 0000:00:16.6:   bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Sep 12 17:39:22.763724 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Sep 12 17:39:22.763775 kernel: pci 0000:00:16.7:   bridge window [mem 0xfb800000-0xfb8fffff]
Sep 12 17:39:22.763853 kernel: pci 0000:00:16.7:   bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Sep 12 17:39:22.763910 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Sep 12 17:39:22.763963 kernel: pci 0000:00:17.0:   bridge window [io 0x6000-0x6fff]
Sep 12 17:39:22.764017 kernel: pci 0000:00:17.0:   bridge window [mem 0xfd300000-0xfd3fffff]
Sep 12 17:39:22.764068 kernel: pci 0000:00:17.0:   bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Sep 12 17:39:22.764121 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Sep 12 17:39:22.764172 kernel: pci 0000:00:17.1:   bridge window [io 0xa000-0xafff]
Sep 12 17:39:22.764222 kernel: pci 0000:00:17.1:   bridge window [mem 0xfcf00000-0xfcffffff]
Sep 12 17:39:22.764300 kernel: pci 0000:00:17.1:   bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Sep 12 17:39:22.764355 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Sep 12 17:39:22.764409 kernel: pci 0000:00:17.2:   bridge window [io 0xe000-0xefff]
Sep 12 17:39:22.764459 kernel: pci 0000:00:17.2:   bridge window [mem 0xfcb00000-0xfcbfffff]
Sep 12 17:39:22.764509 kernel: pci 0000:00:17.2:   bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Sep 12 17:39:22.764561 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Sep 12 17:39:22.764610 kernel: pci 0000:00:17.3:   bridge window [mem 0xfc700000-0xfc7fffff]
Sep 12 17:39:22.764661 kernel: pci 0000:00:17.3:   bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Sep 12 17:39:22.764712 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Sep 12 17:39:22.764763 kernel: pci 0000:00:17.4:   bridge window [mem 0xfc300000-0xfc3fffff]
Sep 12 17:39:22.764816 kernel: pci 0000:00:17.4:   bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Sep 12 17:39:22.764868 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Sep 12 17:39:22.764924 kernel: pci 0000:00:17.5:   bridge window [mem 0xfbf00000-0xfbffffff]
Sep 12 17:39:22.764974 kernel: pci 0000:00:17.5:   bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Sep 12 17:39:22.765027 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Sep 12 17:39:22.765077 kernel: pci 0000:00:17.6:   bridge window [mem 0xfbb00000-0xfbbfffff]
Sep 12 17:39:22.765127 kernel: pci 0000:00:17.6:   bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Sep 12 17:39:22.765180 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Sep 12 17:39:22.765234 kernel: pci 0000:00:17.7:   bridge window [mem 0xfb700000-0xfb7fffff]
Sep 12 17:39:22.765291 kernel: pci 0000:00:17.7:   bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Sep 12 17:39:22.765344 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Sep 12 17:39:22.765394 kernel: pci 0000:00:18.0:   bridge window [io 0x7000-0x7fff]
Sep 12 17:39:22.765445 kernel: pci 0000:00:18.0:   bridge window [mem 0xfd200000-0xfd2fffff]
Sep 12 17:39:22.765497 kernel: pci 0000:00:18.0:   bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Sep 12 17:39:22.765549 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Sep 12 17:39:22.765600 kernel: pci 0000:00:18.1:   bridge window [io 0xb000-0xbfff]
Sep 12 17:39:22.765654 kernel: pci 0000:00:18.1:   bridge window [mem 0xfce00000-0xfcefffff]
Sep 12 17:39:22.765704 kernel: pci 0000:00:18.1:   bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Sep 12 17:39:22.765757 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Sep 12 17:39:22.765807 kernel: pci 0000:00:18.2:   bridge window [mem 0xfca00000-0xfcafffff]
Sep 12 17:39:22.765857 kernel: pci 0000:00:18.2:   bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Sep 12 17:39:22.765908 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Sep 12 17:39:22.765959 kernel: pci 0000:00:18.3:   bridge window [mem 0xfc600000-0xfc6fffff]
Sep 12 17:39:22.766010 kernel: pci 0000:00:18.3:   bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Sep 12 17:39:22.766065 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Sep 12 17:39:22.766116 kernel: pci 0000:00:18.4:   bridge window [mem 0xfc200000-0xfc2fffff]
Sep 12 17:39:22.766166 kernel: pci 0000:00:18.4:   bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Sep 12 17:39:22.766217 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Sep 12 17:39:22.766275 kernel: pci 0000:00:18.5:   bridge window [mem 0xfbe00000-0xfbefffff]
Sep 12 17:39:22.766326 kernel: pci 0000:00:18.5:   bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Sep 12 17:39:22.766378 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Sep 12 17:39:22.766429 kernel: pci 0000:00:18.6:   bridge window [mem 0xfba00000-0xfbafffff]
Sep 12 17:39:22.766484 kernel: pci 0000:00:18.6:   bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Sep 12 17:39:22.766535 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Sep 12 17:39:22.766587 kernel: pci 0000:00:18.7:   bridge window [mem 0xfb600000-0xfb6fffff]
Sep 12 17:39:22.766637 kernel: pci 0000:00:18.7:   bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Sep 12 17:39:22.766646 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Sep 12 17:39:22.766652 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Sep 12 17:39:22.766658 kernel: ACPI: PCI: Interrupt link LNKB disabled
Sep 12 17:39:22.766665 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:39:22.766673 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Sep 12 17:39:22.766679 kernel: iommu: Default domain type: Translated
Sep 12 17:39:22.766684 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:39:22.766690 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:39:22.766696 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:39:22.766702 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Sep 12 17:39:22.766708 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Sep 12 17:39:22.766758 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Sep 12 17:39:22.766809 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Sep 12 17:39:22.766863 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:39:22.766872 kernel: vgaarb: loaded
Sep 12 17:39:22.766878 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Sep 12 17:39:22.766884 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Sep 12 17:39:22.766890 kernel: clocksource: Switched to clocksource tsc-early
Sep 12 17:39:22.766895 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:39:22.766901 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:39:22.766907 kernel: pnp: PnP ACPI init
Sep 12 17:39:22.766963 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Sep 12 17:39:22.767013 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Sep 12 17:39:22.767060 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Sep 12 17:39:22.767110 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Sep 12 17:39:22.767160 kernel: pnp 00:06: [dma 2]
Sep 12 17:39:22.767210 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Sep 12 17:39:22.767257 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Sep 12 17:39:22.767320 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Sep 12 17:39:22.767328 kernel: pnp: PnP ACPI: found 8 devices
Sep 12 17:39:22.767334 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:39:22.767340 kernel: NET: Registered PF_INET protocol family
Sep 12 17:39:22.767346 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:39:22.767353 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 17:39:22.767358 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:39:22.767365 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:39:22.767373 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 17:39:22.767379 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 17:39:22.767384 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:39:22.767390 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:39:22.767396 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:39:22.767402 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:39:22.767454 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Sep 12 17:39:22.767506 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 12 17:39:22.767562 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 12 17:39:22.767613 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 12 17:39:22.767665 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 12 17:39:22.767716 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Sep 12 17:39:22.767768 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Sep 12 17:39:22.767819 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Sep 12 17:39:22.767874 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Sep 12 17:39:22.767930 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Sep 12 17:39:22.767982 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Sep 12 17:39:22.768048 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Sep 12 17:39:22.768100 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Sep 12 17:39:22.768152 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Sep 12 17:39:22.768208 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Sep 12 17:39:22.768298 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Sep 12 17:39:22.768353 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Sep 12 17:39:22.768404 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Sep 12 17:39:22.768455 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Sep 12 17:39:22.768505 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Sep 12 17:39:22.768560 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Sep 12 17:39:22.768611 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Sep 12 17:39:22.768662 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Sep 12 17:39:22.768712 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
Sep 12 17:39:22.768762 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
Sep 12 17:39:22.768813 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Sep 12 17:39:22.768868 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Sep 12 17:39:22.768919 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Sep 12 17:39:22.768970 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Sep 12 17:39:22.769021 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Sep 12 17:39:22.769073 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Sep 12 17:39:22.769124 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Sep 12 17:39:22.769174 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Sep
12 17:39:22.769224 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.769285 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.769336 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.769387 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.769437 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.769488 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.769539 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.769589 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.769641 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.769694 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.769756 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770036 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770136 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770240 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770316 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770372 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770426 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770483 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770536 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770589 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770642 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770694 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770747 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770801 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770853 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770909 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770962 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771015 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771067 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771120 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771173 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771225 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771384 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771435 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771490 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771541 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771591 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771641 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771690 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771739 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771788 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771838 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771890 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771943 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771995 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Sep 12 17:39:22.772045 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772095 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772145 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772196 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772246 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772310 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772362 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772415 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772466 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772517 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772567 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772617 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772667 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772717 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772767 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772817 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772878 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772937 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772987 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.773037 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.773088 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.773138 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.773188 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.773239 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.773396 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.773449 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.773503 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.773554 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.773604 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.773655 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.773705 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.773757 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 12 17:39:22.773809 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Sep 12 17:39:22.773859 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 12 17:39:22.773910 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 12 17:39:22.773960 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 12 17:39:22.774025 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Sep 12 17:39:22.774088 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 12 17:39:22.774140 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 12 17:39:22.774190 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 12 17:39:22.774240 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Sep 12 17:39:22.774311 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 12 17:39:22.774363 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 12 17:39:22.774413 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 12 17:39:22.774467 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 12 
17:39:22.774519 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 12 17:39:22.774569 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 12 17:39:22.774620 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 12 17:39:22.774670 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 12 17:39:22.774720 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 12 17:39:22.774770 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 12 17:39:22.774820 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 12 17:39:22.774871 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 12 17:39:22.774924 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 12 17:39:22.774975 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 12 17:39:22.775028 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 12 17:39:22.775079 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 12 17:39:22.775130 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 12 17:39:22.775246 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 12 17:39:22.775320 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 12 17:39:22.775372 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 12 17:39:22.775422 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 12 17:39:22.775472 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 12 17:39:22.775523 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 12 17:39:22.775577 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Sep 12 17:39:22.775629 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 12 17:39:22.775680 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 12 17:39:22.775731 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Sep 12 17:39:22.775786 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Sep 12 17:39:22.775838 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 12 17:39:22.775903 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 12 17:39:22.775955 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 12 17:39:22.776006 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 12 17:39:22.776058 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 12 17:39:22.776110 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 12 17:39:22.776161 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 12 17:39:22.776211 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 12 17:39:22.776271 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 12 17:39:22.776330 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 12 17:39:22.776381 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 12 17:39:22.776432 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 12 17:39:22.776482 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 12 17:39:22.776532 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 12 17:39:22.776583 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 12 17:39:22.776633 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 12 17:39:22.776684 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 12 17:39:22.776735 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 12 17:39:22.776788 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 12 17:39:22.776839 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 12 17:39:22.776890 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 12 17:39:22.776940 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Sep 12 17:39:22.776990 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 12 17:39:22.777041 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 12 17:39:22.777100 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 12 17:39:22.777157 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 12 17:39:22.777208 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 12 17:39:22.777265 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 12 17:39:22.777321 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 12 17:39:22.777372 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 12 17:39:22.777423 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 12 17:39:22.777474 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 12 17:39:22.777525 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 12 17:39:22.777576 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 12 17:39:22.777627 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 12 17:39:22.777693 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 12 17:39:22.777745 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 12 17:39:22.777799 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 12 17:39:22.777851 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 12 17:39:22.777902 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 12 17:39:22.777952 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 12 17:39:22.778004 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 12 17:39:22.778055 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 12 17:39:22.778106 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 12 
17:39:22.778157 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 12 17:39:22.778253 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 12 17:39:22.778316 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 12 17:39:22.778371 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 12 17:39:22.778423 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 12 17:39:22.778474 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 12 17:39:22.778525 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 12 17:39:22.778575 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 12 17:39:22.778627 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 12 17:39:22.778678 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 12 17:39:22.778730 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 12 17:39:22.778781 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 12 17:39:22.778835 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 12 17:39:22.778885 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 12 17:39:22.778936 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 12 17:39:22.778987 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 12 17:39:22.779038 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 12 17:39:22.779089 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 12 17:39:22.779140 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 12 17:39:22.779191 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 12 17:39:22.779243 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 12 17:39:22.779314 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 12 17:39:22.779370 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Sep 12 17:39:22.779423 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 12 17:39:22.779474 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 12 17:39:22.779524 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 12 17:39:22.779575 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 12 17:39:22.779627 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 12 17:39:22.779677 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 12 17:39:22.779728 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 12 17:39:22.779778 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 12 17:39:22.779832 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 12 17:39:22.779891 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Sep 12 17:39:22.779942 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 12 17:39:22.779988 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 12 17:39:22.780034 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Sep 12 17:39:22.780079 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Sep 12 17:39:22.780130 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Sep 12 17:39:22.780177 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Sep 12 17:39:22.780227 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 12 17:39:22.780370 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Sep 12 17:39:22.780417 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 12 17:39:22.780463 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 12 17:39:22.780518 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Sep 12 17:39:22.780578 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Sep 12 17:39:22.780631 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Sep 12 17:39:22.780682 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Sep 12 17:39:22.780727 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Sep 12 17:39:22.780778 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Sep 12 17:39:22.780825 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Sep 12 17:39:22.780870 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Sep 12 17:39:22.780923 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Sep 12 17:39:22.780969 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Sep 12 17:39:22.781019 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Sep 12 17:39:22.781069 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Sep 12 17:39:22.781116 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Sep 12 17:39:22.781167 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Sep 12 17:39:22.781213 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 12 17:39:22.781363 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Sep 12 17:39:22.781417 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Sep 12 17:39:22.781468 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Sep 12 17:39:22.781515 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Sep 12 17:39:22.781570 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Sep 12 17:39:22.781626 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Sep 12 17:39:22.781680 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Sep 12 17:39:22.781730 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Sep 12 17:39:22.781777 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Sep 12 17:39:22.781828 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Sep 12 17:39:22.781875 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Sep 12 17:39:22.781922 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Sep 12 17:39:22.781977 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Sep 12 17:39:22.782028 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Sep 12 17:39:22.782078 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Sep 12 17:39:22.782129 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Sep 12 17:39:22.782177 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 12 17:39:22.782227 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Sep 12 17:39:22.783749 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 12 17:39:22.783813 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Sep 12 17:39:22.783868 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Sep 12 17:39:22.783919 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Sep 12 17:39:22.783967 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Sep 12 17:39:22.784018 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Sep 12 17:39:22.784065 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 12 17:39:22.784116 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Sep 12 17:39:22.784168 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Sep 12 17:39:22.784215 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 12 17:39:22.784303 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Sep 12 17:39:22.784355 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Sep 12 17:39:22.784402 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Sep 12 17:39:22.784456 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Sep 12 17:39:22.784503 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Sep 12 17:39:22.784553 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Sep 12 17:39:22.784603 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Sep 12 17:39:22.784651 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 12 17:39:22.784702 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Sep 12 17:39:22.784749 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 12 17:39:22.784800 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Sep 12 17:39:22.784852 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Sep 12 17:39:22.784904 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Sep 12 17:39:22.784952 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Sep 12 17:39:22.785003 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Sep 12 17:39:22.785050 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 12 17:39:22.785104 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Sep 12 17:39:22.785155 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Sep 12 17:39:22.785202 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Sep 12 17:39:22.785253 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Sep 12 17:39:22.785574 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Sep 12 17:39:22.785624 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Sep 12 17:39:22.785680 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Sep 12 17:39:22.785732 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Sep 12 17:39:22.785803 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Sep 12 17:39:22.786098 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Sep 12 17:39:22.786156 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Sep 12 17:39:22.786205 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Sep 12 17:39:22.786258 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Sep 12 17:39:22.786339 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Sep 12 17:39:22.786615 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Sep 12 17:39:22.786666 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Sep 12 17:39:22.786719 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Sep 12 17:39:22.786767 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 12 17:39:22.786825 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 12 17:39:22.786835 kernel: PCI: CLS 32 bytes, default 64 Sep 12 17:39:22.786844 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 12 17:39:22.786850 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 12 17:39:22.786857 kernel: clocksource: Switched to clocksource tsc Sep 12 17:39:22.786863 kernel: Initialise system trusted keyrings Sep 12 17:39:22.786870 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 12 17:39:22.786876 kernel: Key type asymmetric registered Sep 12 17:39:22.786882 kernel: Asymmetric key parser 'x509' registered Sep 12 17:39:22.786889 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 12 17:39:22.786895 kernel: io scheduler mq-deadline registered Sep 12 17:39:22.786902 kernel: io scheduler kyber registered Sep 12 17:39:22.786909 kernel: io scheduler bfq registered Sep 12 17:39:22.786964 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Sep 12 17:39:22.787029 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.787292 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Sep 12 17:39:22.787358 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.787413 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Sep 12 17:39:22.787466 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.787523 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Sep 12 17:39:22.787575 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.787628 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Sep 12 17:39:22.787679 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.787740 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Sep 12 17:39:22.787812 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.787869 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Sep 12 17:39:22.787922 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.787974 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Sep 12 17:39:22.788027 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.788079 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Sep 12 17:39:22.788133 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.788185 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Sep 12 17:39:22.788236 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.789321 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Sep 12 17:39:22.789383 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.789439 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Sep 12 17:39:22.789493 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.789552 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Sep 12 17:39:22.789607 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.789660 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Sep 12 17:39:22.789712 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.789764 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Sep 12 17:39:22.789818 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.789872 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Sep 12 17:39:22.789936 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.789990 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Sep 12 17:39:22.790042 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790095 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Sep 12 17:39:22.790151 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790204 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Sep 12 17:39:22.790257 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790335 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Sep 12 17:39:22.790387 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790440 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Sep 12 17:39:22.790494 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790548 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Sep 12 17:39:22.790600 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790653 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Sep 12 17:39:22.790705 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790759 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Sep 12 17:39:22.790815 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790868 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Sep 12 17:39:22.790919 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790972 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Sep 12 17:39:22.791024 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.791078 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Sep 12 17:39:22.791133 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.791186 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Sep 12 17:39:22.791238 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.792275 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Sep 12 17:39:22.792341 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.792401 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Sep 12 17:39:22.792454 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.792509 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Sep 12 17:39:22.792562 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.792616 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Sep 12 17:39:22.792669 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.792681 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Sep 12 17:39:22.792688 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:39:22.792696 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 17:39:22.792703 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Sep 12 17:39:22.792709 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 17:39:22.792715 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 17:39:22.792769 kernel: rtc_cmos 00:01: registered as rtc0 Sep 12 17:39:22.792821 kernel: rtc_cmos 00:01: setting system clock to 2025-09-12T17:39:22 UTC (1757698762) Sep 12 17:39:22.792869 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Sep 12 17:39:22.792878 kernel: intel_pstate: CPU model not supported Sep 12 17:39:22.792889 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 17:39:22.792895 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:39:22.792902 kernel: Segment Routing with IPv6 Sep 12 17:39:22.792908 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:39:22.792915 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:39:22.792921 kernel: Key type dns_resolver registered Sep 12 17:39:22.792929 kernel: IPI shorthand broadcast: enabled Sep 12 17:39:22.792935 kernel: sched_clock: Marking stable (912003466, 225449726)->(1197405082, -59951890) Sep 12 17:39:22.792942 kernel: registered taskstats version 1 Sep 12 17:39:22.792948 kernel: Loading compiled-in X.509 certificates Sep 12 17:39:22.792954 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9' Sep 12 17:39:22.792960 kernel: Key type .fscrypt registered Sep 12 17:39:22.792967 kernel: Key type fscrypt-provisioning registered Sep 12 17:39:22.792973 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 12 17:39:22.792980 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:39:22.792987 kernel: ima: No architecture policies found Sep 12 17:39:22.792993 kernel: clk: Disabling unused clocks Sep 12 17:39:22.792999 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 12 17:39:22.793005 kernel: Write protecting the kernel read-only data: 36864k Sep 12 17:39:22.793012 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 12 17:39:22.793018 kernel: Run /init as init process Sep 12 17:39:22.793024 kernel: with arguments: Sep 12 17:39:22.793031 kernel: /init Sep 12 17:39:22.793037 kernel: with environment: Sep 12 17:39:22.793044 kernel: HOME=/ Sep 12 17:39:22.793050 kernel: TERM=linux Sep 12 17:39:22.793056 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:39:22.793063 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:39:22.793072 systemd[1]: Detected virtualization vmware. Sep 12 17:39:22.793079 systemd[1]: Detected architecture x86-64. Sep 12 17:39:22.793085 systemd[1]: Running in initrd. Sep 12 17:39:22.793091 systemd[1]: No hostname configured, using default hostname. Sep 12 17:39:22.793099 systemd[1]: Hostname set to . Sep 12 17:39:22.793106 systemd[1]: Initializing machine ID from random generator. Sep 12 17:39:22.793112 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:39:22.793119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:39:22.793125 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 12 17:39:22.793132 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:39:22.793139 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:39:22.793146 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:39:22.793153 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:39:22.793161 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:39:22.793168 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:39:22.793369 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:39:22.793376 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:39:22.793383 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:39:22.793392 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:39:22.793399 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:39:22.793405 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:39:22.793412 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:39:22.793418 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:39:22.793425 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:39:22.793432 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:39:22.793438 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:39:22.793445 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:39:22.793453 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 12 17:39:22.793460 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:39:22.795227 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:39:22.795238 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:39:22.795245 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:39:22.795252 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:39:22.795529 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:39:22.795540 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:39:22.795547 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:39:22.795557 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:39:22.795578 systemd-journald[216]: Collecting audit messages is disabled. Sep 12 17:39:22.795595 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:39:22.795602 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:39:22.795611 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:39:22.795617 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:39:22.795624 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:39:22.795631 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:39:22.795639 kernel: Bridge firewalling registered Sep 12 17:39:22.795645 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:39:22.795653 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 12 17:39:22.795659 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:39:22.795666 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:39:22.795672 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:39:22.795679 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:39:22.795687 systemd-journald[216]: Journal started Sep 12 17:39:22.795702 systemd-journald[216]: Runtime Journal (/run/log/journal/b1386d98615146c98521a55ecaa87327) is 4.8M, max 38.7M, 33.8M free. Sep 12 17:39:22.751634 systemd-modules-load[217]: Inserted module 'overlay' Sep 12 17:39:22.777428 systemd-modules-load[217]: Inserted module 'br_netfilter' Sep 12 17:39:22.797945 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:39:22.798254 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:39:22.802354 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:39:22.803365 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:39:22.808886 dracut-cmdline[245]: dracut-dracut-053 Sep 12 17:39:22.812137 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:39:22.813700 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:39:22.814745 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 12 17:39:22.841996 systemd-resolved[263]: Positive Trust Anchors: Sep 12 17:39:22.842007 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:39:22.842029 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:39:22.843957 systemd-resolved[263]: Defaulting to hostname 'linux'. Sep 12 17:39:22.844814 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:39:22.844973 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:39:22.864287 kernel: SCSI subsystem initialized Sep 12 17:39:22.871274 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:39:22.878271 kernel: iscsi: registered transport (tcp) Sep 12 17:39:22.893272 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:39:22.893301 kernel: QLogic iSCSI HBA Driver Sep 12 17:39:22.913448 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:39:22.918431 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:39:22.933313 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 12 17:39:22.933356 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:39:22.934462 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:39:22.966282 kernel: raid6: avx2x4 gen() 51225 MB/s Sep 12 17:39:22.983281 kernel: raid6: avx2x2 gen() 52867 MB/s Sep 12 17:39:23.000470 kernel: raid6: avx2x1 gen() 44727 MB/s Sep 12 17:39:23.000511 kernel: raid6: using algorithm avx2x2 gen() 52867 MB/s Sep 12 17:39:23.018472 kernel: raid6: .... xor() 31125 MB/s, rmw enabled Sep 12 17:39:23.018517 kernel: raid6: using avx2x2 recovery algorithm Sep 12 17:39:23.032280 kernel: xor: automatically using best checksumming function avx Sep 12 17:39:23.131594 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:39:23.136144 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:39:23.141348 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:39:23.148882 systemd-udevd[433]: Using default interface naming scheme 'v255'. Sep 12 17:39:23.151448 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:39:23.160369 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:39:23.167243 dracut-pre-trigger[438]: rd.md=0: removing MD RAID activation Sep 12 17:39:23.183913 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:39:23.187445 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:39:23.257744 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:39:23.262376 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:39:23.271735 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:39:23.272576 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 12 17:39:23.273297 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:39:23.273584 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:39:23.279387 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:39:23.286472 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:39:23.332280 kernel: VMware PVSCSI driver - version 1.0.7.0-k Sep 12 17:39:23.337381 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Sep 12 17:39:23.342286 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Sep 12 17:39:23.342446 kernel: vmw_pvscsi: using 64bit dma Sep 12 17:39:23.343582 kernel: vmw_pvscsi: max_id: 16 Sep 12 17:39:23.343606 kernel: vmw_pvscsi: setting ring_pages to 8 Sep 12 17:39:23.348286 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Sep 12 17:39:23.358284 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:39:23.358315 kernel: vmw_pvscsi: enabling reqCallThreshold Sep 12 17:39:23.358324 kernel: vmw_pvscsi: driver-based request coalescing enabled Sep 12 17:39:23.358332 kernel: vmw_pvscsi: using MSI-X Sep 12 17:39:23.363276 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Sep 12 17:39:23.365234 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Sep 12 17:39:23.365350 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Sep 12 17:39:23.367278 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Sep 12 17:39:23.370472 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:39:23.370552 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:39:23.370780 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:39:23.370881 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 12 17:39:23.370954 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:39:23.371069 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:39:23.379130 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:39:23.382345 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 17:39:23.383364 kernel: libata version 3.00 loaded. Sep 12 17:39:23.384308 kernel: ata_piix 0000:00:07.1: version 2.13 Sep 12 17:39:23.385321 kernel: scsi host1: ata_piix Sep 12 17:39:23.387329 kernel: AES CTR mode by8 optimization enabled Sep 12 17:39:23.392280 kernel: scsi host2: ata_piix Sep 12 17:39:23.393293 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Sep 12 17:39:23.393385 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 12 17:39:23.393553 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Sep 12 17:39:23.393629 kernel: sd 0:0:0:0: [sda] Cache data unavailable Sep 12 17:39:23.393693 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Sep 12 17:39:23.396310 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Sep 12 17:39:23.396327 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Sep 12 17:39:23.403103 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:39:23.406372 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:39:23.413949 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 17:39:23.457797 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:39:23.457837 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 12 17:39:23.570288 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Sep 12 17:39:23.576303 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Sep 12 17:39:23.604293 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Sep 12 17:39:23.604464 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 17:39:23.617411 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 17:39:23.655278 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (481) Sep 12 17:39:23.657885 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Sep 12 17:39:23.661406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Sep 12 17:39:23.665866 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Sep 12 17:39:23.670273 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (488) Sep 12 17:39:23.676392 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Sep 12 17:39:23.676736 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Sep 12 17:39:23.680405 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:39:23.712277 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:39:23.718308 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:39:23.723277 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:39:24.726295 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:39:24.726365 disk-uuid[589]: The operation has completed successfully. Sep 12 17:39:24.793859 systemd[1]: disk-uuid.service: Deactivated successfully. 
Sep 12 17:39:24.793918 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:39:24.796344 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:39:24.801888 sh[609]: Success Sep 12 17:39:24.811301 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 12 17:39:24.881488 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:39:24.882314 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:39:24.882623 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:39:24.899575 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19 Sep 12 17:39:24.899611 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:39:24.899621 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:39:24.900681 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:39:24.901475 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:39:24.910285 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 17:39:24.913081 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:39:24.923337 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Sep 12 17:39:24.924538 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:39:24.941482 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:39:24.941518 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:39:24.941531 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:39:24.946301 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:39:24.952580 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Sep 12 17:39:24.953886 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:39:24.958002 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:39:24.961358 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:39:25.000491 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Sep 12 17:39:25.008614 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:39:25.040782 ignition[668]: Ignition 2.19.0 Sep 12 17:39:25.041218 ignition[668]: Stage: fetch-offline Sep 12 17:39:25.041245 ignition[668]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:39:25.041252 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 17:39:25.041541 ignition[668]: parsed url from cmdline: "" Sep 12 17:39:25.041544 ignition[668]: no config URL provided Sep 12 17:39:25.041547 ignition[668]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:39:25.041552 ignition[668]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:39:25.041946 ignition[668]: config successfully fetched Sep 12 17:39:25.041965 ignition[668]: parsing config with SHA512: c9d82a9c54bac3b163667ad229dd43ae7adc35eacedf588328a94a585299761a2c27b7df982233a494539bdc478abbd10a2c7711a99164f45bfb3ceafdef6614 Sep 12 17:39:25.045409 unknown[668]: fetched base config from "system" Sep 12 17:39:25.045668 ignition[668]: fetch-offline: fetch-offline passed Sep 12 17:39:25.045416 unknown[668]: fetched user config from "vmware" Sep 12 17:39:25.045706 ignition[668]: Ignition finished successfully Sep 12 17:39:25.047446 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:39:25.074944 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 12 17:39:25.079369 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:39:25.091525 systemd-networkd[801]: lo: Link UP Sep 12 17:39:25.091786 systemd-networkd[801]: lo: Gained carrier Sep 12 17:39:25.092662 systemd-networkd[801]: Enumeration completed Sep 12 17:39:25.092882 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:39:25.093052 systemd[1]: Reached target network.target - Network. Sep 12 17:39:25.093181 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 17:39:25.093596 systemd-networkd[801]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Sep 12 17:39:25.097422 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 12 17:39:25.097532 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 12 17:39:25.097194 systemd-networkd[801]: ens192: Link UP Sep 12 17:39:25.097196 systemd-networkd[801]: ens192: Gained carrier Sep 12 17:39:25.103185 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:39:25.111534 ignition[803]: Ignition 2.19.0 Sep 12 17:39:25.111541 ignition[803]: Stage: kargs Sep 12 17:39:25.111671 ignition[803]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:39:25.111678 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 17:39:25.112376 ignition[803]: kargs: kargs passed Sep 12 17:39:25.112408 ignition[803]: Ignition finished successfully Sep 12 17:39:25.113751 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:39:25.122413 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 12 17:39:25.129781 ignition[810]: Ignition 2.19.0 Sep 12 17:39:25.130034 ignition[810]: Stage: disks Sep 12 17:39:25.130135 ignition[810]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:39:25.130142 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 17:39:25.130764 ignition[810]: disks: disks passed Sep 12 17:39:25.130792 ignition[810]: Ignition finished successfully Sep 12 17:39:25.132122 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:39:25.132479 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:39:25.132732 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:39:25.132994 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:39:25.133219 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:39:25.133457 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:39:25.137374 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:39:25.150109 systemd-fsck[819]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 12 17:39:25.151747 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:39:25.155319 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:39:25.213278 kernel: EXT4-fs (sda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none. Sep 12 17:39:25.213319 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:39:25.213706 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:39:25.219336 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:39:25.221317 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Sep 12 17:39:25.221720 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:39:25.221747 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:39:25.221764 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:39:25.226430 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:39:25.228589 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:39:25.229975 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (828) Sep 12 17:39:25.230003 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:39:25.230979 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:39:25.230991 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:39:25.235281 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:39:25.236433 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:39:25.259548 initrd-setup-root[852]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:39:25.262385 initrd-setup-root[859]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:39:25.264528 initrd-setup-root[866]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:39:25.266925 initrd-setup-root[873]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:39:25.322132 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:39:25.331422 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:39:25.333946 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Sep 12 17:39:25.337335 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:39:25.349615 ignition[941]: INFO : Ignition 2.19.0 Sep 12 17:39:25.349941 ignition[941]: INFO : Stage: mount Sep 12 17:39:25.350175 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:39:25.350316 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 17:39:25.350990 ignition[941]: INFO : mount: mount passed Sep 12 17:39:25.351539 ignition[941]: INFO : Ignition finished successfully Sep 12 17:39:25.351802 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:39:25.352366 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:39:25.355339 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:39:25.413989 systemd-resolved[263]: Detected conflict on linux IN A 139.178.70.102 Sep 12 17:39:25.414303 systemd-resolved[263]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Sep 12 17:39:25.897464 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:39:25.902374 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:39:25.979294 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (952) Sep 12 17:39:25.992186 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:39:25.992225 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:39:25.992238 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:39:26.071285 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:39:26.076289 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:39:26.090910 ignition[968]: INFO : Ignition 2.19.0 Sep 12 17:39:26.090910 ignition[968]: INFO : Stage: files Sep 12 17:39:26.091239 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:39:26.091239 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 17:39:26.092053 ignition[968]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:39:26.099509 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:39:26.099509 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:39:26.123850 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:39:26.124166 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:39:26.124625 unknown[968]: wrote ssh authorized keys file for user: core Sep 12 17:39:26.125020 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:39:26.146968 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 12 17:39:26.146968 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 12 17:39:26.146968 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:39:26.146968 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 12 17:39:26.187851 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 
17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:39:26.306446 
ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 12 17:39:26.331370 systemd-networkd[801]: ens192: Gained IPv6LL Sep 12 17:39:26.751213 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:39:27.226728 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:39:27.226728 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Sep 12 17:39:27.226728 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Sep 12 17:39:27.226728 ignition[968]: INFO : files: op(d): [started] processing unit "containerd.service" Sep 12 17:39:27.227606 ignition[968]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 12 17:39:27.227803 ignition[968]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 12 17:39:27.227803 ignition[968]: INFO : files: op(d): [finished] processing unit "containerd.service" Sep 12 17:39:27.227803 ignition[968]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Sep 12 17:39:27.227803 ignition[968]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:39:27.228471 ignition[968]: INFO : files: op(f): 
op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:39:27.228471 ignition[968]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Sep 12 17:39:27.228471 ignition[968]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Sep 12 17:39:27.228471 ignition[968]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:39:27.228471 ignition[968]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:39:27.228471 ignition[968]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Sep 12 17:39:27.228471 ignition[968]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 17:39:27.278382 ignition[968]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:39:27.281118 ignition[968]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:39:27.281323 ignition[968]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 17:39:27.281323 ignition[968]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:39:27.281323 ignition[968]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:39:27.282356 ignition[968]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:39:27.282356 ignition[968]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:39:27.282356 ignition[968]: INFO : files: files passed Sep 12 17:39:27.282356 ignition[968]: INFO : Ignition 
finished successfully Sep 12 17:39:27.282417 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:39:27.287609 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:39:27.289407 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:39:27.292188 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:39:27.292429 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:39:27.296562 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:39:27.296562 initrd-setup-root-after-ignition[1000]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:39:27.298088 initrd-setup-root-after-ignition[1004]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:39:27.299524 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:39:27.299975 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:39:27.303373 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:39:27.317662 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:39:27.317729 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:39:27.318026 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:39:27.318140 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:39:27.318344 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:39:27.318825 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:39:27.330058 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Sep 12 17:39:27.334386 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:39:27.340577 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:39:27.340889 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:39:27.341048 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:39:27.341733 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:39:27.341836 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:39:27.342369 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:39:27.342610 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:39:27.342871 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:39:27.343118 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:39:27.343477 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:39:27.343880 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:39:27.344196 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:39:27.344591 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:39:27.344750 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:39:27.344882 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:39:27.344986 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:39:27.345067 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:39:27.345323 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:39:27.345623 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:39:27.345788 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Sep 12 17:39:27.345895 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:39:27.346125 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:39:27.346242 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:39:27.346763 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:39:27.346870 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:39:27.347206 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:39:27.347433 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:39:27.353294 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:39:27.353555 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:39:27.353745 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:39:27.353916 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:39:27.353989 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:39:27.354201 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:39:27.354280 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:39:27.354499 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:39:27.354567 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:39:27.354800 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:39:27.354859 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:39:27.363437 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:39:27.363567 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:39:27.363662 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 12 17:39:27.366138 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:39:27.366268 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:39:27.366344 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:39:27.366584 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:39:27.366664 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:39:27.369191 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:39:27.369256 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:39:27.373088 ignition[1024]: INFO : Ignition 2.19.0 Sep 12 17:39:27.376683 ignition[1024]: INFO : Stage: umount Sep 12 17:39:27.376683 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:39:27.376683 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 17:39:27.376683 ignition[1024]: INFO : umount: umount passed Sep 12 17:39:27.376683 ignition[1024]: INFO : Ignition finished successfully Sep 12 17:39:27.376671 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:39:27.376738 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:39:27.378688 systemd[1]: Stopped target network.target - Network. Sep 12 17:39:27.378830 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:39:27.378863 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:39:27.379002 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:39:27.379025 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:39:27.379166 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:39:27.379187 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:39:27.379340 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Sep 12 17:39:27.379362 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:39:27.380042 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:39:27.382118 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:39:27.382426 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:39:27.382481 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:39:27.382935 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:39:27.382967 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:39:27.389173 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:39:27.389284 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:39:27.389322 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:39:27.389453 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Sep 12 17:39:27.389476 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Sep 12 17:39:27.389630 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:39:27.390335 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:39:27.390635 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:39:27.391691 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:39:27.393674 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:39:27.393724 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:39:27.393867 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:39:27.393899 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Sep 12 17:39:27.394022 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:39:27.394044 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:39:27.399867 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:39:27.400120 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:39:27.401493 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:39:27.401566 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:39:27.402075 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:39:27.402103 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:39:27.402227 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:39:27.402244 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:39:27.402365 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:39:27.402390 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:39:27.402544 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:39:27.402565 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:39:27.402701 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:39:27.402724 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:39:27.405431 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:39:27.405574 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:39:27.405604 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:39:27.405734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 12 17:39:27.405756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:39:27.409004 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:39:27.409216 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:39:27.502109 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:39:27.502211 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:39:27.502594 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:39:27.502768 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:39:27.502802 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:39:27.510359 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:39:27.559210 systemd[1]: Switching root. Sep 12 17:39:27.585384 systemd-journald[216]: Journal stopped Sep 12 17:39:22.737720 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Sep 12
17:39:22.737725 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Sep 12 17:39:22.737730 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Sep 12 17:39:22.737735 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Sep 12 17:39:22.737741 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Sep 12 17:39:22.737746 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Sep 12 17:39:22.737752 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Sep 12 17:39:22.737757 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Sep 12 17:39:22.737762 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Sep 12 17:39:22.737767 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Sep 12 17:39:22.737772 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Sep 12 17:39:22.737777 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Sep 12 17:39:22.737782 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Sep 12 17:39:22.737787 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Sep 12 17:39:22.737793 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Sep 12 17:39:22.737798 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Sep 12 17:39:22.737803 kernel: system APIC only can use physical flat Sep 12 17:39:22.737808 kernel: APIC: Switched APIC routing to: physical flat Sep 12 17:39:22.737814 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 12 17:39:22.737819 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Sep 12 17:39:22.737824 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Sep 12 17:39:22.737829 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Sep 12 17:39:22.737834 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Sep 12 
17:39:22.737840 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Sep 12 17:39:22.737845 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Sep 12 17:39:22.737850 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Sep 12 17:39:22.737855 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Sep 12 17:39:22.737860 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Sep 12 17:39:22.737865 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Sep 12 17:39:22.737870 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Sep 12 17:39:22.737875 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Sep 12 17:39:22.737880 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Sep 12 17:39:22.737885 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Sep 12 17:39:22.737891 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Sep 12 17:39:22.737896 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Sep 12 17:39:22.737901 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Sep 12 17:39:22.737906 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Sep 12 17:39:22.737911 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Sep 12 17:39:22.737916 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Sep 12 17:39:22.737920 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Sep 12 17:39:22.737925 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Sep 12 17:39:22.737931 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Sep 12 17:39:22.737936 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Sep 12 17:39:22.737941 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Sep 12 17:39:22.737947 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Sep 12 17:39:22.737952 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Sep 12 17:39:22.737957 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Sep 12 17:39:22.737962 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Sep 12 17:39:22.737967 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Sep 12 17:39:22.737972 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Sep 12 17:39:22.737977 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Sep 12 17:39:22.737982 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Sep 12 17:39:22.737987 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Sep 12 17:39:22.737992 
kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Sep 12 17:39:22.737998 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Sep 12 17:39:22.738003 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Sep 12 17:39:22.738008 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Sep 12 17:39:22.738013 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Sep 12 17:39:22.738018 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Sep 12 17:39:22.738023 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Sep 12 17:39:22.738028 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Sep 12 17:39:22.738033 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Sep 12 17:39:22.738038 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Sep 12 17:39:22.738043 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Sep 12 17:39:22.738049 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Sep 12 17:39:22.738054 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Sep 12 17:39:22.738059 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Sep 12 17:39:22.738064 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Sep 12 17:39:22.738069 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Sep 12 17:39:22.738074 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Sep 12 17:39:22.738079 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Sep 12 17:39:22.738084 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Sep 12 17:39:22.738089 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Sep 12 17:39:22.738093 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Sep 12 17:39:22.738100 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Sep 12 17:39:22.738104 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Sep 12 17:39:22.738110 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Sep 12 17:39:22.738119 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Sep 12 17:39:22.738124 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Sep 12 17:39:22.738130 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Sep 12 17:39:22.738135 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Sep 12 17:39:22.738140 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Sep 12 17:39:22.738145 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Sep 12 17:39:22.738152 kernel: SRAT: PXM 0 
-> APIC 0x82 -> Node 0 Sep 12 17:39:22.738157 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Sep 12 17:39:22.738163 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Sep 12 17:39:22.738168 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Sep 12 17:39:22.738173 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Sep 12 17:39:22.738178 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Sep 12 17:39:22.738184 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Sep 12 17:39:22.738189 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Sep 12 17:39:22.738194 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Sep 12 17:39:22.738199 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Sep 12 17:39:22.738206 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Sep 12 17:39:22.738211 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Sep 12 17:39:22.738216 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Sep 12 17:39:22.738222 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Sep 12 17:39:22.738227 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Sep 12 17:39:22.738232 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Sep 12 17:39:22.738238 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Sep 12 17:39:22.738243 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Sep 12 17:39:22.738248 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Sep 12 17:39:22.738253 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Sep 12 17:39:22.738299 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Sep 12 17:39:22.738305 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Sep 12 17:39:22.738310 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Sep 12 17:39:22.738316 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Sep 12 17:39:22.738321 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Sep 12 17:39:22.738326 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Sep 12 17:39:22.738331 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Sep 12 17:39:22.738337 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Sep 12 17:39:22.738342 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Sep 12 17:39:22.738347 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Sep 12 17:39:22.738355 kernel: SRAT: PXM 0 -> APIC 0xbe -> 
Node 0 Sep 12 17:39:22.738360 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Sep 12 17:39:22.738366 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Sep 12 17:39:22.738371 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Sep 12 17:39:22.738376 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Sep 12 17:39:22.738381 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Sep 12 17:39:22.738387 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Sep 12 17:39:22.738392 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Sep 12 17:39:22.738397 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Sep 12 17:39:22.738402 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Sep 12 17:39:22.738408 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Sep 12 17:39:22.738415 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Sep 12 17:39:22.738420 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Sep 12 17:39:22.738425 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Sep 12 17:39:22.738431 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Sep 12 17:39:22.738436 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Sep 12 17:39:22.738441 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Sep 12 17:39:22.738446 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Sep 12 17:39:22.738452 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Sep 12 17:39:22.738457 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Sep 12 17:39:22.738462 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Sep 12 17:39:22.738469 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Sep 12 17:39:22.738474 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Sep 12 17:39:22.738479 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Sep 12 17:39:22.738484 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Sep 12 17:39:22.738490 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Sep 12 17:39:22.738495 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Sep 12 17:39:22.738500 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Sep 12 17:39:22.738505 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Sep 12 17:39:22.738511 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Sep 12 17:39:22.738516 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Sep 12 
17:39:22.738523 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Sep 12 17:39:22.738528 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Sep 12 17:39:22.738534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 12 17:39:22.738539 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 12 17:39:22.738545 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Sep 12 17:39:22.738550 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Sep 12 17:39:22.738556 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Sep 12 17:39:22.738562 kernel: Zone ranges: Sep 12 17:39:22.738567 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 17:39:22.738574 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Sep 12 17:39:22.738579 kernel: Normal empty Sep 12 17:39:22.738585 kernel: Movable zone start for each node Sep 12 17:39:22.738590 kernel: Early memory node ranges Sep 12 17:39:22.738596 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Sep 12 17:39:22.738601 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Sep 12 17:39:22.738607 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Sep 12 17:39:22.738612 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Sep 12 17:39:22.738617 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 17:39:22.738623 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Sep 12 17:39:22.738629 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Sep 12 17:39:22.738635 kernel: ACPI: PM-Timer IO Port: 0x1008 Sep 12 17:39:22.738640 kernel: system APIC only can use physical flat Sep 12 17:39:22.738646 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Sep 12 17:39:22.738651 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Sep 12 17:39:22.738657 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Sep 12 17:39:22.738662 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x03] high edge lint[0x1]) Sep 12 17:39:22.738667 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Sep 12 17:39:22.738672 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Sep 12 17:39:22.738679 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Sep 12 17:39:22.738684 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Sep 12 17:39:22.738690 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Sep 12 17:39:22.738695 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Sep 12 17:39:22.738700 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Sep 12 17:39:22.738706 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Sep 12 17:39:22.738711 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Sep 12 17:39:22.738716 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Sep 12 17:39:22.738722 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Sep 12 17:39:22.738727 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Sep 12 17:39:22.738733 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Sep 12 17:39:22.738739 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Sep 12 17:39:22.738744 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Sep 12 17:39:22.738749 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Sep 12 17:39:22.738755 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Sep 12 17:39:22.738760 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Sep 12 17:39:22.738765 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Sep 12 17:39:22.738771 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Sep 12 17:39:22.738776 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Sep 12 17:39:22.738782 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Sep 12 17:39:22.738788 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Sep 12 17:39:22.738793 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x1b] high edge lint[0x1]) Sep 12 17:39:22.738799 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Sep 12 17:39:22.738804 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Sep 12 17:39:22.738809 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Sep 12 17:39:22.738815 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Sep 12 17:39:22.738820 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Sep 12 17:39:22.738825 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Sep 12 17:39:22.738830 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Sep 12 17:39:22.738837 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Sep 12 17:39:22.738842 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Sep 12 17:39:22.738848 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Sep 12 17:39:22.738853 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Sep 12 17:39:22.738858 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Sep 12 17:39:22.738864 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Sep 12 17:39:22.738869 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Sep 12 17:39:22.738874 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Sep 12 17:39:22.738884 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Sep 12 17:39:22.738892 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Sep 12 17:39:22.738899 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Sep 12 17:39:22.738905 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Sep 12 17:39:22.738910 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Sep 12 17:39:22.738915 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Sep 12 17:39:22.738921 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Sep 12 17:39:22.738926 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Sep 12 17:39:22.738932 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x33] high edge lint[0x1]) Sep 12 17:39:22.738937 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Sep 12 17:39:22.738942 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Sep 12 17:39:22.738948 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Sep 12 17:39:22.738954 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Sep 12 17:39:22.738960 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Sep 12 17:39:22.738965 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Sep 12 17:39:22.738971 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Sep 12 17:39:22.738976 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Sep 12 17:39:22.738981 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Sep 12 17:39:22.738987 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Sep 12 17:39:22.738992 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Sep 12 17:39:22.738997 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Sep 12 17:39:22.739004 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Sep 12 17:39:22.739009 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Sep 12 17:39:22.739014 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Sep 12 17:39:22.739020 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Sep 12 17:39:22.739025 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Sep 12 17:39:22.739030 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Sep 12 17:39:22.739036 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Sep 12 17:39:22.739041 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Sep 12 17:39:22.739047 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Sep 12 17:39:22.739052 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Sep 12 17:39:22.739059 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Sep 12 17:39:22.739064 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x4b] high edge lint[0x1]) Sep 12 17:39:22.739069 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Sep 12 17:39:22.739074 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Sep 12 17:39:22.739080 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Sep 12 17:39:22.739085 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Sep 12 17:39:22.739090 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Sep 12 17:39:22.739095 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Sep 12 17:39:22.739101 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Sep 12 17:39:22.739106 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Sep 12 17:39:22.739112 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Sep 12 17:39:22.739118 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Sep 12 17:39:22.739123 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Sep 12 17:39:22.739128 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Sep 12 17:39:22.739134 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Sep 12 17:39:22.739139 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Sep 12 17:39:22.739144 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Sep 12 17:39:22.739149 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Sep 12 17:39:22.739155 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Sep 12 17:39:22.739161 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Sep 12 17:39:22.739167 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Sep 12 17:39:22.739172 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Sep 12 17:39:22.739177 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Sep 12 17:39:22.739182 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Sep 12 17:39:22.739188 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Sep 12 17:39:22.739193 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x63] high edge lint[0x1]) Sep 12 17:39:22.739198 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Sep 12 17:39:22.739204 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Sep 12 17:39:22.739209 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Sep 12 17:39:22.739215 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Sep 12 17:39:22.739221 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Sep 12 17:39:22.739226 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Sep 12 17:39:22.739231 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Sep 12 17:39:22.739237 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Sep 12 17:39:22.739242 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Sep 12 17:39:22.739247 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Sep 12 17:39:22.739253 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Sep 12 17:39:22.739264 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Sep 12 17:39:22.739278 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Sep 12 17:39:22.739287 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Sep 12 17:39:22.739292 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Sep 12 17:39:22.739298 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Sep 12 17:39:22.739303 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Sep 12 17:39:22.739308 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Sep 12 17:39:22.739314 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Sep 12 17:39:22.739319 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Sep 12 17:39:22.739324 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Sep 12 17:39:22.739330 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Sep 12 17:39:22.739336 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Sep 12 17:39:22.739342 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x7b] high edge lint[0x1]) Sep 12 17:39:22.739347 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Sep 12 17:39:22.739352 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Sep 12 17:39:22.739358 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Sep 12 17:39:22.739363 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Sep 12 17:39:22.739368 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Sep 12 17:39:22.739374 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Sep 12 17:39:22.739379 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 17:39:22.739385 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Sep 12 17:39:22.739391 kernel: TSC deadline timer available Sep 12 17:39:22.739397 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Sep 12 17:39:22.739402 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Sep 12 17:39:22.739408 kernel: Booting paravirtualized kernel on VMware hypervisor Sep 12 17:39:22.739414 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 17:39:22.739419 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Sep 12 17:39:22.739425 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144 Sep 12 17:39:22.739430 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152 Sep 12 17:39:22.739435 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Sep 12 17:39:22.739442 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Sep 12 17:39:22.739448 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Sep 12 17:39:22.739453 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Sep 12 17:39:22.739458 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Sep 12 17:39:22.739471 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Sep 12 17:39:22.739477 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Sep 12 
17:39:22.739483 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Sep 12 17:39:22.739489 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Sep 12 17:39:22.739494 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Sep 12 17:39:22.739501 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Sep 12 17:39:22.739506 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Sep 12 17:39:22.739512 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Sep 12 17:39:22.739518 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Sep 12 17:39:22.739523 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Sep 12 17:39:22.739529 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Sep 12 17:39:22.739535 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:39:22.739542 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 12 17:39:22.739548 kernel: random: crng init done Sep 12 17:39:22.739554 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Sep 12 17:39:22.739560 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Sep 12 17:39:22.739565 kernel: printk: log_buf_len min size: 262144 bytes Sep 12 17:39:22.739571 kernel: printk: log_buf_len: 1048576 bytes Sep 12 17:39:22.739577 kernel: printk: early log buf free: 239648(91%) Sep 12 17:39:22.739583 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:39:22.739588 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 12 17:39:22.739595 kernel: Fallback order for Node 0: 0 Sep 12 17:39:22.739601 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Sep 12 17:39:22.739607 kernel: Policy zone: DMA32 Sep 12 17:39:22.739613 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:39:22.739619 kernel: Memory: 1936396K/2096628K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 159972K reserved, 0K cma-reserved) Sep 12 17:39:22.739625 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Sep 12 17:39:22.739632 kernel: ftrace: allocating 37974 entries in 149 pages Sep 12 17:39:22.739638 kernel: ftrace: allocated 149 pages with 4 groups Sep 12 17:39:22.739644 kernel: Dynamic Preempt: voluntary Sep 12 17:39:22.739650 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:39:22.739656 kernel: rcu: RCU event tracing is enabled. Sep 12 17:39:22.739662 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Sep 12 17:39:22.739667 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:39:22.739673 kernel: Rude variant of Tasks RCU enabled. Sep 12 17:39:22.739679 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:39:22.739685 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 12 17:39:22.739692 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Sep 12 17:39:22.739698 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Sep 12 17:39:22.739703 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Sep 12 17:39:22.739709 kernel: Console: colour VGA+ 80x25 Sep 12 17:39:22.739715 kernel: printk: console [tty0] enabled Sep 12 17:39:22.739721 kernel: printk: console [ttyS0] enabled Sep 12 17:39:22.739727 kernel: ACPI: Core revision 20230628 Sep 12 17:39:22.739732 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Sep 12 17:39:22.739738 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 17:39:22.739745 kernel: x2apic enabled Sep 12 17:39:22.739751 kernel: APIC: Switched APIC routing to: physical x2apic Sep 12 17:39:22.739757 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 12 17:39:22.739764 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 12 17:39:22.739770 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Sep 12 17:39:22.739776 kernel: Disabled fast string operations Sep 12 17:39:22.739782 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 12 17:39:22.739788 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 12 17:39:22.739793 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 17:39:22.739800 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 12 17:39:22.739806 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 12 17:39:22.739812 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Sep 12 17:39:22.739818 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Sep 12 17:39:22.739824 kernel: RETBleed: Mitigation: Enhanced IBRS Sep 12 17:39:22.739829 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 12 17:39:22.739835 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 12 17:39:22.739841 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 12 17:39:22.739847 kernel: SRBDS: Unknown: Dependent on hypervisor status Sep 12 17:39:22.739854 kernel: GDS: Unknown: Dependent on hypervisor status Sep 12 17:39:22.739860 kernel: active return thunk: its_return_thunk Sep 12 17:39:22.739866 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 12 17:39:22.739872 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 17:39:22.739877 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 17:39:22.739888 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 17:39:22.739894 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 17:39:22.739900 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Sep 12 17:39:22.739906 kernel: Freeing SMP alternatives memory: 32K Sep 12 17:39:22.739913 kernel: pid_max: default: 131072 minimum: 1024 Sep 12 17:39:22.739919 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 17:39:22.739925 kernel: landlock: Up and running. Sep 12 17:39:22.739930 kernel: SELinux: Initializing. Sep 12 17:39:22.739936 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 12 17:39:22.739942 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 12 17:39:22.739948 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Sep 12 17:39:22.739954 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 12 17:39:22.739960 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 12 17:39:22.739967 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 12 17:39:22.739973 kernel: Performance Events: Skylake events, core PMU driver. Sep 12 17:39:22.739978 kernel: core: CPUID marked event: 'cpu cycles' unavailable Sep 12 17:39:22.739984 kernel: core: CPUID marked event: 'instructions' unavailable Sep 12 17:39:22.739990 kernel: core: CPUID marked event: 'bus cycles' unavailable Sep 12 17:39:22.739995 kernel: core: CPUID marked event: 'cache references' unavailable Sep 12 17:39:22.740001 kernel: core: CPUID marked event: 'cache misses' unavailable Sep 12 17:39:22.740006 kernel: core: CPUID marked event: 'branch instructions' unavailable Sep 12 17:39:22.740013 kernel: core: CPUID marked event: 'branch misses' unavailable Sep 12 17:39:22.740019 kernel: ... version: 1 Sep 12 17:39:22.740024 kernel: ... bit width: 48 Sep 12 17:39:22.740030 kernel: ... generic registers: 4 Sep 12 17:39:22.740036 kernel: ... value mask: 0000ffffffffffff Sep 12 17:39:22.740042 kernel: ... 
max period: 000000007fffffff Sep 12 17:39:22.740047 kernel: ... fixed-purpose events: 0 Sep 12 17:39:22.740053 kernel: ... event mask: 000000000000000f Sep 12 17:39:22.740059 kernel: signal: max sigframe size: 1776 Sep 12 17:39:22.740066 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:39:22.740072 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:39:22.740077 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 12 17:39:22.740083 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:39:22.740089 kernel: smpboot: x86: Booting SMP configuration: Sep 12 17:39:22.740094 kernel: .... node #0, CPUs: #1 Sep 12 17:39:22.740100 kernel: Disabled fast string operations Sep 12 17:39:22.740106 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Sep 12 17:39:22.740112 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Sep 12 17:39:22.740118 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 17:39:22.740125 kernel: smpboot: Max logical packages: 128 Sep 12 17:39:22.740131 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Sep 12 17:39:22.740137 kernel: devtmpfs: initialized Sep 12 17:39:22.740142 kernel: x86/mm: Memory block size: 128MB Sep 12 17:39:22.740148 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Sep 12 17:39:22.740154 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:39:22.740160 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Sep 12 17:39:22.740166 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:39:22.740172 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:39:22.740179 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:39:22.740184 kernel: audit: type=2000 audit(1757698761.091:1): state=initialized audit_enabled=0 res=1 Sep 12 17:39:22.740190 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:39:22.740196 
kernel: thermal_sys: Registered thermal governor 'user_space' Sep 12 17:39:22.740202 kernel: cpuidle: using governor menu Sep 12 17:39:22.740208 kernel: Simple Boot Flag at 0x36 set to 0x80 Sep 12 17:39:22.740213 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:39:22.740219 kernel: dca service started, version 1.12.1 Sep 12 17:39:22.740225 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Sep 12 17:39:22.740232 kernel: PCI: Using configuration type 1 for base access Sep 12 17:39:22.740238 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 12 17:39:22.740244 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:39:22.740250 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:39:22.740256 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:39:22.740284 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:39:22.740291 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:39:22.740297 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:39:22.740302 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:39:22.740311 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:39:22.740316 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Sep 12 17:39:22.740322 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 12 17:39:22.740328 kernel: ACPI: Interpreter enabled Sep 12 17:39:22.740333 kernel: ACPI: PM: (supports S0 S1 S5) Sep 12 17:39:22.740339 kernel: ACPI: Using IOAPIC for interrupt routing Sep 12 17:39:22.740345 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 12 17:39:22.740351 kernel: PCI: Using E820 reservations for host bridge windows Sep 12 17:39:22.740357 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Sep 12 17:39:22.740364 kernel: ACPI: PCI Root 
Bridge [PCI0] (domain 0000 [bus 00-7f]) Sep 12 17:39:22.740449 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 17:39:22.740515 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Sep 12 17:39:22.740565 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Sep 12 17:39:22.740574 kernel: PCI host bridge to bus 0000:00 Sep 12 17:39:22.742500 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 12 17:39:22.742558 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Sep 12 17:39:22.742606 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 12 17:39:22.742651 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 12 17:39:22.742696 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Sep 12 17:39:22.742742 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Sep 12 17:39:22.742802 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Sep 12 17:39:22.742859 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Sep 12 17:39:22.742927 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Sep 12 17:39:22.742984 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Sep 12 17:39:22.743036 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Sep 12 17:39:22.743087 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Sep 12 17:39:22.743138 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Sep 12 17:39:22.743189 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Sep 12 17:39:22.743242 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Sep 12 17:39:22.743308 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Sep 12 17:39:22.743359 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Sep 12 17:39:22.743410 kernel: pci 0000:00:07.3: quirk: 
[io 0x1040-0x104f] claimed by PIIX4 SMB Sep 12 17:39:22.743464 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Sep 12 17:39:22.743516 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Sep 12 17:39:22.743569 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Sep 12 17:39:22.743624 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Sep 12 17:39:22.743675 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Sep 12 17:39:22.743725 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Sep 12 17:39:22.743774 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Sep 12 17:39:22.743827 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Sep 12 17:39:22.743877 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 12 17:39:22.743931 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Sep 12 17:39:22.743990 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.744042 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.744098 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.744149 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.744206 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.746312 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.746392 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.746449 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.746506 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.746559 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.746614 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.746665 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.746722 kernel: pci 0000:00:15.6: [15ad:07a0] 
type 01 class 0x060400 Sep 12 17:39:22.746784 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.746839 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.746892 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.746946 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.746998 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.747057 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.747109 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.747163 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.747214 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.747287 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.747345 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.747399 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.747450 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.747504 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.747555 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.747610 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.747661 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.747718 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.747769 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.747823 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.747873 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.747927 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.747978 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.748035 kernel: 
pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.748086 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.748143 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.748193 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.748248 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.750334 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.750399 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.750453 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.750508 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.750561 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.750641 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.750705 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.750764 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.750815 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.750871 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.750923 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.750977 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.751028 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.751082 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.751136 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.751194 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.751245 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.753331 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.753391 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold 
Sep 12 17:39:22.753449 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.753506 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.753562 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Sep 12 17:39:22.753614 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.753669 kernel: pci_bus 0000:01: extended config space not accessible Sep 12 17:39:22.753723 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 12 17:39:22.753775 kernel: pci_bus 0000:02: extended config space not accessible Sep 12 17:39:22.753787 kernel: acpiphp: Slot [32] registered Sep 12 17:39:22.753793 kernel: acpiphp: Slot [33] registered Sep 12 17:39:22.753800 kernel: acpiphp: Slot [34] registered Sep 12 17:39:22.753805 kernel: acpiphp: Slot [35] registered Sep 12 17:39:22.753811 kernel: acpiphp: Slot [36] registered Sep 12 17:39:22.753817 kernel: acpiphp: Slot [37] registered Sep 12 17:39:22.753823 kernel: acpiphp: Slot [38] registered Sep 12 17:39:22.753829 kernel: acpiphp: Slot [39] registered Sep 12 17:39:22.753835 kernel: acpiphp: Slot [40] registered Sep 12 17:39:22.753842 kernel: acpiphp: Slot [41] registered Sep 12 17:39:22.753848 kernel: acpiphp: Slot [42] registered Sep 12 17:39:22.753853 kernel: acpiphp: Slot [43] registered Sep 12 17:39:22.753859 kernel: acpiphp: Slot [44] registered Sep 12 17:39:22.753865 kernel: acpiphp: Slot [45] registered Sep 12 17:39:22.753871 kernel: acpiphp: Slot [46] registered Sep 12 17:39:22.753877 kernel: acpiphp: Slot [47] registered Sep 12 17:39:22.753883 kernel: acpiphp: Slot [48] registered Sep 12 17:39:22.753888 kernel: acpiphp: Slot [49] registered Sep 12 17:39:22.753894 kernel: acpiphp: Slot [50] registered Sep 12 17:39:22.753901 kernel: acpiphp: Slot [51] registered Sep 12 17:39:22.753907 kernel: acpiphp: Slot [52] registered Sep 12 17:39:22.753912 kernel: acpiphp: Slot [53] registered Sep 12 17:39:22.753918 kernel: acpiphp: Slot [54] registered Sep 12 
17:39:22.753924 kernel: acpiphp: Slot [55] registered Sep 12 17:39:22.753930 kernel: acpiphp: Slot [56] registered Sep 12 17:39:22.753935 kernel: acpiphp: Slot [57] registered Sep 12 17:39:22.753941 kernel: acpiphp: Slot [58] registered Sep 12 17:39:22.753947 kernel: acpiphp: Slot [59] registered Sep 12 17:39:22.753953 kernel: acpiphp: Slot [60] registered Sep 12 17:39:22.753959 kernel: acpiphp: Slot [61] registered Sep 12 17:39:22.753965 kernel: acpiphp: Slot [62] registered Sep 12 17:39:22.753971 kernel: acpiphp: Slot [63] registered Sep 12 17:39:22.754022 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Sep 12 17:39:22.754072 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 12 17:39:22.754123 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 12 17:39:22.754174 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 12 17:39:22.754223 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Sep 12 17:39:22.756346 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Sep 12 17:39:22.756401 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Sep 12 17:39:22.756451 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Sep 12 17:39:22.756501 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Sep 12 17:39:22.756558 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Sep 12 17:39:22.756610 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Sep 12 17:39:22.756662 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Sep 12 17:39:22.756717 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Sep 12 17:39:22.756768 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 12 17:39:22.756819 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe 
device. You can enable it with 'pcie_aspm=force' Sep 12 17:39:22.756871 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 12 17:39:22.756922 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 12 17:39:22.756972 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 12 17:39:22.757024 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 12 17:39:22.757077 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 12 17:39:22.757127 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 12 17:39:22.757177 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 12 17:39:22.757229 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 12 17:39:22.757288 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 12 17:39:22.757340 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 12 17:39:22.757390 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 12 17:39:22.757442 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 12 17:39:22.757497 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 12 17:39:22.757548 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 12 17:39:22.757600 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 12 17:39:22.757651 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 12 17:39:22.757701 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 12 17:39:22.757756 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 12 17:39:22.757807 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 12 17:39:22.757857 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 12 17:39:22.757907 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 12 17:39:22.757958 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 12 17:39:22.758008 kernel: pci 0000:00:15.6: 
bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 12 17:39:22.758060 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 12 17:39:22.758113 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 12 17:39:22.758163 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 12 17:39:22.758220 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Sep 12 17:39:22.760297 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Sep 12 17:39:22.760359 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Sep 12 17:39:22.760414 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Sep 12 17:39:22.760468 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Sep 12 17:39:22.760521 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Sep 12 17:39:22.760577 kernel: pci 0000:0b:00.0: supports D1 D2 Sep 12 17:39:22.760629 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 12 17:39:22.760681 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 12 17:39:22.760733 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 12 17:39:22.760784 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 12 17:39:22.760835 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 12 17:39:22.760887 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 12 17:39:22.760942 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 12 17:39:22.760993 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 12 17:39:22.761042 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 12 17:39:22.761095 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 12 17:39:22.761145 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 12 17:39:22.761195 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 12 17:39:22.761246 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 12 17:39:22.763078 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 12 17:39:22.763140 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 12 17:39:22.763193 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 12 17:39:22.763246 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 12 17:39:22.763312 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 12 17:39:22.763364 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 12 17:39:22.763416 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 12 17:39:22.763467 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 12 17:39:22.763516 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 12 17:39:22.763571 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 12 17:39:22.763622 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 12 17:39:22.763671 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 12 17:39:22.763724 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 12 17:39:22.763775 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 12 17:39:22.763853 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 12 17:39:22.763910 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 12 17:39:22.763963 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 12 17:39:22.764017 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 12 17:39:22.764068 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 12 17:39:22.764121 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 12 17:39:22.764172 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 12 17:39:22.764222 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 12 17:39:22.764300 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 12 17:39:22.764355 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 12 17:39:22.764409 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 12 17:39:22.764459 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 12 17:39:22.764509 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 12 17:39:22.764561 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 12 17:39:22.764610 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 12 17:39:22.764661 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 12 17:39:22.764712 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 12 17:39:22.764763 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 12 17:39:22.764816 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 12 17:39:22.764868 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 12 17:39:22.764924 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 12 17:39:22.764974 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 12 17:39:22.765027 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 12 17:39:22.765077 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 12 17:39:22.765127 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 12 17:39:22.765180 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 12 17:39:22.765234 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 12 17:39:22.765291 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 12 17:39:22.765344 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 12 17:39:22.765394 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 12 17:39:22.765445 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 12 17:39:22.765497 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 12 17:39:22.765549 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 12 17:39:22.765600 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 12 17:39:22.765654 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 12 17:39:22.765704 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 12 17:39:22.765757 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 12 17:39:22.765807 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 12 17:39:22.765857 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 12 17:39:22.765908 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 12 17:39:22.765959 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 12 17:39:22.766010 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 12 17:39:22.766065 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 12 
17:39:22.766116 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 12 17:39:22.766166 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 12 17:39:22.766217 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 12 17:39:22.766275 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 12 17:39:22.766326 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 12 17:39:22.766378 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 12 17:39:22.766429 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 12 17:39:22.766484 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 12 17:39:22.766535 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 12 17:39:22.766587 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 12 17:39:22.766637 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 12 17:39:22.766646 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Sep 12 17:39:22.766652 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Sep 12 17:39:22.766658 kernel: ACPI: PCI: Interrupt link LNKB disabled Sep 12 17:39:22.766665 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 12 17:39:22.766673 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Sep 12 17:39:22.766679 kernel: iommu: Default domain type: Translated Sep 12 17:39:22.766684 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 17:39:22.766690 kernel: PCI: Using ACPI for IRQ routing Sep 12 17:39:22.766696 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 12 17:39:22.766702 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Sep 12 17:39:22.766708 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Sep 12 17:39:22.766758 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Sep 12 17:39:22.766809 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Sep 12 17:39:22.766863 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 12 17:39:22.766872 kernel: vgaarb: loaded Sep 12 17:39:22.766878 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Sep 12 17:39:22.766884 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Sep 12 17:39:22.766890 kernel: clocksource: Switched to clocksource tsc-early Sep 12 17:39:22.766895 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:39:22.766901 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:39:22.766907 kernel: pnp: PnP ACPI init Sep 12 17:39:22.766963 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Sep 12 17:39:22.767013 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Sep 12 17:39:22.767060 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Sep 12 17:39:22.767110 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Sep 12 17:39:22.767160 kernel: pnp 00:06: [dma 2] Sep 12 17:39:22.767210 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Sep 12 17:39:22.767257 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Sep 12 17:39:22.767320 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Sep 12 17:39:22.767328 kernel: pnp: PnP ACPI: found 8 devices Sep 12 17:39:22.767334 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 17:39:22.767340 kernel: NET: Registered PF_INET protocol family Sep 12 17:39:22.767346 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:39:22.767353 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 12 17:39:22.767358 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:39:22.767365 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 12 17:39:22.767373 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 12 17:39:22.767379 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 12 17:39:22.767384 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 12 17:39:22.767390 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 12 17:39:22.767396 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:39:22.767402 kernel: NET: Registered PF_XDP protocol family Sep 12 17:39:22.767454 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Sep 12 17:39:22.767506 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 12 17:39:22.767562 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 12 17:39:22.767613 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 12 17:39:22.767665 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 12 17:39:22.767716 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Sep 12 17:39:22.767768 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Sep 12 17:39:22.767819 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Sep 12 17:39:22.767874 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Sep 12 17:39:22.767930 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Sep 12 17:39:22.767982 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Sep 12 17:39:22.768048 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Sep 12 17:39:22.768100 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Sep 12 
17:39:22.768152 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Sep 12 17:39:22.768208 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Sep 12 17:39:22.768298 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Sep 12 17:39:22.768353 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Sep 12 17:39:22.768404 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Sep 12 17:39:22.768455 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Sep 12 17:39:22.768505 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Sep 12 17:39:22.768560 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Sep 12 17:39:22.768611 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Sep 12 17:39:22.768662 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Sep 12 17:39:22.768712 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Sep 12 17:39:22.768762 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Sep 12 17:39:22.768813 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.768868 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.768919 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.768970 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.769021 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.769073 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.769124 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.769174 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Sep 
12 17:39:22.769224 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.769285 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.769336 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.769387 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.769437 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.769488 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.769539 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.769589 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.769641 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.769694 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.769756 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770036 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770136 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770240 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770316 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770372 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770426 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770483 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770536 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770589 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770642 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770694 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770747 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770801 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770853 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.770909 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.770962 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771015 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771067 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771120 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771173 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771225 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771384 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771435 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771490 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771541 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771591 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771641 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771690 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771739 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771788 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771838 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771890 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.771943 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.771995 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Sep 12 17:39:22.772045 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772095 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772145 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772196 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772246 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772310 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772362 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772415 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772466 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772517 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772567 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772617 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772667 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772717 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772767 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772817 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772878 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.772937 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.772987 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.773037 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.773088 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.773138 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.773188 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.773239 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.773396 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.773449 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.773503 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.773554 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.773604 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.773655 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Sep 12 17:39:22.773705 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Sep 12 17:39:22.773757 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 12 17:39:22.773809 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Sep 12 17:39:22.773859 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 12 17:39:22.773910 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 12 17:39:22.773960 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 12 17:39:22.774025 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Sep 12 17:39:22.774088 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 12 17:39:22.774140 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 12 17:39:22.774190 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 12 17:39:22.774240 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Sep 12 17:39:22.774311 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 12 17:39:22.774363 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 12 17:39:22.774413 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 12 17:39:22.774467 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 12 
17:39:22.774519 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 12 17:39:22.774569 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 12 17:39:22.774620 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 12 17:39:22.774670 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 12 17:39:22.774720 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 12 17:39:22.774770 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 12 17:39:22.774820 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 12 17:39:22.774871 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 12 17:39:22.774924 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 12 17:39:22.774975 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 12 17:39:22.775028 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 12 17:39:22.775079 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 12 17:39:22.775130 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 12 17:39:22.775246 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 12 17:39:22.775320 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 12 17:39:22.775372 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 12 17:39:22.775422 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 12 17:39:22.775472 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 12 17:39:22.775523 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 12 17:39:22.775577 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Sep 12 17:39:22.775629 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 12 17:39:22.775680 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 12 17:39:22.775731 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Sep 12 17:39:22.775786 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Sep 12 17:39:22.775838 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 12 17:39:22.775903 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 12 17:39:22.775955 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 12 17:39:22.776006 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 12 17:39:22.776058 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 12 17:39:22.776110 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 12 17:39:22.776161 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 12 17:39:22.776211 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 12 17:39:22.776271 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 12 17:39:22.776330 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 12 17:39:22.776381 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 12 17:39:22.776432 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 12 17:39:22.776482 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 12 17:39:22.776532 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 12 17:39:22.776583 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 12 17:39:22.776633 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 12 17:39:22.776684 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 12 17:39:22.776735 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 12 17:39:22.776788 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 12 17:39:22.776839 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 12 17:39:22.776890 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 12 17:39:22.776940 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Sep 12 17:39:22.776990 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 12 17:39:22.777041 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 12 17:39:22.777100 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 12 17:39:22.777157 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 12 17:39:22.777208 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 12 17:39:22.777265 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 12 17:39:22.777321 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 12 17:39:22.777372 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 12 17:39:22.777423 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 12 17:39:22.777474 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 12 17:39:22.777525 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 12 17:39:22.777576 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 12 17:39:22.777627 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 12 17:39:22.777693 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 12 17:39:22.777745 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 12 17:39:22.777799 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 12 17:39:22.777851 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 12 17:39:22.777902 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 12 17:39:22.777952 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 12 17:39:22.778004 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 12 17:39:22.778055 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 12 17:39:22.778106 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 12 
17:39:22.778157 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 12 17:39:22.778253 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 12 17:39:22.778316 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 12 17:39:22.778371 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 12 17:39:22.778423 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 12 17:39:22.778474 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 12 17:39:22.778525 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 12 17:39:22.778575 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 12 17:39:22.778627 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 12 17:39:22.778678 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 12 17:39:22.778730 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 12 17:39:22.778781 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 12 17:39:22.778835 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 12 17:39:22.778885 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 12 17:39:22.778936 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 12 17:39:22.778987 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 12 17:39:22.779038 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 12 17:39:22.779089 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 12 17:39:22.779140 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 12 17:39:22.779191 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 12 17:39:22.779243 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 12 17:39:22.779314 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 12 17:39:22.779370 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Sep 12 17:39:22.779423 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 12 17:39:22.779474 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 12 17:39:22.779524 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 12 17:39:22.779575 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 12 17:39:22.779627 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 12 17:39:22.779677 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 12 17:39:22.779728 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 12 17:39:22.779778 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 12 17:39:22.779832 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 12 17:39:22.779891 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Sep 12 17:39:22.779942 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 12 17:39:22.779988 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 12 17:39:22.780034 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Sep 12 17:39:22.780079 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Sep 12 17:39:22.780130 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Sep 12 17:39:22.780177 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Sep 12 17:39:22.780227 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 12 17:39:22.780370 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Sep 12 17:39:22.780417 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 12 17:39:22.780463 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 12 17:39:22.780518 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Sep 12 17:39:22.780578 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Sep 12 17:39:22.780631 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Sep 12 17:39:22.780682 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Sep 12 17:39:22.780727 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Sep 12 17:39:22.780778 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Sep 12 17:39:22.780825 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Sep 12 17:39:22.780870 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Sep 12 17:39:22.780923 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Sep 12 17:39:22.780969 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Sep 12 17:39:22.781019 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Sep 12 17:39:22.781069 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Sep 12 17:39:22.781116 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Sep 12 17:39:22.781167 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Sep 12 17:39:22.781213 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 12 17:39:22.781363 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Sep 12 17:39:22.781417 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Sep 12 17:39:22.781468 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Sep 12 17:39:22.781515 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Sep 12 17:39:22.781570 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Sep 12 17:39:22.781626 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Sep 12 17:39:22.781680 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Sep 12 17:39:22.781730 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Sep 12 17:39:22.781777 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Sep 12 17:39:22.781828 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Sep 12 17:39:22.781875 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Sep 12 17:39:22.781922 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Sep 12 17:39:22.781977 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Sep 12 17:39:22.782028 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Sep 12 17:39:22.782078 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Sep 12 17:39:22.782129 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Sep 12 17:39:22.782177 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 12 17:39:22.782227 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Sep 12 17:39:22.783749 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 12 17:39:22.783813 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Sep 12 17:39:22.783868 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Sep 12 17:39:22.783919 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Sep 12 17:39:22.783967 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Sep 12 17:39:22.784018 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Sep 12 17:39:22.784065 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 12 17:39:22.784116 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Sep 12 17:39:22.784168 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Sep 12 17:39:22.784215 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 12 17:39:22.784303 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Sep 12 17:39:22.784355 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Sep 12 17:39:22.784402 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Sep 12 17:39:22.784456 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Sep 12 17:39:22.784503 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Sep 12 17:39:22.784553 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Sep 12 17:39:22.784603 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Sep 12 17:39:22.784651 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 12 17:39:22.784702 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Sep 12 17:39:22.784749 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 12 17:39:22.784800 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Sep 12 17:39:22.784852 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Sep 12 17:39:22.784904 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Sep 12 17:39:22.784952 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Sep 12 17:39:22.785003 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Sep 12 17:39:22.785050 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 12 17:39:22.785104 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Sep 12 17:39:22.785155 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Sep 12 17:39:22.785202 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Sep 12 17:39:22.785253 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Sep 12 17:39:22.785574 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Sep 12 17:39:22.785624 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Sep 12 17:39:22.785680 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Sep 12 17:39:22.785732 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Sep 12 17:39:22.785803 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Sep 12 17:39:22.786098 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Sep 12 17:39:22.786156 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Sep 12 17:39:22.786205 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Sep 12 17:39:22.786258 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Sep 12 17:39:22.786339 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Sep 12 17:39:22.786615 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Sep 12 17:39:22.786666 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Sep 12 17:39:22.786719 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Sep 12 17:39:22.786767 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 12 17:39:22.786825 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 12 17:39:22.786835 kernel: PCI: CLS 32 bytes, default 64 Sep 12 17:39:22.786844 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 12 17:39:22.786850 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 12 17:39:22.786857 kernel: clocksource: Switched to clocksource tsc Sep 12 17:39:22.786863 kernel: Initialise system trusted keyrings Sep 12 17:39:22.786870 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 12 17:39:22.786876 kernel: Key type asymmetric registered Sep 12 17:39:22.786882 kernel: Asymmetric key parser 'x509' registered Sep 12 17:39:22.786889 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 12 17:39:22.786895 kernel: io scheduler mq-deadline registered Sep 12 17:39:22.786902 kernel: io scheduler kyber registered Sep 12 17:39:22.786909 kernel: io scheduler bfq registered Sep 12 17:39:22.786964 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Sep 12 17:39:22.787029 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.787292 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Sep 12 17:39:22.787358 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.787413 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Sep 12 17:39:22.787466 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.787523 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Sep 12 17:39:22.787575 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.787628 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Sep 12 17:39:22.787679 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.787740 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Sep 12 17:39:22.787812 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.787869 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Sep 12 17:39:22.787922 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.787974 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Sep 12 17:39:22.788027 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.788079 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Sep 12 17:39:22.788133 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.788185 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Sep 12 17:39:22.788236 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.789321 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Sep 12 17:39:22.789383 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.789439 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Sep 12 17:39:22.789493 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.789552 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Sep 12 17:39:22.789607 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.789660 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Sep 12 17:39:22.789712 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.789764 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Sep 12 17:39:22.789818 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.789872 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Sep 12 17:39:22.789936 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.789990 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Sep 12 17:39:22.790042 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790095 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Sep 12 17:39:22.790151 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790204 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Sep 12 17:39:22.790257 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790335 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Sep 12 17:39:22.790387 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790440 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Sep 12 17:39:22.790494 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790548 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Sep 12 17:39:22.790600 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790653 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Sep 12 17:39:22.790705 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790759 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Sep 12 17:39:22.790815 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790868 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Sep 12 17:39:22.790919 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.790972 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Sep 12 17:39:22.791024 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.791078 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Sep 12 17:39:22.791133 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.791186 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Sep 12 17:39:22.791238 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.792275 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Sep 12 17:39:22.792341 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.792401 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Sep 12 17:39:22.792454 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.792509 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Sep 12 17:39:22.792562 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.792616 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Sep 12 17:39:22.792669 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 17:39:22.792681 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Sep 12 17:39:22.792688 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:39:22.792696 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 17:39:22.792703 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Sep 12 17:39:22.792709 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 17:39:22.792715 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 17:39:22.792769 kernel: rtc_cmos 00:01: registered as rtc0 Sep 12 17:39:22.792821 kernel: rtc_cmos 00:01: setting system clock to 2025-09-12T17:39:22 UTC (1757698762) Sep 12 17:39:22.792869 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Sep 12 17:39:22.792878 kernel: intel_pstate: CPU model not supported Sep 12 17:39:22.792889 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 17:39:22.792895 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:39:22.792902 kernel: Segment Routing with IPv6 Sep 12 17:39:22.792908 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:39:22.792915 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:39:22.792921 kernel: Key type dns_resolver registered Sep 12 17:39:22.792929 kernel: IPI shorthand broadcast: enabled Sep 12 17:39:22.792935 kernel: sched_clock: Marking stable (912003466, 225449726)->(1197405082, -59951890) Sep 12 17:39:22.792942 kernel: registered taskstats version 1 Sep 12 17:39:22.792948 kernel: Loading compiled-in X.509 certificates Sep 12 17:39:22.792954 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9' Sep 12 17:39:22.792960 kernel: Key type .fscrypt registered Sep 12 17:39:22.792967 kernel: Key type fscrypt-provisioning registered Sep 12 17:39:22.792973 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 12 17:39:22.792980 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:39:22.792987 kernel: ima: No architecture policies found Sep 12 17:39:22.792993 kernel: clk: Disabling unused clocks Sep 12 17:39:22.792999 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 12 17:39:22.793005 kernel: Write protecting the kernel read-only data: 36864k Sep 12 17:39:22.793012 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 12 17:39:22.793018 kernel: Run /init as init process Sep 12 17:39:22.793024 kernel: with arguments: Sep 12 17:39:22.793031 kernel: /init Sep 12 17:39:22.793037 kernel: with environment: Sep 12 17:39:22.793044 kernel: HOME=/ Sep 12 17:39:22.793050 kernel: TERM=linux Sep 12 17:39:22.793056 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:39:22.793063 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:39:22.793072 systemd[1]: Detected virtualization vmware. Sep 12 17:39:22.793079 systemd[1]: Detected architecture x86-64. Sep 12 17:39:22.793085 systemd[1]: Running in initrd. Sep 12 17:39:22.793091 systemd[1]: No hostname configured, using default hostname. Sep 12 17:39:22.793099 systemd[1]: Hostname set to . Sep 12 17:39:22.793106 systemd[1]: Initializing machine ID from random generator. Sep 12 17:39:22.793112 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:39:22.793119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:39:22.793125 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 12 17:39:22.793132 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:39:22.793139 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:39:22.793146 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:39:22.793153 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:39:22.793161 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:39:22.793168 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:39:22.793369 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:39:22.793376 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:39:22.793383 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:39:22.793392 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:39:22.793399 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:39:22.793405 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:39:22.793412 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:39:22.793418 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:39:22.793425 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:39:22.793432 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:39:22.793438 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:39:22.793445 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:39:22.793453 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 12 17:39:22.793460 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:39:22.795227 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:39:22.795238 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:39:22.795245 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:39:22.795252 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:39:22.795529 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:39:22.795540 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:39:22.795547 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:39:22.795557 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:39:22.795578 systemd-journald[216]: Collecting audit messages is disabled. Sep 12 17:39:22.795595 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:39:22.795602 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:39:22.795611 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:39:22.795617 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:39:22.795624 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:39:22.795631 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:39:22.795639 kernel: Bridge firewalling registered Sep 12 17:39:22.795645 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:39:22.795653 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 12 17:39:22.795659 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:39:22.795666 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:39:22.795672 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:39:22.795679 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:39:22.795687 systemd-journald[216]: Journal started Sep 12 17:39:22.795702 systemd-journald[216]: Runtime Journal (/run/log/journal/b1386d98615146c98521a55ecaa87327) is 4.8M, max 38.7M, 33.8M free. Sep 12 17:39:22.751634 systemd-modules-load[217]: Inserted module 'overlay' Sep 12 17:39:22.777428 systemd-modules-load[217]: Inserted module 'br_netfilter' Sep 12 17:39:22.797945 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:39:22.798254 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:39:22.802354 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:39:22.803365 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:39:22.808886 dracut-cmdline[245]: dracut-dracut-053 Sep 12 17:39:22.812137 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:39:22.813700 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:39:22.814745 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 12 17:39:22.841996 systemd-resolved[263]: Positive Trust Anchors: Sep 12 17:39:22.842007 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:39:22.842029 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:39:22.843957 systemd-resolved[263]: Defaulting to hostname 'linux'. Sep 12 17:39:22.844814 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:39:22.844973 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:39:22.864287 kernel: SCSI subsystem initialized Sep 12 17:39:22.871274 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:39:22.878271 kernel: iscsi: registered transport (tcp) Sep 12 17:39:22.893272 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:39:22.893301 kernel: QLogic iSCSI HBA Driver Sep 12 17:39:22.913448 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:39:22.918431 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:39:22.933313 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 12 17:39:22.933356 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:39:22.934462 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:39:22.966282 kernel: raid6: avx2x4 gen() 51225 MB/s Sep 12 17:39:22.983281 kernel: raid6: avx2x2 gen() 52867 MB/s Sep 12 17:39:23.000470 kernel: raid6: avx2x1 gen() 44727 MB/s Sep 12 17:39:23.000511 kernel: raid6: using algorithm avx2x2 gen() 52867 MB/s Sep 12 17:39:23.018472 kernel: raid6: .... xor() 31125 MB/s, rmw enabled Sep 12 17:39:23.018517 kernel: raid6: using avx2x2 recovery algorithm Sep 12 17:39:23.032280 kernel: xor: automatically using best checksumming function avx Sep 12 17:39:23.131594 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:39:23.136144 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:39:23.141348 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:39:23.148882 systemd-udevd[433]: Using default interface naming scheme 'v255'. Sep 12 17:39:23.151448 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:39:23.160369 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:39:23.167243 dracut-pre-trigger[438]: rd.md=0: removing MD RAID activation Sep 12 17:39:23.183913 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:39:23.187445 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:39:23.257744 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:39:23.262376 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:39:23.271735 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:39:23.272576 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 12 17:39:23.273297 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:39:23.273584 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:39:23.279387 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:39:23.286472 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:39:23.332280 kernel: VMware PVSCSI driver - version 1.0.7.0-k Sep 12 17:39:23.337381 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Sep 12 17:39:23.342286 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Sep 12 17:39:23.342446 kernel: vmw_pvscsi: using 64bit dma Sep 12 17:39:23.343582 kernel: vmw_pvscsi: max_id: 16 Sep 12 17:39:23.343606 kernel: vmw_pvscsi: setting ring_pages to 8 Sep 12 17:39:23.348286 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Sep 12 17:39:23.358284 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:39:23.358315 kernel: vmw_pvscsi: enabling reqCallThreshold Sep 12 17:39:23.358324 kernel: vmw_pvscsi: driver-based request coalescing enabled Sep 12 17:39:23.358332 kernel: vmw_pvscsi: using MSI-X Sep 12 17:39:23.363276 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Sep 12 17:39:23.365234 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Sep 12 17:39:23.365350 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Sep 12 17:39:23.367278 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Sep 12 17:39:23.370472 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:39:23.370552 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:39:23.370780 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:39:23.370881 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 12 17:39:23.370954 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:39:23.371069 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:39:23.379130 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:39:23.382345 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 17:39:23.383364 kernel: libata version 3.00 loaded. Sep 12 17:39:23.384308 kernel: ata_piix 0000:00:07.1: version 2.13 Sep 12 17:39:23.385321 kernel: scsi host1: ata_piix Sep 12 17:39:23.387329 kernel: AES CTR mode by8 optimization enabled Sep 12 17:39:23.392280 kernel: scsi host2: ata_piix Sep 12 17:39:23.393293 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Sep 12 17:39:23.393385 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 12 17:39:23.393553 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Sep 12 17:39:23.393629 kernel: sd 0:0:0:0: [sda] Cache data unavailable Sep 12 17:39:23.393693 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Sep 12 17:39:23.396310 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Sep 12 17:39:23.396327 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Sep 12 17:39:23.403103 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:39:23.406372 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:39:23.413949 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 17:39:23.457797 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:39:23.457837 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 12 17:39:23.570288 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Sep 12 17:39:23.576303 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Sep 12 17:39:23.604293 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Sep 12 17:39:23.604464 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 17:39:23.617411 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 17:39:23.655278 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (481) Sep 12 17:39:23.657885 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Sep 12 17:39:23.661406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Sep 12 17:39:23.665866 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Sep 12 17:39:23.670273 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (488) Sep 12 17:39:23.676392 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Sep 12 17:39:23.676736 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Sep 12 17:39:23.680405 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:39:23.712277 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:39:23.718308 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:39:23.723277 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:39:24.726295 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:39:24.726365 disk-uuid[589]: The operation has completed successfully. Sep 12 17:39:24.793859 systemd[1]: disk-uuid.service: Deactivated successfully. 
Sep 12 17:39:24.793918 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:39:24.796344 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:39:24.801888 sh[609]: Success Sep 12 17:39:24.811301 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 12 17:39:24.881488 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:39:24.882314 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:39:24.882623 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:39:24.899575 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19 Sep 12 17:39:24.899611 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:39:24.899621 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:39:24.900681 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:39:24.901475 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:39:24.910285 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 17:39:24.913081 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:39:24.923337 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Sep 12 17:39:24.924538 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:39:24.941482 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:39:24.941518 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:39:24.941531 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:39:24.946301 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:39:24.952580 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Sep 12 17:39:24.953886 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:39:24.958002 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:39:24.961358 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:39:25.000491 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Sep 12 17:39:25.008614 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:39:25.040782 ignition[668]: Ignition 2.19.0 Sep 12 17:39:25.041218 ignition[668]: Stage: fetch-offline Sep 12 17:39:25.041245 ignition[668]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:39:25.041252 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 17:39:25.041541 ignition[668]: parsed url from cmdline: "" Sep 12 17:39:25.041544 ignition[668]: no config URL provided Sep 12 17:39:25.041547 ignition[668]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:39:25.041552 ignition[668]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:39:25.041946 ignition[668]: config successfully fetched Sep 12 17:39:25.041965 ignition[668]: parsing config with SHA512: c9d82a9c54bac3b163667ad229dd43ae7adc35eacedf588328a94a585299761a2c27b7df982233a494539bdc478abbd10a2c7711a99164f45bfb3ceafdef6614 Sep 12 17:39:25.045409 unknown[668]: fetched base config from "system" Sep 12 17:39:25.045668 ignition[668]: fetch-offline: fetch-offline passed Sep 12 17:39:25.045416 unknown[668]: fetched user config from "vmware" Sep 12 17:39:25.045706 ignition[668]: Ignition finished successfully Sep 12 17:39:25.047446 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:39:25.074944 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 12 17:39:25.079369 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:39:25.091525 systemd-networkd[801]: lo: Link UP Sep 12 17:39:25.091786 systemd-networkd[801]: lo: Gained carrier Sep 12 17:39:25.092662 systemd-networkd[801]: Enumeration completed Sep 12 17:39:25.092882 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:39:25.093052 systemd[1]: Reached target network.target - Network. Sep 12 17:39:25.093181 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 17:39:25.093596 systemd-networkd[801]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Sep 12 17:39:25.097422 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 12 17:39:25.097532 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 12 17:39:25.097194 systemd-networkd[801]: ens192: Link UP Sep 12 17:39:25.097196 systemd-networkd[801]: ens192: Gained carrier Sep 12 17:39:25.103185 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:39:25.111534 ignition[803]: Ignition 2.19.0 Sep 12 17:39:25.111541 ignition[803]: Stage: kargs Sep 12 17:39:25.111671 ignition[803]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:39:25.111678 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 17:39:25.112376 ignition[803]: kargs: kargs passed Sep 12 17:39:25.112408 ignition[803]: Ignition finished successfully Sep 12 17:39:25.113751 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:39:25.122413 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 12 17:39:25.129781 ignition[810]: Ignition 2.19.0 Sep 12 17:39:25.130034 ignition[810]: Stage: disks Sep 12 17:39:25.130135 ignition[810]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:39:25.130142 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 17:39:25.130764 ignition[810]: disks: disks passed Sep 12 17:39:25.130792 ignition[810]: Ignition finished successfully Sep 12 17:39:25.132122 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:39:25.132479 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:39:25.132732 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:39:25.132994 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:39:25.133219 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:39:25.133457 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:39:25.137374 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:39:25.150109 systemd-fsck[819]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 12 17:39:25.151747 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:39:25.155319 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:39:25.213278 kernel: EXT4-fs (sda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none. Sep 12 17:39:25.213319 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:39:25.213706 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:39:25.219336 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:39:25.221317 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Sep 12 17:39:25.221720 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:39:25.221747 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:39:25.221764 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:39:25.226430 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:39:25.228589 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:39:25.229975 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (828) Sep 12 17:39:25.230003 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:39:25.230979 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:39:25.230991 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:39:25.235281 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:39:25.236433 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:39:25.259548 initrd-setup-root[852]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:39:25.262385 initrd-setup-root[859]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:39:25.264528 initrd-setup-root[866]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:39:25.266925 initrd-setup-root[873]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:39:25.322132 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:39:25.331422 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:39:25.333946 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Sep 12 17:39:25.337335 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:39:25.349615 ignition[941]: INFO : Ignition 2.19.0 Sep 12 17:39:25.349941 ignition[941]: INFO : Stage: mount Sep 12 17:39:25.350175 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:39:25.350316 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 17:39:25.350990 ignition[941]: INFO : mount: mount passed Sep 12 17:39:25.351539 ignition[941]: INFO : Ignition finished successfully Sep 12 17:39:25.351802 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:39:25.352366 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:39:25.355339 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:39:25.413989 systemd-resolved[263]: Detected conflict on linux IN A 139.178.70.102 Sep 12 17:39:25.414303 systemd-resolved[263]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Sep 12 17:39:25.897464 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:39:25.902374 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:39:25.979294 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (952) Sep 12 17:39:25.992186 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:39:25.992225 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:39:25.992238 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:39:26.071285 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:39:26.076289 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:39:26.090910 ignition[968]: INFO : Ignition 2.19.0 Sep 12 17:39:26.090910 ignition[968]: INFO : Stage: files
Sep 12 17:39:26.091239 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:39:26.091239 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 12 17:39:26.092053 ignition[968]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:39:26.099509 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:39:26.099509 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:39:26.123850 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:39:26.124166 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:39:26.124625 unknown[968]: wrote ssh authorized keys file for user: core
Sep 12 17:39:26.125020 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:39:26.146968 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 17:39:26.146968 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 17:39:26.146968 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:39:26.146968 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 12 17:39:26.187851 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:39:26.306446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 12 17:39:26.331370 systemd-networkd[801]: ens192: Gained IPv6LL
Sep 12 17:39:26.751213 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 17:39:27.226728 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:39:27.226728 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 12 17:39:27.226728 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 12 17:39:27.226728 ignition[968]: INFO : files: op(d): [started] processing unit "containerd.service"
Sep 12 17:39:27.227606 ignition[968]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 17:39:27.227803 ignition[968]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 17:39:27.227803 ignition[968]: INFO : files: op(d): [finished] processing unit "containerd.service"
Sep 12 17:39:27.227803 ignition[968]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Sep 12 17:39:27.227803 ignition[968]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:39:27.228471 ignition[968]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:39:27.228471 ignition[968]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Sep 12 17:39:27.228471 ignition[968]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Sep 12 17:39:27.228471 ignition[968]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:39:27.228471 ignition[968]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:39:27.228471 ignition[968]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Sep 12 17:39:27.228471 ignition[968]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:39:27.278382 ignition[968]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:39:27.281118 ignition[968]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:39:27.281323 ignition[968]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:39:27.281323 ignition[968]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:39:27.281323 ignition[968]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:39:27.282356 ignition[968]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:39:27.282356 ignition[968]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:39:27.282356 ignition[968]: INFO : files: files passed
Sep 12 17:39:27.282356 ignition[968]: INFO : Ignition finished successfully
Sep 12 17:39:27.282417 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:39:27.287609 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:39:27.289407 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:39:27.292188 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:39:27.292429 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:39:27.296562 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:39:27.296562 initrd-setup-root-after-ignition[1000]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:39:27.298088 initrd-setup-root-after-ignition[1004]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:39:27.299524 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:39:27.299975 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:39:27.303373 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:39:27.317662 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:39:27.317729 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:39:27.318026 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:39:27.318140 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:39:27.318344 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:39:27.318825 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:39:27.330058 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:39:27.334386 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:39:27.340577 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:39:27.340889 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:39:27.341048 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:39:27.341733 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:39:27.341836 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:39:27.342369 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:39:27.342610 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:39:27.342871 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:39:27.343118 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:39:27.343477 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:39:27.343880 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:39:27.344196 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:39:27.344591 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:39:27.344750 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:39:27.344882 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:39:27.344986 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:39:27.345067 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:39:27.345323 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:39:27.345623 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:39:27.345788 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:39:27.345895 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:39:27.346125 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:39:27.346242 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:39:27.346763 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:39:27.346870 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:39:27.347206 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:39:27.347433 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:39:27.353294 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:39:27.353555 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:39:27.353745 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:39:27.353916 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:39:27.353989 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:39:27.354201 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:39:27.354280 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:39:27.354499 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:39:27.354567 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:39:27.354800 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:39:27.354859 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:39:27.363437 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:39:27.363567 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:39:27.363662 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:39:27.366138 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:39:27.366268 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:39:27.366344 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:39:27.366584 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:39:27.366664 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:39:27.369191 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:39:27.369256 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:39:27.373088 ignition[1024]: INFO : Ignition 2.19.0
Sep 12 17:39:27.376683 ignition[1024]: INFO : Stage: umount
Sep 12 17:39:27.376683 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:39:27.376683 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 12 17:39:27.376683 ignition[1024]: INFO : umount: umount passed
Sep 12 17:39:27.376683 ignition[1024]: INFO : Ignition finished successfully
Sep 12 17:39:27.376671 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:39:27.376738 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:39:27.378688 systemd[1]: Stopped target network.target - Network.
Sep 12 17:39:27.378830 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:39:27.378863 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:39:27.379002 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:39:27.379025 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:39:27.379166 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:39:27.379187 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:39:27.379340 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:39:27.379362 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:39:27.380042 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:39:27.382118 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:39:27.382426 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:39:27.382481 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:39:27.382935 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:39:27.382967 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:39:27.389173 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:39:27.389284 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:39:27.389322 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:39:27.389453 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Sep 12 17:39:27.389476 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Sep 12 17:39:27.389630 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:39:27.390335 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:39:27.390635 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:39:27.391691 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:39:27.393674 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:39:27.393724 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:39:27.393867 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:39:27.393899 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:39:27.394022 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:39:27.394044 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:39:27.399867 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:39:27.400120 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:39:27.401493 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:39:27.401566 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:39:27.402075 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:39:27.402103 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:39:27.402227 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:39:27.402244 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:39:27.402365 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:39:27.402390 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:39:27.402544 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:39:27.402565 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:39:27.402701 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:39:27.402724 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:39:27.405431 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:39:27.405574 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:39:27.405604 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:39:27.405734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:39:27.405756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:39:27.409004 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:39:27.409216 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:39:27.502109 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:39:27.502211 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:39:27.502594 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:39:27.502768 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:39:27.502802 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:39:27.510359 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:39:27.559210 systemd[1]: Switching root.
Sep 12 17:39:27.585384 systemd-journald[216]: Journal stopped
Sep 12 17:39:29.178930 systemd-journald[216]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:39:29.178954 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:39:29.178962 kernel: SELinux: policy capability open_perms=1
Sep 12 17:39:29.178968 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:39:29.178973 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:39:29.178978 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:39:29.178986 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:39:29.178992 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:39:29.178998 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:39:29.179004 kernel: audit: type=1403 audit(1757698768.405:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:39:29.179011 systemd[1]: Successfully loaded SELinux policy in 35.324ms.
Sep 12 17:39:29.179018 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.764ms.
Sep 12 17:39:29.179025 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:39:29.179033 systemd[1]: Detected virtualization vmware.
Sep 12 17:39:29.179040 systemd[1]: Detected architecture x86-64.
Sep 12 17:39:29.179046 systemd[1]: Detected first boot.
Sep 12 17:39:29.179053 systemd[1]: Initializing machine ID from random generator.
Sep 12 17:39:29.179061 zram_generator::config[1084]: No configuration found.
Sep 12 17:39:29.179069 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:39:29.179077 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 12 17:39:29.179084 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Sep 12 17:39:29.179091 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:39:29.179098 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 12 17:39:29.179105 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:39:29.179113 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:39:29.179120 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:39:29.179127 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:39:29.179134 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:39:29.179141 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:39:29.179147 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:39:29.179154 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:39:29.179162 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:39:29.179169 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:39:29.179176 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:39:29.179182 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:39:29.179189 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:39:29.179196 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:39:29.179203 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:39:29.179209 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:39:29.179217 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:39:29.179225 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:39:29.179234 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:39:29.179241 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:39:29.179247 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:39:29.179255 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:39:29.179284 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:39:29.179293 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:39:29.179302 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:39:29.179309 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:39:29.179316 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:39:29.179323 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:39:29.179330 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:39:29.179339 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:39:29.179346 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:39:29.179353 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:39:29.179360 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:39:29.179367 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:39:29.179374 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:39:29.179381 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:39:29.179388 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:39:29.179397 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Sep 12 17:39:29.179404 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:39:29.179411 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:39:29.179418 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:39:29.179425 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:39:29.179433 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:39:29.179440 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:39:29.179447 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:39:29.179455 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:39:29.179464 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 12 17:39:29.179471 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep 12 17:39:29.179478 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:39:29.179485 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:39:29.179492 kernel: loop: module loaded
Sep 12 17:39:29.179498 kernel: ACPI: bus type drm_connector registered
Sep 12 17:39:29.179504 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:39:29.179511 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:39:29.179520 kernel: fuse: init (API version 7.39)
Sep 12 17:39:29.179526 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:39:29.179534 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:39:29.179555 systemd-journald[1200]: Collecting audit messages is disabled.
Sep 12 17:39:29.179573 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:39:29.179581 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:39:29.179589 systemd-journald[1200]: Journal started
Sep 12 17:39:29.179604 systemd-journald[1200]: Runtime Journal (/run/log/journal/995c008b6e254322be7d816e2fdf2002) is 4.8M, max 38.7M, 33.8M free.
Sep 12 17:39:29.179996 jq[1161]: true
Sep 12 17:39:29.181283 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:39:29.183685 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:39:29.184460 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:39:29.184659 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:39:29.184831 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:39:29.185149 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:39:29.190589 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:39:29.191482 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:39:29.191575 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:39:29.191841 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:39:29.191921 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:39:29.192790 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:39:29.192901 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:39:29.193570 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:39:29.193681 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:39:29.193877 jq[1216]: true
Sep 12 17:39:29.194424 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:39:29.194512 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:39:29.194781 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:39:29.194883 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:39:29.195144 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:39:29.195520 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:39:29.195788 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:39:29.212564 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:39:29.218369 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:39:29.231595 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:39:29.231765 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:39:29.238154 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:39:29.244406 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:39:29.244576 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:39:29.252487 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:39:29.252700 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:39:29.258522 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:39:29.259476 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:39:29.263851 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:39:29.264019 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:39:29.270545 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:39:29.270985 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:39:29.272928 systemd-journald[1200]: Time spent on flushing to /var/log/journal/995c008b6e254322be7d816e2fdf2002 is 20.915ms for 1826 entries.
Sep 12 17:39:29.272928 systemd-journald[1200]: System Journal (/var/log/journal/995c008b6e254322be7d816e2fdf2002) is 8.0M, max 584.8M, 576.8M free.
Sep 12 17:39:29.318060 systemd-journald[1200]: Received client request to flush runtime journal.
Sep 12 17:39:29.318972 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:39:29.327757 ignition[1218]: Ignition 2.19.0
Sep 12 17:39:29.350072 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:39:29.327965 ignition[1218]: deleting config from guestinfo properties
Sep 12 17:39:29.373310 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:39:29.375373 ignition[1218]: Successfully deleted config
Sep 12 17:39:29.379210 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Sep 12 17:39:29.381777 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:39:29.385016 udevadm[1263]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 17:39:29.387422 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Sep 12 17:39:29.387435 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Sep 12 17:39:29.390785 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:39:29.404437 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:39:29.425111 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:39:29.431429 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:39:29.440350 systemd-tmpfiles[1275]: ACLs are not supported, ignoring.
Sep 12 17:39:29.440363 systemd-tmpfiles[1275]: ACLs are not supported, ignoring.
Sep 12 17:39:29.443179 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:39:29.963571 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:39:29.969435 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:39:29.985991 systemd-udevd[1280]: Using default interface naming scheme 'v255'.
Sep 12 17:39:30.007207 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:39:30.015395 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:39:30.031386 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:39:30.059170 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Sep 12 17:39:30.082070 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:39:30.092348 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 12 17:39:30.100269 kernel: ACPI: button: Power Button [PWRF]
Sep 12 17:39:30.143201 systemd-networkd[1285]: lo: Link UP
Sep 12 17:39:30.143207 systemd-networkd[1285]: lo: Gained carrier
Sep 12 17:39:30.144035 systemd-networkd[1285]: Enumeration completed
Sep 12 17:39:30.144108 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:39:30.146209 systemd-networkd[1285]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Sep 12 17:39:30.148532 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Sep 12 17:39:30.148663 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Sep 12 17:39:30.149035 systemd-networkd[1285]: ens192: Link UP
Sep 12 17:39:30.149179 systemd-networkd[1285]: ens192: Gained carrier
Sep 12 17:39:30.151403 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:39:30.169387 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1293)
Sep 12 17:39:30.191314 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Sep 12 17:39:30.195275 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Sep 12 17:39:30.205275 kernel: Guest personality initialized and is active
Sep 12 17:39:30.210551 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 17:39:30.210591 kernel: Initialized host personality
Sep 12 17:39:30.222308 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Sep 12 17:39:30.233298 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Sep 12 17:39:30.240991 (udev-worker)[1289]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Sep 12 17:39:30.249441 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:39:30.264479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:39:30.270944 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:39:30.274341 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:39:30.290944 lvm[1321]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:39:30.317903 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:39:30.318177 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:39:30.325468 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:39:30.328589 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:39:30.338170 lvm[1328]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:39:30.360814 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:39:30.361085 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:39:30.361229 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:39:30.361244 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:39:30.361378 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:39:30.362158 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:39:30.367414 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:39:30.370365 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:39:30.370633 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:39:30.372371 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:39:30.374405 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 17:39:30.377411 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:39:30.378012 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:39:30.390430 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:39:30.398276 kernel: loop0: detected capacity change from 0 to 2976
Sep 12 17:39:30.435407 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:39:30.435936 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 17:39:30.560281 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:39:30.596279 kernel: loop1: detected capacity change from 0 to 140768
Sep 12 17:39:30.642307 kernel: loop2: detected capacity change from 0 to 142488
Sep 12 17:39:30.678289 kernel: loop3: detected capacity change from 0 to 221472
Sep 12 17:39:30.720354 kernel: loop4: detected capacity change from 0 to 2976
Sep 12 17:39:30.831280 kernel: loop5: detected capacity change from 0 to 140768
Sep 12 17:39:30.859327 kernel: loop6: detected capacity change from 0 to 142488
Sep 12 17:39:30.874276 kernel: loop7: detected capacity change from 0 to 221472
Sep 12 17:39:30.890485 (sd-merge)[1351]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Sep 12 17:39:30.890797 (sd-merge)[1351]: Merged extensions into '/usr'.
Sep 12 17:39:30.894008 systemd[1]: Reloading requested from client PID 1338 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:39:30.894018 systemd[1]: Reloading...
Sep 12 17:39:30.936281 zram_generator::config[1379]: No configuration found.
Sep 12 17:39:31.014143 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 12 17:39:31.029268 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:39:31.065820 systemd[1]: Reloading finished in 171 ms.
Sep 12 17:39:31.072323 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:39:31.077404 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:39:31.078291 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:39:31.082233 systemd[1]: Reloading requested from client PID 1440 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:39:31.082239 systemd[1]: Reloading...
Sep 12 17:39:31.091196 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:39:31.091428 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:39:31.091945 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:39:31.092112 systemd-tmpfiles[1441]: ACLs are not supported, ignoring.
Sep 12 17:39:31.092149 systemd-tmpfiles[1441]: ACLs are not supported, ignoring.
Sep 12 17:39:31.113294 zram_generator::config[1469]: No configuration found.
Sep 12 17:39:31.125744 systemd-tmpfiles[1441]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:39:31.125751 systemd-tmpfiles[1441]: Skipping /boot
Sep 12 17:39:31.131145 systemd-tmpfiles[1441]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:39:31.131427 systemd-tmpfiles[1441]: Skipping /boot
Sep 12 17:39:31.186626 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 12 17:39:31.206084 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:39:31.253820 systemd[1]: Reloading finished in 171 ms.
Sep 12 17:39:31.267921 ldconfig[1334]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:39:31.267672 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:39:31.274678 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:39:31.281337 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:39:31.283406 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:39:31.287785 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:39:31.291419 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:39:31.292680 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:39:31.293994 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:39:31.299471 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:39:31.307471 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:39:31.312919 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:39:31.313224 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:39:31.313313 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:39:31.314559 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:39:31.314679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:39:31.323302 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:39:31.335351 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:39:31.335533 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:39:31.335630 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:39:31.336242 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:39:31.336654 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:39:31.336748 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:39:31.337130 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:39:31.337209 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:39:31.340673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:39:31.343450 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:39:31.348600 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:39:31.352466 augenrules[1572]: No rules
Sep 12 17:39:31.363467 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:39:31.364242 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:39:31.369388 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:39:31.370183 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:39:31.370495 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:39:31.376426 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:39:31.376557 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:39:31.377330 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:39:31.377695 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:39:31.378111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:39:31.378198 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:39:31.380771 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:39:31.380858 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:39:31.381179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:39:31.381268 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:39:31.382918 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:39:31.385847 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:39:31.385940 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:39:31.390401 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:39:31.390447 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:39:31.394532 systemd-resolved[1540]: Positive Trust Anchors:
Sep 12 17:39:31.394622 systemd-resolved[1540]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:39:31.394646 systemd-resolved[1540]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:39:31.395510 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 17:39:31.399021 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:39:31.417714 systemd-resolved[1540]: Defaulting to hostname 'linux'.
Sep 12 17:39:31.418966 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:39:31.419188 systemd[1]: Reached target network.target - Network.
Sep 12 17:39:31.419335 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:39:31.434139 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 17:39:31.434370 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:39:31.511347 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:39:31.511851 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:39:31.511878 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:39:31.512059 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:39:31.512202 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:39:31.512434 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:39:31.512593 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:39:31.512753 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:39:31.512886 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:39:31.512912 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:39:31.513010 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:39:31.513914 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:39:31.515129 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:39:31.516165 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:39:31.521085 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:39:31.521242 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:39:31.521358 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:39:31.521553 systemd[1]: System is tainted: cgroupsv1
Sep 12 17:39:31.521574 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:39:31.521589 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:39:31.524127 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:39:31.526200 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:39:31.530335 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:39:31.531385 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:39:31.531513 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:39:31.541468 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:39:31.542642 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:39:31.546358 jq[1609]: false
Sep 12 17:39:31.549605 dbus-daemon[1607]: [system] SELinux support is enabled
Sep 12 17:39:31.549385 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:39:31.556487 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:41:04.520081 systemd-resolved[1540]: Clock change detected. Flushing caches.
Sep 12 17:41:04.520152 systemd-timesyncd[1598]: Contacted time server 172.233.177.198:123 (0.flatcar.pool.ntp.org).
Sep 12 17:41:04.520194 systemd-timesyncd[1598]: Initial clock synchronization to Fri 2025-09-12 17:41:04.520050 UTC.
Sep 12 17:41:04.527500 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:41:04.528180 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:41:04.531706 extend-filesystems[1610]: Found loop4
Sep 12 17:41:04.532053 extend-filesystems[1610]: Found loop5
Sep 12 17:41:04.532200 extend-filesystems[1610]: Found loop6
Sep 12 17:41:04.532327 extend-filesystems[1610]: Found loop7
Sep 12 17:41:04.532451 extend-filesystems[1610]: Found sda
Sep 12 17:41:04.532753 extend-filesystems[1610]: Found sda1
Sep 12 17:41:04.532753 extend-filesystems[1610]: Found sda2
Sep 12 17:41:04.532753 extend-filesystems[1610]: Found sda3
Sep 12 17:41:04.532753 extend-filesystems[1610]: Found usr
Sep 12 17:41:04.534861 extend-filesystems[1610]: Found sda4
Sep 12 17:41:04.534861 extend-filesystems[1610]: Found sda6
Sep 12 17:41:04.534861 extend-filesystems[1610]: Found sda7
Sep 12 17:41:04.534861 extend-filesystems[1610]: Found sda9
Sep 12 17:41:04.534861 extend-filesystems[1610]: Checking size of /dev/sda9
Sep 12 17:41:04.533315 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:41:04.544997 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:41:04.548159 update_engine[1622]: I20250912 17:41:04.548105 1622 main.cc:92] Flatcar Update Engine starting
Sep 12 17:41:04.554793 update_engine[1622]: I20250912 17:41:04.550066 1622 update_check_scheduler.cc:74] Next update check in 6m32s
Sep 12 17:41:04.552509 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Sep 12 17:41:04.552924 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:41:04.554925 jq[1624]: true
Sep 12 17:41:04.559262 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:41:04.559412 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:41:04.559572 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:41:04.559711 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:41:04.565230 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:41:04.565374 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:41:04.572943 extend-filesystems[1610]: Old size kept for /dev/sda9
Sep 12 17:41:04.572943 extend-filesystems[1610]: Found sr0
Sep 12 17:41:04.573407 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:41:04.573573 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 17:41:04.580544 jq[1640]: true
Sep 12 17:41:04.587590 (ntainerd)[1641]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:41:04.589756 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:41:04.589789 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:41:04.593152 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:41:04.593169 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:41:04.593523 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:41:04.596178 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:41:04.602824 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:41:04.605168 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Sep 12 17:41:04.609853 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Sep 12 17:41:04.626695 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1282)
Sep 12 17:41:04.627831 tar[1639]: linux-amd64/helm
Sep 12 17:41:04.653468 systemd-logind[1619]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 12 17:41:04.655298 systemd-logind[1619]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 17:41:04.655435 systemd-logind[1619]: New seat seat0.
Sep 12 17:41:04.662845 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Sep 12 17:41:04.663145 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:41:04.689926 unknown[1664]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Sep 12 17:41:04.692503 bash[1680]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:41:04.693586 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:41:04.694735 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 17:41:04.700976 unknown[1664]: Core dump limit set to -1
Sep 12 17:41:04.712751 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 17:41:04.765903 locksmithd[1659]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:41:04.848990 sshd_keygen[1649]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:41:04.873806 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 17:41:04.880864 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 17:41:04.888803 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 17:41:04.888953 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 17:41:04.895828 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 17:41:04.911792 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 17:41:04.921450 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 17:41:04.923240 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 17:41:04.923953 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 17:41:04.956098 containerd[1641]: time="2025-09-12T17:41:04.955997980Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 12 17:41:04.985042 containerd[1641]: time="2025-09-12T17:41:04.984825438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:41:04.985955 containerd[1641]: time="2025-09-12T17:41:04.985816267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:41:04.985955 containerd[1641]: time="2025-09-12T17:41:04.985835264Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 17:41:04.985955 containerd[1641]: time="2025-09-12T17:41:04.985845253Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 17:41:04.985955 containerd[1641]: time="2025-09-12T17:41:04.985938090Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 17:41:04.985955 containerd[1641]: time="2025-09-12T17:41:04.985948349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 17:41:04.986060 containerd[1641]: time="2025-09-12T17:41:04.985992363Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:41:04.986060 containerd[1641]: time="2025-09-12T17:41:04.986001114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:41:04.986283 containerd[1641]: time="2025-09-12T17:41:04.986136563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:41:04.986283 containerd[1641]: time="2025-09-12T17:41:04.986152389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 17:41:04.986283 containerd[1641]: time="2025-09-12T17:41:04.986165333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:41:04.986283 containerd[1641]: time="2025-09-12T17:41:04.986173533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 17:41:04.986283 containerd[1641]: time="2025-09-12T17:41:04.986217632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:41:04.986565 containerd[1641]: time="2025-09-12T17:41:04.986368127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:41:04.986565 containerd[1641]: time="2025-09-12T17:41:04.986457695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:41:04.986565 containerd[1641]: time="2025-09-12T17:41:04.986466970Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 17:41:04.986565 containerd[1641]: time="2025-09-12T17:41:04.986513869Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 17:41:04.986565 containerd[1641]: time="2025-09-12T17:41:04.986544056Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:41:04.987821 systemd-networkd[1285]: ens192: Gained IPv6LL
Sep 12 17:41:04.989157 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 17:41:04.989563 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 17:41:04.994889 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
Sep 12 17:41:05.001812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:41:05.003796 containerd[1641]: time="2025-09-12T17:41:05.003774873Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 17:41:05.003840 containerd[1641]: time="2025-09-12T17:41:05.003812692Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 17:41:05.003840 containerd[1641]: time="2025-09-12T17:41:05.003823697Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 17:41:05.003840 containerd[1641]: time="2025-09-12T17:41:05.003833003Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 17:41:05.003895 containerd[1641]: time="2025-09-12T17:41:05.003843113Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 17:41:05.003940 containerd[1641]: time="2025-09-12T17:41:05.003929157Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 12 17:41:05.004116 containerd[1641]: time="2025-09-12T17:41:05.004104707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 12 17:41:05.004174 containerd[1641]: time="2025-09-12T17:41:05.004163040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 12 17:41:05.004196 containerd[1641]: time="2025-09-12T17:41:05.004175084Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 12 17:41:05.004196 containerd[1641]: time="2025-09-12T17:41:05.004184016Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 12 17:41:05.004196 containerd[1641]: time="2025-09-12T17:41:05.004191678Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 12 17:41:05.004256 containerd[1641]: time="2025-09-12T17:41:05.004198950Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 12 17:41:05.004256 containerd[1641]: time="2025-09-12T17:41:05.004205696Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 12 17:41:05.004256 containerd[1641]: time="2025-09-12T17:41:05.004213963Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 12 17:41:05.004256 containerd[1641]: time="2025-09-12T17:41:05.004222029Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 12 17:41:05.004256 containerd[1641]: time="2025-09-12T17:41:05.004229110Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 12 17:41:05.004256 containerd[1641]: time="2025-09-12T17:41:05.004236136Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 12 17:41:05.004256 containerd[1641]: time="2025-09-12T17:41:05.004242485Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 12 17:41:05.004548 containerd[1641]: time="2025-09-12T17:41:05.004257278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.004548 containerd[1641]: time="2025-09-12T17:41:05.004265362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.004548 containerd[1641]: time="2025-09-12T17:41:05.004272365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.004548 containerd[1641]: time="2025-09-12T17:41:05.004279866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.004548 containerd[1641]: time="2025-09-12T17:41:05.004286324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.004548 containerd[1641]: time="2025-09-12T17:41:05.004293632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.004548 containerd[1641]: time="2025-09-12T17:41:05.004300676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.004548 containerd[1641]: time="2025-09-12T17:41:05.004307560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.004548 containerd[1641]: time="2025-09-12T17:41:05.004314854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.004548 containerd[1641]: time="2025-09-12T17:41:05.004324692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.004548 containerd[1641]: time="2025-09-12T17:41:05.004331914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.004548 containerd[1641]: time="2025-09-12T17:41:05.004338811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.004548 containerd[1641]: time="2025-09-12T17:41:05.004345502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.004548 containerd[1641]: time="2025-09-12T17:41:05.004353702Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 12 17:41:05.004548 containerd[1641]: time="2025-09-12T17:41:05.004366850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.005125 containerd[1641]: time="2025-09-12T17:41:05.004373528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.005125 containerd[1641]: time="2025-09-12T17:41:05.004379660Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 12 17:41:05.005125 containerd[1641]: time="2025-09-12T17:41:05.004404933Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 12 17:41:05.005125 containerd[1641]: time="2025-09-12T17:41:05.004415455Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 12 17:41:05.005125 containerd[1641]: time="2025-09-12T17:41:05.004421574Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 12 17:41:05.005125 containerd[1641]: time="2025-09-12T17:41:05.004428217Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 12 17:41:05.005125 containerd[1641]: time="2025-09-12T17:41:05.004433641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.005125 containerd[1641]: time="2025-09-12T17:41:05.004440477Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 12 17:41:05.005125 containerd[1641]: time="2025-09-12T17:41:05.004448710Z" level=info msg="NRI interface is disabled by configuration."
Sep 12 17:41:05.005125 containerd[1641]: time="2025-09-12T17:41:05.004454242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 12 17:41:05.005405 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 17:41:05.007899 containerd[1641]: time="2025-09-12T17:41:05.004666252Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:41:05.007899 containerd[1641]: time="2025-09-12T17:41:05.004701687Z" level=info msg="Connect containerd service" Sep 12 17:41:05.007899 containerd[1641]: time="2025-09-12T17:41:05.004725384Z" level=info msg="using legacy CRI server" Sep 12 17:41:05.007899 containerd[1641]: time="2025-09-12T17:41:05.004729745Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:41:05.007899 containerd[1641]: time="2025-09-12T17:41:05.004779953Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:41:05.007899 containerd[1641]: time="2025-09-12T17:41:05.005055765Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:41:05.007899 containerd[1641]: time="2025-09-12T17:41:05.007206380Z" level=info msg="Start subscribing containerd event" Sep 12 17:41:05.007899 containerd[1641]: time="2025-09-12T17:41:05.007236176Z" level=info msg="Start recovering state" Sep 12 17:41:05.007899 containerd[1641]: time="2025-09-12T17:41:05.007280508Z" level=info msg="Start event monitor" Sep 12 17:41:05.007899 containerd[1641]: time="2025-09-12T17:41:05.007288660Z" level=info msg="Start snapshots syncer" Sep 
12 17:41:05.007899 containerd[1641]: time="2025-09-12T17:41:05.007294465Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:41:05.007899 containerd[1641]: time="2025-09-12T17:41:05.007302017Z" level=info msg="Start streaming server" Sep 12 17:41:05.007899 containerd[1641]: time="2025-09-12T17:41:05.007550915Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:41:05.013978 containerd[1641]: time="2025-09-12T17:41:05.008688423Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:41:05.013978 containerd[1641]: time="2025-09-12T17:41:05.010172138Z" level=info msg="containerd successfully booted in 0.054957s" Sep 12 17:41:05.009752 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:41:05.056146 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:41:05.057213 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:41:05.057374 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Sep 12 17:41:05.059918 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:41:05.146503 tar[1639]: linux-amd64/LICENSE Sep 12 17:41:05.146503 tar[1639]: linux-amd64/README.md Sep 12 17:41:05.158039 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:41:06.400384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:41:06.400761 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:41:06.401277 systemd[1]: Startup finished in 6.770s (kernel) + 5.069s (userspace) = 11.839s. 
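The containerd entries above are logfmt-style records (`time=...`, `level=...`, `msg=...`). A minimal sketch for pulling those fields out of one such journal line, assuming only the quoted/bare key=value shape visible in this log (this is an illustration for reading these entries, not containerd's own format specification):

```python
import re

# Matches key=value pairs where the value is either a double-quoted string
# (with escaped quotes allowed) or a bare token, as seen in the containerd lines.
FIELD_RE = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_logfmt(line: str) -> dict:
    """Return a dict of key -> unquoted value for a containerd-style log line."""
    fields = {}
    for key, raw in FIELD_RE.findall(line):
        if raw.startswith('"') and raw.endswith('"'):
            raw = raw[1:-1].replace('\\"', '"')
        fields[key] = raw
    return fields

line = ('time="2025-09-12T17:41:05.010172138Z" level=info '
        'msg="containerd successfully booted in 0.054957s"')
parsed = parse_logfmt(line)
# parsed["level"] == "info"; parsed["msg"] carries the boot-duration message
```

Filtering on `parsed["level"]` is enough to separate the one `level=error` CNI line above from the surrounding `level=info` noise.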
Sep 12 17:41:06.404419 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:41:06.485699 login[1732]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 17:41:06.488123 login[1735]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 17:41:06.497261 systemd-logind[1619]: New session 2 of user core. Sep 12 17:41:06.497487 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:41:06.503889 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:41:06.507964 systemd-logind[1619]: New session 1 of user core. Sep 12 17:41:06.515871 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:41:06.523043 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:41:06.527466 (systemd)[1821]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:41:06.694200 systemd[1821]: Queued start job for default target default.target. Sep 12 17:41:06.694411 systemd[1821]: Created slice app.slice - User Application Slice. Sep 12 17:41:06.694424 systemd[1821]: Reached target paths.target - Paths. Sep 12 17:41:06.694433 systemd[1821]: Reached target timers.target - Timers. Sep 12 17:41:06.702724 systemd[1821]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:41:06.706777 systemd[1821]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:41:06.706812 systemd[1821]: Reached target sockets.target - Sockets. Sep 12 17:41:06.706821 systemd[1821]: Reached target basic.target - Basic System. Sep 12 17:41:06.706842 systemd[1821]: Reached target default.target - Main User Target. Sep 12 17:41:06.706858 systemd[1821]: Startup finished in 175ms. Sep 12 17:41:06.707729 systemd[1]: Started user@500.service - User Manager for UID 500. 
Sep 12 17:41:06.710305 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:41:06.711645 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:41:07.766108 kubelet[1813]: E0912 17:41:07.766065 1813 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:41:07.767253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:41:07.767345 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:41:12.873446 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:41:12.881798 systemd[1]: Started sshd@0-139.178.70.102:22-185.156.73.233:46252.service - OpenSSH per-connection server daemon (185.156.73.233:46252). Sep 12 17:41:14.949334 sshd[1863]: Connection closed by authenticating user root 185.156.73.233 port 46252 [preauth] Sep 12 17:41:14.950694 systemd[1]: sshd@0-139.178.70.102:22-185.156.73.233:46252.service: Deactivated successfully. Sep 12 17:41:17.949030 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:41:17.958839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:18.307770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:41:18.307913 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:41:18.388904 kubelet[1880]: E0912 17:41:18.388859 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:41:18.391551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:41:18.391791 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:41:28.449125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:41:28.458776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:28.792771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:41:28.794791 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:41:28.826269 kubelet[1901]: E0912 17:41:28.826233 1901 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:41:28.827480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:41:28.827573 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:41:34.821879 systemd[1]: Started sshd@1-139.178.70.102:22-139.178.89.65:39498.service - OpenSSH per-connection server daemon (139.178.89.65:39498). 
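The kubelet failures above repeat the same root cause: `open /var/lib/kubelet/config.yaml: no such file or directory`. A hedged sketch that recovers the missing path from such a journal message, so a monitoring script could surface it; the message shape is taken from these log lines, not from any stable kubelet API:

```python
import re

# The "open <path>: no such file or directory" fragment is the errno text
# embedded in the kubelet run.go error string seen in the journal above.
MISSING_RE = re.compile(r'open (\S+): no such file or directory')

def missing_path(journal_msg: str):
    """Return the missing file path named in a kubelet failure message, or None."""
    m = MISSING_RE.search(journal_msg)
    return m.group(1) if m else None

msg = ('"command failed" err="failed to load kubelet config file, '
       'path: /var/lib/kubelet/config.yaml, error: '
       'open /var/lib/kubelet/config.yaml: no such file or directory"')
# missing_path(msg) -> "/var/lib/kubelet/config.yaml"
```

On a node like this one, that path is normally written by `kubeadm init`/`kubeadm join`, which is consistent with the loop continuing until the node is joined.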
Sep 12 17:41:34.847617 sshd[1908]: Accepted publickey for core from 139.178.89.65 port 39498 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:41:34.848545 sshd[1908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:34.851695 systemd-logind[1619]: New session 3 of user core. Sep 12 17:41:34.861913 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:41:34.913125 systemd[1]: Started sshd@2-139.178.70.102:22-139.178.89.65:39506.service - OpenSSH per-connection server daemon (139.178.89.65:39506). Sep 12 17:41:34.941841 sshd[1913]: Accepted publickey for core from 139.178.89.65 port 39506 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:41:34.942924 sshd[1913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:34.946057 systemd-logind[1619]: New session 4 of user core. Sep 12 17:41:34.950794 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:41:34.999755 sshd[1913]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:35.004826 systemd[1]: Started sshd@3-139.178.70.102:22-139.178.89.65:39512.service - OpenSSH per-connection server daemon (139.178.89.65:39512). Sep 12 17:41:35.005076 systemd[1]: sshd@2-139.178.70.102:22-139.178.89.65:39506.service: Deactivated successfully. Sep 12 17:41:35.008296 systemd-logind[1619]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:41:35.008745 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:41:35.009816 systemd-logind[1619]: Removed session 4. Sep 12 17:41:35.031938 sshd[1918]: Accepted publickey for core from 139.178.89.65 port 39512 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:41:35.032216 sshd[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:35.034805 systemd-logind[1619]: New session 5 of user core. 
Sep 12 17:41:35.042873 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:41:35.089691 sshd[1918]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:35.096009 systemd[1]: Started sshd@4-139.178.70.102:22-139.178.89.65:39522.service - OpenSSH per-connection server daemon (139.178.89.65:39522). Sep 12 17:41:35.096288 systemd[1]: sshd@3-139.178.70.102:22-139.178.89.65:39512.service: Deactivated successfully. Sep 12 17:41:35.099045 systemd-logind[1619]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:41:35.099522 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:41:35.101066 systemd-logind[1619]: Removed session 5. Sep 12 17:41:35.119412 sshd[1926]: Accepted publickey for core from 139.178.89.65 port 39522 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:41:35.120265 sshd[1926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:35.123448 systemd-logind[1619]: New session 6 of user core. Sep 12 17:41:35.128841 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:41:35.178855 sshd[1926]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:35.186841 systemd[1]: Started sshd@5-139.178.70.102:22-139.178.89.65:39528.service - OpenSSH per-connection server daemon (139.178.89.65:39528). Sep 12 17:41:35.188947 systemd[1]: sshd@4-139.178.70.102:22-139.178.89.65:39522.service: Deactivated successfully. Sep 12 17:41:35.189819 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:41:35.191118 systemd-logind[1619]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:41:35.192166 systemd-logind[1619]: Removed session 6. 
Sep 12 17:41:35.212924 sshd[1934]: Accepted publickey for core from 139.178.89.65 port 39528 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:41:35.213839 sshd[1934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:35.216756 systemd-logind[1619]: New session 7 of user core. Sep 12 17:41:35.222856 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:41:35.308515 sudo[1941]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:41:35.308767 sudo[1941]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:41:35.318453 sudo[1941]: pam_unix(sudo:session): session closed for user root Sep 12 17:41:35.319811 sshd[1934]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:35.326104 systemd[1]: Started sshd@6-139.178.70.102:22-139.178.89.65:39530.service - OpenSSH per-connection server daemon (139.178.89.65:39530). Sep 12 17:41:35.326441 systemd[1]: sshd@5-139.178.70.102:22-139.178.89.65:39528.service: Deactivated successfully. Sep 12 17:41:35.327526 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:41:35.329011 systemd-logind[1619]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:41:35.331116 systemd-logind[1619]: Removed session 7. Sep 12 17:41:35.356407 sshd[1943]: Accepted publickey for core from 139.178.89.65 port 39530 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:41:35.357145 sshd[1943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:35.360253 systemd-logind[1619]: New session 8 of user core. Sep 12 17:41:35.365797 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 17:41:35.414237 sudo[1951]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:41:35.414608 sudo[1951]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:41:35.416613 sudo[1951]: pam_unix(sudo:session): session closed for user root Sep 12 17:41:35.419778 sudo[1950]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:41:35.419946 sudo[1950]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:41:35.427798 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:41:35.429119 auditctl[1954]: No rules Sep 12 17:41:35.429618 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:41:35.429768 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:41:35.432923 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:41:35.447332 augenrules[1973]: No rules Sep 12 17:41:35.448247 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:41:35.449000 sudo[1950]: pam_unix(sudo:session): session closed for user root Sep 12 17:41:35.450771 sshd[1943]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:35.462815 systemd[1]: Started sshd@7-139.178.70.102:22-139.178.89.65:39540.service - OpenSSH per-connection server daemon (139.178.89.65:39540). Sep 12 17:41:35.463104 systemd[1]: sshd@6-139.178.70.102:22-139.178.89.65:39530.service: Deactivated successfully. Sep 12 17:41:35.463917 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:41:35.465457 systemd-logind[1619]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:41:35.466917 systemd-logind[1619]: Removed session 8. 
Sep 12 17:41:35.485188 sshd[1979]: Accepted publickey for core from 139.178.89.65 port 39540 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:41:35.485945 sshd[1979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:35.488204 systemd-logind[1619]: New session 9 of user core. Sep 12 17:41:35.502878 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:41:35.552758 sudo[1986]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:41:35.553199 sudo[1986]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:41:35.833963 (dockerd)[2002]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:41:35.834282 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:41:36.131963 dockerd[2002]: time="2025-09-12T17:41:36.131632834Z" level=info msg="Starting up" Sep 12 17:41:36.192283 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2260789841-merged.mount: Deactivated successfully. Sep 12 17:41:36.452312 dockerd[2002]: time="2025-09-12T17:41:36.452284367Z" level=info msg="Loading containers: start." Sep 12 17:41:36.551666 kernel: Initializing XFRM netlink socket Sep 12 17:41:36.600354 systemd-networkd[1285]: docker0: Link UP Sep 12 17:41:36.608542 dockerd[2002]: time="2025-09-12T17:41:36.608513991Z" level=info msg="Loading containers: done." 
Sep 12 17:41:36.617087 dockerd[2002]: time="2025-09-12T17:41:36.617055689Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:41:36.617165 dockerd[2002]: time="2025-09-12T17:41:36.617130307Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:41:36.617212 dockerd[2002]: time="2025-09-12T17:41:36.617199766Z" level=info msg="Daemon has completed initialization" Sep 12 17:41:36.632540 dockerd[2002]: time="2025-09-12T17:41:36.632494454Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:41:36.634723 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:41:37.393138 containerd[1641]: time="2025-09-12T17:41:37.393114563Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:41:37.920466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1051619024.mount: Deactivated successfully. 
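The timing figures in these logs mix Go-style duration units ("0.054957s" for the containerd boot, "573.205867ms" and "1.342770586s" for the image pulls that follow). A small helper to normalize them to float seconds for comparison; the unit table is an assumption covering only the suffixes that appear in this log, not a full Go `time.ParseDuration` reimplementation:

```python
import re

# Accepts a bare number followed by one of the unit suffixes seen in the log.
DUR_RE = re.compile(r'^([0-9.]+)(ms|s|m)$')
SCALE = {"ms": 1e-3, "s": 1.0, "m": 60.0}

def to_seconds(dur: str) -> float:
    """Convert a simple Go-style duration string (e.g. '573.205867ms') to seconds."""
    m = DUR_RE.match(dur)
    if not m:
        raise ValueError(f"unrecognized duration: {dur!r}")
    return float(m.group(1)) * SCALE[m.group(2)]

# to_seconds("573.205867ms") ≈ 0.573206
# to_seconds("1.342770586s") ≈ 1.342771
```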
Sep 12 17:41:38.732755 containerd[1641]: time="2025-09-12T17:41:38.732087007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:38.732755 containerd[1641]: time="2025-09-12T17:41:38.732594525Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 17:41:38.733469 containerd[1641]: time="2025-09-12T17:41:38.733450472Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:38.735223 containerd[1641]: time="2025-09-12T17:41:38.735204893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:38.735930 containerd[1641]: time="2025-09-12T17:41:38.735908344Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.342770586s" Sep 12 17:41:38.735977 containerd[1641]: time="2025-09-12T17:41:38.735932747Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 17:41:38.736347 containerd[1641]: time="2025-09-12T17:41:38.736325343Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:41:38.949025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Sep 12 17:41:38.963842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:39.109753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:41:39.112336 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:41:39.137500 kubelet[2209]: E0912 17:41:39.137439 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:41:39.138496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:41:39.138592 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:41:40.331730 containerd[1641]: time="2025-09-12T17:41:40.331180481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:40.331730 containerd[1641]: time="2025-09-12T17:41:40.331658493Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 17:41:40.332097 containerd[1641]: time="2025-09-12T17:41:40.332047582Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:40.334545 containerd[1641]: time="2025-09-12T17:41:40.334524522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:40.335302 containerd[1641]: 
time="2025-09-12T17:41:40.335282897Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.598937877s" Sep 12 17:41:40.335443 containerd[1641]: time="2025-09-12T17:41:40.335360916Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 17:41:40.335863 containerd[1641]: time="2025-09-12T17:41:40.335704426Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:41:41.340795 containerd[1641]: time="2025-09-12T17:41:41.339922499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:41.345876 containerd[1641]: time="2025-09-12T17:41:41.345773734Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 17:41:41.346191 containerd[1641]: time="2025-09-12T17:41:41.346165378Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:41.348720 containerd[1641]: time="2025-09-12T17:41:41.348696489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:41.349725 containerd[1641]: time="2025-09-12T17:41:41.349701969Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id 
\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.01397858s" Sep 12 17:41:41.349805 containerd[1641]: time="2025-09-12T17:41:41.349790997Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 17:41:41.350179 containerd[1641]: time="2025-09-12T17:41:41.350158815Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:41:42.258126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1941185978.mount: Deactivated successfully. Sep 12 17:41:42.752173 containerd[1641]: time="2025-09-12T17:41:42.751627644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:42.757537 containerd[1641]: time="2025-09-12T17:41:42.757499359Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 17:41:42.763066 containerd[1641]: time="2025-09-12T17:41:42.763039633Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:42.769505 containerd[1641]: time="2025-09-12T17:41:42.769482649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:42.769906 containerd[1641]: time="2025-09-12T17:41:42.769881173Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.41969928s" Sep 12 17:41:42.769954 containerd[1641]: time="2025-09-12T17:41:42.769908697Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 17:41:42.770355 containerd[1641]: time="2025-09-12T17:41:42.770334770Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:41:43.245300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3034264949.mount: Deactivated successfully. Sep 12 17:41:44.344552 containerd[1641]: time="2025-09-12T17:41:44.344515328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:44.351557 containerd[1641]: time="2025-09-12T17:41:44.351520916Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 17:41:44.356177 containerd[1641]: time="2025-09-12T17:41:44.356140646Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:44.359698 containerd[1641]: time="2025-09-12T17:41:44.359667622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:44.360285 containerd[1641]: time="2025-09-12T17:41:44.360203005Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.589846819s"
Sep 12 17:41:44.360285 containerd[1641]: time="2025-09-12T17:41:44.360226614Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 17:41:44.360994 containerd[1641]: time="2025-09-12T17:41:44.360871067Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:41:44.890786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482273903.mount: Deactivated successfully.
Sep 12 17:41:44.931234 containerd[1641]: time="2025-09-12T17:41:44.931187397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:44.931677 containerd[1641]: time="2025-09-12T17:41:44.931635748Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 17:41:44.932794 containerd[1641]: time="2025-09-12T17:41:44.931922050Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:44.933377 containerd[1641]: time="2025-09-12T17:41:44.933355799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:44.934123 containerd[1641]: time="2025-09-12T17:41:44.934105532Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 573.205867ms"
Sep 12 17:41:44.934207 containerd[1641]: time="2025-09-12T17:41:44.934194141Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 17:41:44.934557 containerd[1641]: time="2025-09-12T17:41:44.934532176Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 12 17:41:45.510217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3624672769.mount: Deactivated successfully.
Sep 12 17:41:47.079667 containerd[1641]: time="2025-09-12T17:41:47.079606972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:47.085471 containerd[1641]: time="2025-09-12T17:41:47.085425298Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709"
Sep 12 17:41:47.090593 containerd[1641]: time="2025-09-12T17:41:47.090559570Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:47.401539 containerd[1641]: time="2025-09-12T17:41:47.401448498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:41:47.402172 containerd[1641]: time="2025-09-12T17:41:47.402080200Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.467434176s"
Sep 12 17:41:47.402172 containerd[1641]: time="2025-09-12T17:41:47.402097270Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 12 17:41:49.198935 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 12 17:41:49.207774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:41:49.532129 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 17:41:49.532191 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 17:41:49.532361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:41:49.539069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:41:49.558816 systemd[1]: Reloading requested from client PID 2379 ('systemctl') (unit session-9.scope)...
Sep 12 17:41:49.558825 systemd[1]: Reloading...
Sep 12 17:41:49.569716 update_engine[1622]: I20250912 17:41:49.569677 1622 update_attempter.cc:509] Updating boot flags...
Sep 12 17:41:49.624786 zram_generator::config[2427]: No configuration found.
Sep 12 17:41:49.634663 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2433)
Sep 12 17:41:49.699428 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 12 17:41:49.717434 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:41:49.749684 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2423)
Sep 12 17:41:49.777613 systemd[1]: Reloading finished in 218 ms.
Sep 12 17:41:49.821758 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 17:41:49.821805 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 17:41:49.821958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:41:49.835952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:41:50.232406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:41:50.234869 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:41:50.324839 kubelet[2512]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:41:50.325507 kubelet[2512]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:41:50.325507 kubelet[2512]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:41:50.325507 kubelet[2512]: I0912 17:41:50.324933 2512 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:41:50.730682 kubelet[2512]: I0912 17:41:50.730585 2512 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 12 17:41:50.730682 kubelet[2512]: I0912 17:41:50.730605 2512 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:41:50.731656 kubelet[2512]: I0912 17:41:50.730949 2512 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 12 17:41:50.773531 kubelet[2512]: I0912 17:41:50.773508 2512 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:41:50.775278 kubelet[2512]: E0912 17:41:50.775263 2512 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:50.787873 kubelet[2512]: E0912 17:41:50.787836 2512 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:41:50.787873 kubelet[2512]: I0912 17:41:50.787871 2512 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:41:50.791997 kubelet[2512]: I0912 17:41:50.791979 2512 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:41:50.795890 kubelet[2512]: I0912 17:41:50.795867 2512 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 12 17:41:50.796011 kubelet[2512]: I0912 17:41:50.795986 2512 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:41:50.796127 kubelet[2512]: I0912 17:41:50.796009 2512 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 12 17:41:50.799390 kubelet[2512]: I0912 17:41:50.799370 2512 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:41:50.799390 kubelet[2512]: I0912 17:41:50.799388 2512 container_manager_linux.go:300] "Creating device plugin manager"
Sep 12 17:41:50.799469 kubelet[2512]: I0912 17:41:50.799458 2512 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:41:50.803396 kubelet[2512]: I0912 17:41:50.803374 2512 kubelet.go:408] "Attempting to sync node with API server"
Sep 12 17:41:50.803439 kubelet[2512]: I0912 17:41:50.803401 2512 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:41:50.804446 kubelet[2512]: I0912 17:41:50.804432 2512 kubelet.go:314] "Adding apiserver pod source"
Sep 12 17:41:50.804474 kubelet[2512]: I0912 17:41:50.804449 2512 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:41:50.807193 kubelet[2512]: W0912 17:41:50.806775 2512 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused
Sep 12 17:41:50.807193 kubelet[2512]: E0912 17:41:50.806821 2512 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:50.807193 kubelet[2512]: W0912 17:41:50.807021 2512 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused
Sep 12 17:41:50.807193 kubelet[2512]: E0912 17:41:50.807043 2512 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:50.807193 kubelet[2512]: I0912 17:41:50.807086 2512 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 17:41:50.810856 kubelet[2512]: I0912 17:41:50.810842 2512 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:41:50.811761 kubelet[2512]: W0912 17:41:50.811552 2512 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:41:50.812296 kubelet[2512]: I0912 17:41:50.812288 2512 server.go:1274] "Started kubelet"
Sep 12 17:41:50.813218 kubelet[2512]: I0912 17:41:50.813209 2512 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:41:50.819773 kubelet[2512]: I0912 17:41:50.819702 2512 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:41:50.821580 kubelet[2512]: I0912 17:41:50.821286 2512 server.go:449] "Adding debug handlers to kubelet server"
Sep 12 17:41:50.821875 kubelet[2512]: E0912 17:41:50.818275 2512 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.102:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.102:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186499d8c10e6623 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:41:50.812268067 +0000 UTC m=+0.575096567,LastTimestamp:2025-09-12 17:41:50.812268067 +0000 UTC m=+0.575096567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 17:41:50.822096 kubelet[2512]: I0912 17:41:50.822077 2512 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:41:50.822251 kubelet[2512]: I0912 17:41:50.822244 2512 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:41:50.822851 kubelet[2512]: I0912 17:41:50.822841 2512 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:41:50.824695 kubelet[2512]: I0912 17:41:50.824686 2512 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 12 17:41:50.824820 kubelet[2512]: E0912 17:41:50.824811 2512 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:41:50.825448 kubelet[2512]: I0912 17:41:50.825433 2512 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 12 17:41:50.825557 kubelet[2512]: I0912 17:41:50.825550 2512 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:41:50.827344 kubelet[2512]: W0912 17:41:50.827320 2512 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused
Sep 12 17:41:50.827429 kubelet[2512]: E0912 17:41:50.827412 2512 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:50.827520 kubelet[2512]: E0912 17:41:50.827505 2512 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="200ms"
Sep 12 17:41:50.832572 kubelet[2512]: I0912 17:41:50.832545 2512 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:41:50.836367 kubelet[2512]: I0912 17:41:50.836296 2512 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:41:50.836367 kubelet[2512]: I0912 17:41:50.836311 2512 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:41:50.840400 kubelet[2512]: I0912 17:41:50.839791 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:41:50.840511 kubelet[2512]: I0912 17:41:50.840503 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:41:50.840553 kubelet[2512]: I0912 17:41:50.840549 2512 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 12 17:41:50.840604 kubelet[2512]: I0912 17:41:50.840599 2512 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 12 17:41:50.840699 kubelet[2512]: E0912 17:41:50.840688 2512 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:41:50.843117 kubelet[2512]: E0912 17:41:50.843103 2512 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:41:50.845107 kubelet[2512]: W0912 17:41:50.845081 2512 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused
Sep 12 17:41:50.845181 kubelet[2512]: E0912 17:41:50.845168 2512 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:50.859797 kubelet[2512]: I0912 17:41:50.859780 2512 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 12 17:41:50.859797 kubelet[2512]: I0912 17:41:50.859791 2512 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 12 17:41:50.859797 kubelet[2512]: I0912 17:41:50.859801 2512 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:41:50.861064 kubelet[2512]: I0912 17:41:50.861052 2512 policy_none.go:49] "None policy: Start"
Sep 12 17:41:50.861424 kubelet[2512]: I0912 17:41:50.861410 2512 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 12 17:41:50.861424 kubelet[2512]: I0912 17:41:50.861424 2512 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:41:50.866659 kubelet[2512]: I0912 17:41:50.865865 2512 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:41:50.866659 kubelet[2512]: I0912 17:41:50.866003 2512 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:41:50.866659 kubelet[2512]: I0912 17:41:50.866011 2512 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:41:50.867141 kubelet[2512]: I0912 17:41:50.867132 2512 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:41:50.867926 kubelet[2512]: E0912 17:41:50.867917 2512 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 12 17:41:50.968330 kubelet[2512]: I0912 17:41:50.968306 2512 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 12 17:41:50.968535 kubelet[2512]: E0912 17:41:50.968507 2512 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost"
Sep 12 17:41:51.027258 kubelet[2512]: I0912 17:41:51.027190 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49cb4e0f663568cc274a4019142f781c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"49cb4e0f663568cc274a4019142f781c\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:41:51.027258 kubelet[2512]: I0912 17:41:51.027224 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:41:51.027258 kubelet[2512]: I0912 17:41:51.027244 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:41:51.027258 kubelet[2512]: I0912 17:41:51.027261 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:41:51.027950 kubelet[2512]: I0912 17:41:51.027277 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49cb4e0f663568cc274a4019142f781c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"49cb4e0f663568cc274a4019142f781c\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:41:51.027950 kubelet[2512]: I0912 17:41:51.027290 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49cb4e0f663568cc274a4019142f781c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"49cb4e0f663568cc274a4019142f781c\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:41:51.027950 kubelet[2512]: I0912 17:41:51.027303 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:41:51.027950 kubelet[2512]: I0912 17:41:51.027318 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:41:51.027950 kubelet[2512]: I0912 17:41:51.027333 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 17:41:51.028171 kubelet[2512]: E0912 17:41:51.028132 2512 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="400ms"
Sep 12 17:41:51.170456 kubelet[2512]: I0912 17:41:51.170370 2512 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 12 17:41:51.170691 kubelet[2512]: E0912 17:41:51.170673 2512 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost"
Sep 12 17:41:51.249906 containerd[1641]: time="2025-09-12T17:41:51.249757463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:49cb4e0f663568cc274a4019142f781c,Namespace:kube-system,Attempt:0,}"
Sep 12 17:41:51.250204 containerd[1641]: time="2025-09-12T17:41:51.250049136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}"
Sep 12 17:41:51.251475 containerd[1641]: time="2025-09-12T17:41:51.251250786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}"
Sep 12 17:41:51.429519 kubelet[2512]: E0912 17:41:51.429445 2512 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="800ms"
Sep 12 17:41:51.571802 kubelet[2512]: I0912 17:41:51.571779 2512 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 12 17:41:51.572029 kubelet[2512]: E0912 17:41:51.572011 2512 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost"
Sep 12 17:41:51.643838 kubelet[2512]: W0912 17:41:51.643757 2512 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused
Sep 12 17:41:51.643838 kubelet[2512]: E0912 17:41:51.643813 2512 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:51.678405 kubelet[2512]: W0912 17:41:51.678318 2512 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused
Sep 12 17:41:51.678405 kubelet[2512]: E0912 17:41:51.678380 2512 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:51.741254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173159295.mount: Deactivated successfully.
Sep 12 17:41:51.744814 containerd[1641]: time="2025-09-12T17:41:51.744781866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:41:51.745774 containerd[1641]: time="2025-09-12T17:41:51.745752687Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:41:51.746439 containerd[1641]: time="2025-09-12T17:41:51.746273716Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep 12 17:41:51.747054 containerd[1641]: time="2025-09-12T17:41:51.746988161Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:41:51.747509 containerd[1641]: time="2025-09-12T17:41:51.747479784Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:41:51.747859 containerd[1641]: time="2025-09-12T17:41:51.747751217Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:41:51.748173 containerd[1641]: time="2025-09-12T17:41:51.748153174Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:41:51.750953 containerd[1641]: time="2025-09-12T17:41:51.750934068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:41:51.751511 containerd[1641]: time="2025-09-12T17:41:51.751361197Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 500.072743ms"
Sep 12 17:41:51.752661 containerd[1641]: time="2025-09-12T17:41:51.752239922Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 502.156176ms"
Sep 12 17:41:51.784606 containerd[1641]: time="2025-09-12T17:41:51.784572838Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 534.748956ms"
Sep 12 17:41:51.901374 containerd[1641]: time="2025-09-12T17:41:51.901179846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:41:51.901374 containerd[1641]: time="2025-09-12T17:41:51.901211908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:41:51.901374 containerd[1641]: time="2025-09-12T17:41:51.901221731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:41:51.901374 containerd[1641]: time="2025-09-12T17:41:51.901275680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:41:51.904249 containerd[1641]: time="2025-09-12T17:41:51.904181759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:41:51.904249 containerd[1641]: time="2025-09-12T17:41:51.904230499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:41:51.904687 containerd[1641]: time="2025-09-12T17:41:51.904661760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:41:51.904800 containerd[1641]: time="2025-09-12T17:41:51.904777809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:41:51.916621 containerd[1641]: time="2025-09-12T17:41:51.916467185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:41:51.916621 containerd[1641]: time="2025-09-12T17:41:51.916501675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:41:51.916621 containerd[1641]: time="2025-09-12T17:41:51.916511732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:41:51.916621 containerd[1641]: time="2025-09-12T17:41:51.916558295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:41:52.060311 containerd[1641]: time="2025-09-12T17:41:52.060213301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"722b8fd7dcc5cc7fb1fd624681372b67293ddd0c5ef41185156b931be8b98b3c\""
Sep 12 17:41:52.061261 containerd[1641]: time="2025-09-12T17:41:52.061119287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:49cb4e0f663568cc274a4019142f781c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e65c1274f975f0994567369abc9478f6806a15d3a085e70cd616b6027a5426c\""
Sep 12 17:41:52.061442 containerd[1641]: time="2025-09-12T17:41:52.061429319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfb01fafd34e3fb6340483acf18110727158911b58baa0541925d3e978c43ea1\""
Sep 12 17:41:52.064316 containerd[1641]: time="2025-09-12T17:41:52.064251992Z" level=info msg="CreateContainer within sandbox \"8e65c1274f975f0994567369abc9478f6806a15d3a085e70cd616b6027a5426c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 17:41:52.064316 containerd[1641]: time="2025-09-12T17:41:52.064285199Z" level=info msg="CreateContainer within sandbox \"cfb01fafd34e3fb6340483acf18110727158911b58baa0541925d3e978c43ea1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 17:41:52.064724 containerd[1641]: time="2025-09-12T17:41:52.064600934Z" level=info msg="CreateContainer within sandbox \"722b8fd7dcc5cc7fb1fd624681372b67293ddd0c5ef41185156b931be8b98b3c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 17:41:52.230531 kubelet[2512]: E0912 17:41:52.230501 2512 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="1.6s"
Sep 12 17:41:52.278123 kubelet[2512]: W0912 17:41:52.278078 2512 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused
Sep 12 17:41:52.278123 kubelet[2512]: E0912 17:41:52.278126 2512 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:52.289811 kubelet[2512]: W0912 17:41:52.289762 2512 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused
Sep 12 17:41:52.289885 kubelet[2512]: E0912 17:41:52.289819 2512 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:41:52.313438 containerd[1641]: time="2025-09-12T17:41:52.313359513Z" level=info msg="CreateContainer within sandbox
\"cfb01fafd34e3fb6340483acf18110727158911b58baa0541925d3e978c43ea1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5165eb86c51d2591145951f22db1bbc24d6ac16ceac401a76db4b62ee81752b6\"" Sep 12 17:41:52.316808 containerd[1641]: time="2025-09-12T17:41:52.316783374Z" level=info msg="CreateContainer within sandbox \"8e65c1274f975f0994567369abc9478f6806a15d3a085e70cd616b6027a5426c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f4e3de15f79fd29076915b9f68919407dc3cafb7db8b30391bad7aa3e6bbfa8e\"" Sep 12 17:41:52.316985 containerd[1641]: time="2025-09-12T17:41:52.316967012Z" level=info msg="StartContainer for \"5165eb86c51d2591145951f22db1bbc24d6ac16ceac401a76db4b62ee81752b6\"" Sep 12 17:41:52.319459 containerd[1641]: time="2025-09-12T17:41:52.318747102Z" level=info msg="CreateContainer within sandbox \"722b8fd7dcc5cc7fb1fd624681372b67293ddd0c5ef41185156b931be8b98b3c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9048c115b8c3a5aae9d8864117192877c6f819230ef4422d2308fc535bc4794a\"" Sep 12 17:41:52.319459 containerd[1641]: time="2025-09-12T17:41:52.318848497Z" level=info msg="StartContainer for \"f4e3de15f79fd29076915b9f68919407dc3cafb7db8b30391bad7aa3e6bbfa8e\"" Sep 12 17:41:52.325000 containerd[1641]: time="2025-09-12T17:41:52.324974812Z" level=info msg="StartContainer for \"9048c115b8c3a5aae9d8864117192877c6f819230ef4422d2308fc535bc4794a\"" Sep 12 17:41:52.374912 kubelet[2512]: I0912 17:41:52.374890 2512 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:41:52.375092 kubelet[2512]: E0912 17:41:52.375075 2512 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Sep 12 17:41:52.392688 containerd[1641]: time="2025-09-12T17:41:52.392632240Z" level=info msg="StartContainer for 
\"f4e3de15f79fd29076915b9f68919407dc3cafb7db8b30391bad7aa3e6bbfa8e\" returns successfully" Sep 12 17:41:52.399493 containerd[1641]: time="2025-09-12T17:41:52.399424335Z" level=info msg="StartContainer for \"5165eb86c51d2591145951f22db1bbc24d6ac16ceac401a76db4b62ee81752b6\" returns successfully" Sep 12 17:41:52.415537 containerd[1641]: time="2025-09-12T17:41:52.415439790Z" level=info msg="StartContainer for \"9048c115b8c3a5aae9d8864117192877c6f819230ef4422d2308fc535bc4794a\" returns successfully" Sep 12 17:41:52.920673 kubelet[2512]: E0912 17:41:52.920632 2512 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:41:53.977179 kubelet[2512]: I0912 17:41:53.977134 2512 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:41:54.544737 kubelet[2512]: E0912 17:41:54.544709 2512 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 17:41:54.743909 kubelet[2512]: I0912 17:41:54.743608 2512 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:41:54.743909 kubelet[2512]: E0912 17:41:54.743673 2512 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 17:41:54.774388 kubelet[2512]: E0912 17:41:54.774364 2512 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:41:55.386639 kubelet[2512]: E0912 17:41:55.386460 2512 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with 
name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:55.809669 kubelet[2512]: I0912 17:41:55.809516 2512 apiserver.go:52] "Watching apiserver" Sep 12 17:41:55.825641 kubelet[2512]: I0912 17:41:55.825608 2512 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:41:57.405532 systemd[1]: Reloading requested from client PID 2781 ('systemctl') (unit session-9.scope)... Sep 12 17:41:57.405544 systemd[1]: Reloading... Sep 12 17:41:57.462704 zram_generator::config[2819]: No configuration found. Sep 12 17:41:57.565369 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 12 17:41:57.582323 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:41:57.637146 systemd[1]: Reloading finished in 231 ms. Sep 12 17:41:57.673181 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:57.690591 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:41:57.691125 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:41:57.699120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:58.024739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:41:58.029255 (kubelet)[2895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:41:58.094857 kubelet[2895]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:41:58.094857 kubelet[2895]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:41:58.094857 kubelet[2895]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:41:58.095214 kubelet[2895]: I0912 17:41:58.094935 2895 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:41:58.106816 kubelet[2895]: I0912 17:41:58.106658 2895 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:41:58.106816 kubelet[2895]: I0912 17:41:58.106681 2895 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:41:58.107150 kubelet[2895]: I0912 17:41:58.107140 2895 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:41:58.119620 kubelet[2895]: I0912 17:41:58.119590 2895 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:41:58.125878 kubelet[2895]: I0912 17:41:58.125729 2895 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:41:58.136844 kubelet[2895]: E0912 17:41:58.136826 2895 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:41:58.136965 kubelet[2895]: I0912 17:41:58.136958 2895 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Sep 12 17:41:58.143629 kubelet[2895]: I0912 17:41:58.143613 2895 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:41:58.145259 kubelet[2895]: I0912 17:41:58.143978 2895 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:41:58.145259 kubelet[2895]: I0912 17:41:58.144045 2895 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:41:58.145259 kubelet[2895]: I0912 17:41:58.144061 2895 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"Experiment
alMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 17:41:58.145259 kubelet[2895]: I0912 17:41:58.144238 2895 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:41:58.145424 kubelet[2895]: I0912 17:41:58.144246 2895 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:41:58.145424 kubelet[2895]: I0912 17:41:58.144266 2895 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:41:58.145424 kubelet[2895]: I0912 17:41:58.144324 2895 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:41:58.145424 kubelet[2895]: I0912 17:41:58.144331 2895 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:41:58.145424 kubelet[2895]: I0912 17:41:58.144355 2895 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:41:58.145424 kubelet[2895]: I0912 17:41:58.144362 2895 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:41:58.145956 kubelet[2895]: I0912 17:41:58.145942 2895 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:41:58.146364 kubelet[2895]: I0912 17:41:58.146353 2895 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:41:58.147163 kubelet[2895]: I0912 17:41:58.147069 2895 server.go:1274] "Started kubelet" Sep 12 17:41:58.151082 kubelet[2895]: I0912 17:41:58.151060 2895 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:41:58.156950 kubelet[2895]: I0912 17:41:58.156076 2895 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:41:58.157043 kubelet[2895]: I0912 17:41:58.157033 2895 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:41:58.158546 kubelet[2895]: 
I0912 17:41:58.158233 2895 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:41:58.158546 kubelet[2895]: I0912 17:41:58.158389 2895 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:41:58.158627 kubelet[2895]: I0912 17:41:58.158560 2895 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:41:58.161685 kubelet[2895]: I0912 17:41:58.161669 2895 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:41:58.161864 kubelet[2895]: E0912 17:41:58.161851 2895 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:41:58.168392 kubelet[2895]: I0912 17:41:58.168343 2895 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:41:58.174842 kubelet[2895]: I0912 17:41:58.174822 2895 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:41:58.174926 kubelet[2895]: I0912 17:41:58.174911 2895 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:41:58.176742 kubelet[2895]: E0912 17:41:58.176717 2895 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:41:58.177803 kubelet[2895]: I0912 17:41:58.177157 2895 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:41:58.177910 kubelet[2895]: I0912 17:41:58.177903 2895 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:41:58.177976 kubelet[2895]: I0912 17:41:58.177778 2895 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 12 17:41:58.179148 kubelet[2895]: I0912 17:41:58.178744 2895 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:41:58.179148 kubelet[2895]: I0912 17:41:58.178758 2895 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:41:58.179148 kubelet[2895]: I0912 17:41:58.178769 2895 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:41:58.179148 kubelet[2895]: E0912 17:41:58.178796 2895 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:41:58.253267 kubelet[2895]: I0912 17:41:58.253245 2895 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:41:58.253267 kubelet[2895]: I0912 17:41:58.253257 2895 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:41:58.253267 kubelet[2895]: I0912 17:41:58.253268 2895 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:41:58.253476 kubelet[2895]: I0912 17:41:58.253384 2895 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:41:58.253476 kubelet[2895]: I0912 17:41:58.253394 2895 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:41:58.253476 kubelet[2895]: I0912 17:41:58.253411 2895 policy_none.go:49] "None policy: Start" Sep 12 17:41:58.253999 kubelet[2895]: I0912 17:41:58.253986 2895 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:41:58.254041 kubelet[2895]: I0912 17:41:58.254008 2895 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:41:58.254110 kubelet[2895]: I0912 17:41:58.254097 2895 state_mem.go:75] "Updated machine memory state" Sep 12 17:41:58.273643 kubelet[2895]: I0912 17:41:58.273614 2895 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:41:58.273954 kubelet[2895]: I0912 17:41:58.273821 2895 eviction_manager.go:189] 
"Eviction manager: starting control loop" Sep 12 17:41:58.273954 kubelet[2895]: I0912 17:41:58.273834 2895 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:41:58.274012 kubelet[2895]: I0912 17:41:58.273965 2895 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:41:58.304682 kubelet[2895]: E0912 17:41:58.304338 2895 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:58.376739 kubelet[2895]: I0912 17:41:58.376719 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49cb4e0f663568cc274a4019142f781c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"49cb4e0f663568cc274a4019142f781c\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:58.376957 kubelet[2895]: I0912 17:41:58.376851 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49cb4e0f663568cc274a4019142f781c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"49cb4e0f663568cc274a4019142f781c\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:58.376957 kubelet[2895]: I0912 17:41:58.376873 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49cb4e0f663568cc274a4019142f781c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"49cb4e0f663568cc274a4019142f781c\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:58.376957 kubelet[2895]: I0912 17:41:58.376890 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") 
pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:58.376957 kubelet[2895]: I0912 17:41:58.376899 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:58.377211 kubelet[2895]: I0912 17:41:58.377148 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:58.377211 kubelet[2895]: I0912 17:41:58.377163 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:58.377211 kubelet[2895]: I0912 17:41:58.377171 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:58.377211 kubelet[2895]: I0912 17:41:58.377186 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:58.377478 kubelet[2895]: I0912 17:41:58.377334 2895 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:41:58.405742 kubelet[2895]: I0912 17:41:58.405723 2895 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 12 17:41:58.405982 kubelet[2895]: I0912 17:41:58.405881 2895 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:41:59.157682 kubelet[2895]: I0912 17:41:59.157135 2895 apiserver.go:52] "Watching apiserver" Sep 12 17:41:59.175237 kubelet[2895]: I0912 17:41:59.175167 2895 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:41:59.264184 kubelet[2895]: E0912 17:41:59.264160 2895 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:59.294214 kubelet[2895]: I0912 17:41:59.294078 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.2940644070000005 podStartE2EDuration="4.294064407s" podCreationTimestamp="2025-09-12 17:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:41:59.29406483 +0000 UTC m=+1.244743861" watchObservedRunningTime="2025-09-12 17:41:59.294064407 +0000 UTC m=+1.244743439" Sep 12 17:41:59.328978 kubelet[2895]: I0912 17:41:59.328840 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.32882406 podStartE2EDuration="1.32882406s" podCreationTimestamp="2025-09-12 17:41:58 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:41:59.328515938 +0000 UTC m=+1.279194980" watchObservedRunningTime="2025-09-12 17:41:59.32882406 +0000 UTC m=+1.279503098" Sep 12 17:41:59.359255 kubelet[2895]: I0912 17:41:59.359064 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.359051153 podStartE2EDuration="1.359051153s" podCreationTimestamp="2025-09-12 17:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:41:59.358989369 +0000 UTC m=+1.309668407" watchObservedRunningTime="2025-09-12 17:41:59.359051153 +0000 UTC m=+1.309730185" Sep 12 17:42:02.194950 kubelet[2895]: I0912 17:42:02.194912 2895 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:42:02.195781 kubelet[2895]: I0912 17:42:02.195329 2895 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:42:02.195821 containerd[1641]: time="2025-09-12T17:42:02.195175196Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 12 17:42:02.905729 kubelet[2895]: I0912 17:42:02.905610 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92557bb9-c82b-4830-b4da-b3e812579e87-kube-proxy\") pod \"kube-proxy-kj2jq\" (UID: \"92557bb9-c82b-4830-b4da-b3e812579e87\") " pod="kube-system/kube-proxy-kj2jq" Sep 12 17:42:02.905729 kubelet[2895]: I0912 17:42:02.905661 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfnd7\" (UniqueName: \"kubernetes.io/projected/92557bb9-c82b-4830-b4da-b3e812579e87-kube-api-access-vfnd7\") pod \"kube-proxy-kj2jq\" (UID: \"92557bb9-c82b-4830-b4da-b3e812579e87\") " pod="kube-system/kube-proxy-kj2jq" Sep 12 17:42:02.905729 kubelet[2895]: I0912 17:42:02.905677 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92557bb9-c82b-4830-b4da-b3e812579e87-xtables-lock\") pod \"kube-proxy-kj2jq\" (UID: \"92557bb9-c82b-4830-b4da-b3e812579e87\") " pod="kube-system/kube-proxy-kj2jq" Sep 12 17:42:02.905729 kubelet[2895]: I0912 17:42:02.905687 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92557bb9-c82b-4830-b4da-b3e812579e87-lib-modules\") pod \"kube-proxy-kj2jq\" (UID: \"92557bb9-c82b-4830-b4da-b3e812579e87\") " pod="kube-system/kube-proxy-kj2jq" Sep 12 17:42:03.172376 containerd[1641]: time="2025-09-12T17:42:03.172199010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kj2jq,Uid:92557bb9-c82b-4830-b4da-b3e812579e87,Namespace:kube-system,Attempt:0,}" Sep 12 17:42:03.211789 containerd[1641]: time="2025-09-12T17:42:03.211588201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:03.211789 containerd[1641]: time="2025-09-12T17:42:03.211660172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:03.212474 containerd[1641]: time="2025-09-12T17:42:03.211678117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:03.212474 containerd[1641]: time="2025-09-12T17:42:03.211767970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:03.261764 containerd[1641]: time="2025-09-12T17:42:03.261714530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kj2jq,Uid:92557bb9-c82b-4830-b4da-b3e812579e87,Namespace:kube-system,Attempt:0,} returns sandbox id \"65041d237b221d4f638c5dc71224798c6b9a08082bc95b6b2d83c7f4695545e0\"" Sep 12 17:42:03.263334 containerd[1641]: time="2025-09-12T17:42:03.263310563Z" level=info msg="CreateContainer within sandbox \"65041d237b221d4f638c5dc71224798c6b9a08082bc95b6b2d83c7f4695545e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:42:03.273785 containerd[1641]: time="2025-09-12T17:42:03.273753336Z" level=info msg="CreateContainer within sandbox \"65041d237b221d4f638c5dc71224798c6b9a08082bc95b6b2d83c7f4695545e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"861fd1c1e489fe879e51246eedbfd6f8424eda469416193de82a5a94d98f86ee\"" Sep 12 17:42:03.278185 containerd[1641]: time="2025-09-12T17:42:03.274773396Z" level=info msg="StartContainer for \"861fd1c1e489fe879e51246eedbfd6f8424eda469416193de82a5a94d98f86ee\"" Sep 12 17:42:03.309094 kubelet[2895]: I0912 17:42:03.309077 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7jh4\" (UniqueName: 
\"kubernetes.io/projected/67a22e9f-27f5-4837-9666-fb8d1f918836-kube-api-access-l7jh4\") pod \"tigera-operator-58fc44c59b-kmhzh\" (UID: \"67a22e9f-27f5-4837-9666-fb8d1f918836\") " pod="tigera-operator/tigera-operator-58fc44c59b-kmhzh" Sep 12 17:42:03.309605 kubelet[2895]: I0912 17:42:03.309242 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/67a22e9f-27f5-4837-9666-fb8d1f918836-var-lib-calico\") pod \"tigera-operator-58fc44c59b-kmhzh\" (UID: \"67a22e9f-27f5-4837-9666-fb8d1f918836\") " pod="tigera-operator/tigera-operator-58fc44c59b-kmhzh" Sep 12 17:42:03.328793 containerd[1641]: time="2025-09-12T17:42:03.328770287Z" level=info msg="StartContainer for \"861fd1c1e489fe879e51246eedbfd6f8424eda469416193de82a5a94d98f86ee\" returns successfully" Sep 12 17:42:03.560330 containerd[1641]: time="2025-09-12T17:42:03.560303684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-kmhzh,Uid:67a22e9f-27f5-4837-9666-fb8d1f918836,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:42:03.601778 containerd[1641]: time="2025-09-12T17:42:03.601635049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:03.601778 containerd[1641]: time="2025-09-12T17:42:03.601701912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:03.601778 containerd[1641]: time="2025-09-12T17:42:03.601712130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:03.602013 containerd[1641]: time="2025-09-12T17:42:03.601857370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:03.647206 containerd[1641]: time="2025-09-12T17:42:03.647179970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-kmhzh,Uid:67a22e9f-27f5-4837-9666-fb8d1f918836,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"86ad9b17f1800222e717ae46d7e1e71c91d642a150a8d3cdfc98c124272d85d6\"" Sep 12 17:42:03.648623 containerd[1641]: time="2025-09-12T17:42:03.648603571Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:42:04.251987 kubelet[2895]: I0912 17:42:04.251950 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kj2jq" podStartSLOduration=2.251935258 podStartE2EDuration="2.251935258s" podCreationTimestamp="2025-09-12 17:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:04.251724198 +0000 UTC m=+6.202403237" watchObservedRunningTime="2025-09-12 17:42:04.251935258 +0000 UTC m=+6.202614291" Sep 12 17:42:04.974095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839569553.mount: Deactivated successfully. 
Sep 12 17:42:05.339192 containerd[1641]: time="2025-09-12T17:42:05.339112332Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:05.339698 containerd[1641]: time="2025-09-12T17:42:05.339664562Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:42:05.340116 containerd[1641]: time="2025-09-12T17:42:05.339889656Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:05.341758 containerd[1641]: time="2025-09-12T17:42:05.341740920Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:05.341825 containerd[1641]: time="2025-09-12T17:42:05.341811310Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.693189732s" Sep 12 17:42:05.341870 containerd[1641]: time="2025-09-12T17:42:05.341860646Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:42:05.343234 containerd[1641]: time="2025-09-12T17:42:05.343221136Z" level=info msg="CreateContainer within sandbox \"86ad9b17f1800222e717ae46d7e1e71c91d642a150a8d3cdfc98c124272d85d6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:42:05.348113 containerd[1641]: time="2025-09-12T17:42:05.348091941Z" level=info msg="CreateContainer within sandbox 
\"86ad9b17f1800222e717ae46d7e1e71c91d642a150a8d3cdfc98c124272d85d6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9611d2f5fb4f9e48b92defd567d472c75761503f52ca94ddf9d2f653d462afcf\"" Sep 12 17:42:05.348635 containerd[1641]: time="2025-09-12T17:42:05.348624792Z" level=info msg="StartContainer for \"9611d2f5fb4f9e48b92defd567d472c75761503f52ca94ddf9d2f653d462afcf\"" Sep 12 17:42:05.384936 containerd[1641]: time="2025-09-12T17:42:05.384864121Z" level=info msg="StartContainer for \"9611d2f5fb4f9e48b92defd567d472c75761503f52ca94ddf9d2f653d462afcf\" returns successfully" Sep 12 17:42:09.383282 kubelet[2895]: I0912 17:42:09.382924 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-kmhzh" podStartSLOduration=4.68847266 podStartE2EDuration="6.382913163s" podCreationTimestamp="2025-09-12 17:42:03 +0000 UTC" firstStartedPulling="2025-09-12 17:42:03.647863808 +0000 UTC m=+5.598542838" lastFinishedPulling="2025-09-12 17:42:05.34230431 +0000 UTC m=+7.292983341" observedRunningTime="2025-09-12 17:42:06.261909403 +0000 UTC m=+8.212588442" watchObservedRunningTime="2025-09-12 17:42:09.382913163 +0000 UTC m=+11.333592202" Sep 12 17:42:10.454162 sudo[1986]: pam_unix(sudo:session): session closed for user root Sep 12 17:42:10.456848 sshd[1979]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:10.459762 systemd[1]: sshd@7-139.178.70.102:22-139.178.89.65:39540.service: Deactivated successfully. Sep 12 17:42:10.470749 systemd-logind[1619]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:42:10.471884 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:42:10.476137 systemd-logind[1619]: Removed session 9. 
Sep 12 17:42:13.170772 kubelet[2895]: I0912 17:42:13.170740 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/499c0c59-a3c2-4d41-b47e-bc31d87a9287-tigera-ca-bundle\") pod \"calico-typha-76665cf9bb-2j9dd\" (UID: \"499c0c59-a3c2-4d41-b47e-bc31d87a9287\") " pod="calico-system/calico-typha-76665cf9bb-2j9dd" Sep 12 17:42:13.170772 kubelet[2895]: I0912 17:42:13.170769 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cz5b\" (UniqueName: \"kubernetes.io/projected/499c0c59-a3c2-4d41-b47e-bc31d87a9287-kube-api-access-2cz5b\") pod \"calico-typha-76665cf9bb-2j9dd\" (UID: \"499c0c59-a3c2-4d41-b47e-bc31d87a9287\") " pod="calico-system/calico-typha-76665cf9bb-2j9dd" Sep 12 17:42:13.171095 kubelet[2895]: I0912 17:42:13.170788 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/499c0c59-a3c2-4d41-b47e-bc31d87a9287-typha-certs\") pod \"calico-typha-76665cf9bb-2j9dd\" (UID: \"499c0c59-a3c2-4d41-b47e-bc31d87a9287\") " pod="calico-system/calico-typha-76665cf9bb-2j9dd" Sep 12 17:42:13.437562 containerd[1641]: time="2025-09-12T17:42:13.437484401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76665cf9bb-2j9dd,Uid:499c0c59-a3c2-4d41-b47e-bc31d87a9287,Namespace:calico-system,Attempt:0,}" Sep 12 17:42:13.524102 containerd[1641]: time="2025-09-12T17:42:13.522440180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:13.524102 containerd[1641]: time="2025-09-12T17:42:13.522515691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:13.524102 containerd[1641]: time="2025-09-12T17:42:13.522545059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:13.524102 containerd[1641]: time="2025-09-12T17:42:13.522639667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:13.574496 kubelet[2895]: I0912 17:42:13.573890 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678-var-lib-calico\") pod \"calico-node-28p47\" (UID: \"cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678\") " pod="calico-system/calico-node-28p47" Sep 12 17:42:13.576986 kubelet[2895]: I0912 17:42:13.576728 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678-lib-modules\") pod \"calico-node-28p47\" (UID: \"cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678\") " pod="calico-system/calico-node-28p47" Sep 12 17:42:13.576986 kubelet[2895]: I0912 17:42:13.576771 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678-var-run-calico\") pod \"calico-node-28p47\" (UID: \"cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678\") " pod="calico-system/calico-node-28p47" Sep 12 17:42:13.576986 kubelet[2895]: I0912 17:42:13.576788 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678-cni-bin-dir\") pod \"calico-node-28p47\" (UID: \"cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678\") " 
pod="calico-system/calico-node-28p47" Sep 12 17:42:13.576986 kubelet[2895]: I0912 17:42:13.576798 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678-cni-log-dir\") pod \"calico-node-28p47\" (UID: \"cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678\") " pod="calico-system/calico-node-28p47" Sep 12 17:42:13.576986 kubelet[2895]: I0912 17:42:13.576808 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dggtv\" (UniqueName: \"kubernetes.io/projected/cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678-kube-api-access-dggtv\") pod \"calico-node-28p47\" (UID: \"cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678\") " pod="calico-system/calico-node-28p47" Sep 12 17:42:13.577133 kubelet[2895]: I0912 17:42:13.576857 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678-cni-net-dir\") pod \"calico-node-28p47\" (UID: \"cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678\") " pod="calico-system/calico-node-28p47" Sep 12 17:42:13.577133 kubelet[2895]: I0912 17:42:13.576878 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678-tigera-ca-bundle\") pod \"calico-node-28p47\" (UID: \"cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678\") " pod="calico-system/calico-node-28p47" Sep 12 17:42:13.577133 kubelet[2895]: I0912 17:42:13.576891 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678-xtables-lock\") pod \"calico-node-28p47\" (UID: \"cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678\") " pod="calico-system/calico-node-28p47" Sep 12 17:42:13.577133 
kubelet[2895]: I0912 17:42:13.576903 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678-policysync\") pod \"calico-node-28p47\" (UID: \"cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678\") " pod="calico-system/calico-node-28p47" Sep 12 17:42:13.577133 kubelet[2895]: I0912 17:42:13.576943 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678-flexvol-driver-host\") pod \"calico-node-28p47\" (UID: \"cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678\") " pod="calico-system/calico-node-28p47" Sep 12 17:42:13.577217 kubelet[2895]: I0912 17:42:13.576952 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678-node-certs\") pod \"calico-node-28p47\" (UID: \"cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678\") " pod="calico-system/calico-node-28p47" Sep 12 17:42:13.647777 containerd[1641]: time="2025-09-12T17:42:13.647326869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76665cf9bb-2j9dd,Uid:499c0c59-a3c2-4d41-b47e-bc31d87a9287,Namespace:calico-system,Attempt:0,} returns sandbox id \"e639f8e9c5e07026ed561ce8aed0b424545479a412d579cca9caf82573d98127\"" Sep 12 17:42:13.716453 containerd[1641]: time="2025-09-12T17:42:13.716416179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:42:13.756692 kubelet[2895]: E0912 17:42:13.756073 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vqsr" podUID="6c2d1714-1041-45d2-9888-c6e010910454" Sep 12 
17:42:13.782301 kubelet[2895]: E0912 17:42:13.782152 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.782301 kubelet[2895]: W0912 17:42:13.782178 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.782301 kubelet[2895]: E0912 17:42:13.782194 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.795075 kubelet[2895]: E0912 17:42:13.787140 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.795075 kubelet[2895]: W0912 17:42:13.787146 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.795075 kubelet[2895]: E0912 17:42:13.787156 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:13.795235 kubelet[2895]: I0912 17:42:13.787173 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4nx8\" (UniqueName: \"kubernetes.io/projected/6c2d1714-1041-45d2-9888-c6e010910454-kube-api-access-d4nx8\") pod \"csi-node-driver-6vqsr\" (UID: \"6c2d1714-1041-45d2-9888-c6e010910454\") " pod="calico-system/csi-node-driver-6vqsr" Sep 12 17:42:13.795235 kubelet[2895]: I0912 17:42:13.787290 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6c2d1714-1041-45d2-9888-c6e010910454-registration-dir\") pod \"csi-node-driver-6vqsr\" (UID: \"6c2d1714-1041-45d2-9888-c6e010910454\") " pod="calico-system/csi-node-driver-6vqsr" Sep 12 17:42:13.795361 kubelet[2895]: I0912 17:42:13.787406 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c2d1714-1041-45d2-9888-c6e010910454-kubelet-dir\") pod \"csi-node-driver-6vqsr\" (UID: \"6c2d1714-1041-45d2-9888-c6e010910454\") " pod="calico-system/csi-node-driver-6vqsr" Sep 12 17:42:13.795361 kubelet[2895]: I0912 17:42:13.787533 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6c2d1714-1041-45d2-9888-c6e010910454-socket-dir\") pod \"csi-node-driver-6vqsr\" (UID: \"6c2d1714-1041-45d2-9888-c6e010910454\") " pod="calico-system/csi-node-driver-6vqsr" Sep 12 17:42:13.795486 kubelet[2895]: I0912 17:42:13.787743 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6c2d1714-1041-45d2-9888-c6e010910454-varrun\") pod \"csi-node-driver-6vqsr\" (UID: \"6c2d1714-1041-45d2-9888-c6e010910454\") " pod="calico-system/csi-node-driver-6vqsr" Sep 12 17:42:13.795630 kubelet[2895]: E0912 17:42:13.788379 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.795630 kubelet[2895]: W0912 17:42:13.788384 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.795630 kubelet[2895]: E0912 17:42:13.788389 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:13.795802 kubelet[2895]: E0912 17:42:13.788498 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.795802 kubelet[2895]: W0912 17:42:13.788503 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.795802 kubelet[2895]: E0912 17:42:13.788507 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.795802 kubelet[2895]: E0912 17:42:13.788618 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.795802 kubelet[2895]: W0912 17:42:13.788623 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.795802 kubelet[2895]: E0912 17:42:13.788627 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:13.795802 kubelet[2895]: E0912 17:42:13.788745 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.795802 kubelet[2895]: W0912 17:42:13.788750 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.795802 kubelet[2895]: E0912 17:42:13.788754 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.795802 kubelet[2895]: E0912 17:42:13.788857 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.796067 kubelet[2895]: W0912 17:42:13.788862 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.796067 kubelet[2895]: E0912 17:42:13.788866 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.805341 containerd[1641]: time="2025-09-12T17:42:13.805314617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-28p47,Uid:cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678,Namespace:calico-system,Attempt:0,}" Sep 12 17:42:13.875901 containerd[1641]: time="2025-09-12T17:42:13.874341317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:13.875901 containerd[1641]: time="2025-09-12T17:42:13.875848955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:13.876171 containerd[1641]: time="2025-09-12T17:42:13.875881315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:13.876278 containerd[1641]: time="2025-09-12T17:42:13.876155146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:13.888731 kubelet[2895]: E0912 17:42:13.888696 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.888731 kubelet[2895]: W0912 17:42:13.888715 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.888731 kubelet[2895]: E0912 17:42:13.888730 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.888988 kubelet[2895]: E0912 17:42:13.888938 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.888988 kubelet[2895]: W0912 17:42:13.888944 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.888988 kubelet[2895]: E0912 17:42:13.888950 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:13.889509 kubelet[2895]: E0912 17:42:13.889382 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.889509 kubelet[2895]: W0912 17:42:13.889391 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.889509 kubelet[2895]: E0912 17:42:13.889399 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.891390 kubelet[2895]: E0912 17:42:13.891360 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.891390 kubelet[2895]: W0912 17:42:13.891379 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.891390 kubelet[2895]: E0912 17:42:13.891399 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:13.892073 kubelet[2895]: E0912 17:42:13.891715 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.892073 kubelet[2895]: W0912 17:42:13.891726 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.892073 kubelet[2895]: E0912 17:42:13.891751 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.892751 kubelet[2895]: E0912 17:42:13.892503 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.892751 kubelet[2895]: W0912 17:42:13.892516 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.892751 kubelet[2895]: E0912 17:42:13.892548 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:13.892751 kubelet[2895]: E0912 17:42:13.892723 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.892751 kubelet[2895]: W0912 17:42:13.892731 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.893363 kubelet[2895]: E0912 17:42:13.893265 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.893889 kubelet[2895]: E0912 17:42:13.893445 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.893889 kubelet[2895]: W0912 17:42:13.893453 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.893889 kubelet[2895]: E0912 17:42:13.893478 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:13.893889 kubelet[2895]: E0912 17:42:13.893783 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.893889 kubelet[2895]: W0912 17:42:13.893791 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.893889 kubelet[2895]: E0912 17:42:13.893818 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.895245 kubelet[2895]: E0912 17:42:13.894839 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.895245 kubelet[2895]: W0912 17:42:13.894849 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.895245 kubelet[2895]: E0912 17:42:13.895125 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:13.895555 kubelet[2895]: E0912 17:42:13.895314 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.895555 kubelet[2895]: W0912 17:42:13.895322 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.895555 kubelet[2895]: E0912 17:42:13.895442 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.895933 kubelet[2895]: E0912 17:42:13.895725 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.895933 kubelet[2895]: W0912 17:42:13.895748 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.896616 kubelet[2895]: E0912 17:42:13.896294 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.896616 kubelet[2895]: W0912 17:42:13.896303 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.896616 kubelet[2895]: E0912 17:42:13.896465 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.896616 kubelet[2895]: W0912 17:42:13.896470 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.897297 kubelet[2895]: E0912 17:42:13.896960 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.897297 kubelet[2895]: W0912 17:42:13.896968 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.897297 kubelet[2895]: E0912 17:42:13.896986 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.897297 kubelet[2895]: E0912 17:42:13.897137 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.897297 kubelet[2895]: W0912 17:42:13.897142 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.897297 kubelet[2895]: E0912 17:42:13.897148 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.897676 kubelet[2895]: E0912 17:42:13.897373 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:13.898459 kubelet[2895]: E0912 17:42:13.898318 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.898459 kubelet[2895]: W0912 17:42:13.898326 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.898459 kubelet[2895]: E0912 17:42:13.898334 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.898459 kubelet[2895]: E0912 17:42:13.898433 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.898459 kubelet[2895]: E0912 17:42:13.898449 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.898824 kubelet[2895]: E0912 17:42:13.898603 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.898824 kubelet[2895]: W0912 17:42:13.898610 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.900782 kubelet[2895]: E0912 17:42:13.899465 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:13.900782 kubelet[2895]: E0912 17:42:13.899534 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.900782 kubelet[2895]: W0912 17:42:13.899542 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.900782 kubelet[2895]: E0912 17:42:13.899699 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.901221 kubelet[2895]: E0912 17:42:13.901031 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.901221 kubelet[2895]: W0912 17:42:13.901061 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.901221 kubelet[2895]: E0912 17:42:13.901084 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:13.901770 kubelet[2895]: E0912 17:42:13.901638 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.901770 kubelet[2895]: W0912 17:42:13.901659 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.902449 kubelet[2895]: E0912 17:42:13.902192 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.902815 kubelet[2895]: E0912 17:42:13.902805 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.903081 kubelet[2895]: W0912 17:42:13.902850 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.903081 kubelet[2895]: E0912 17:42:13.902887 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:13.903910 kubelet[2895]: E0912 17:42:13.903715 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.903910 kubelet[2895]: W0912 17:42:13.903727 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.903910 kubelet[2895]: E0912 17:42:13.903747 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.904600 kubelet[2895]: E0912 17:42:13.904386 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.904600 kubelet[2895]: W0912 17:42:13.904398 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.904600 kubelet[2895]: E0912 17:42:13.904411 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:13.905339 kubelet[2895]: E0912 17:42:13.905195 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.905339 kubelet[2895]: W0912 17:42:13.905207 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.905339 kubelet[2895]: E0912 17:42:13.905223 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:13.912302 kubelet[2895]: E0912 17:42:13.912273 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:13.912514 kubelet[2895]: W0912 17:42:13.912288 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:13.912514 kubelet[2895]: E0912 17:42:13.912446 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:13.928602 containerd[1641]: time="2025-09-12T17:42:13.928576585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-28p47,Uid:cfcb49ec-0d3e-4b72-b59d-8c3e7dfe0678,Namespace:calico-system,Attempt:0,} returns sandbox id \"ecf7d5974b16f3140a535eb36316ec7f797ce23aec62d9aea3ecd09f453fd2aa\"" Sep 12 17:42:15.180134 kubelet[2895]: E0912 17:42:15.179804 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vqsr" podUID="6c2d1714-1041-45d2-9888-c6e010910454" Sep 12 17:42:15.366334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3815845313.mount: Deactivated successfully. Sep 12 17:42:16.325851 containerd[1641]: time="2025-09-12T17:42:16.325816424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:16.328187 containerd[1641]: time="2025-09-12T17:42:16.328156379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 17:42:16.333582 containerd[1641]: time="2025-09-12T17:42:16.333549930Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:16.343739 containerd[1641]: time="2025-09-12T17:42:16.343697464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:16.344314 containerd[1641]: time="2025-09-12T17:42:16.344020304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.627077266s" Sep 12 17:42:16.344314 containerd[1641]: time="2025-09-12T17:42:16.344068604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:42:16.409581 containerd[1641]: time="2025-09-12T17:42:16.408661096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:42:16.463705 containerd[1641]: time="2025-09-12T17:42:16.463567792Z" level=info msg="CreateContainer within sandbox \"e639f8e9c5e07026ed561ce8aed0b424545479a412d579cca9caf82573d98127\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:42:16.513556 containerd[1641]: time="2025-09-12T17:42:16.513516002Z" level=info msg="CreateContainer within sandbox \"e639f8e9c5e07026ed561ce8aed0b424545479a412d579cca9caf82573d98127\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"15c736bc2337caa18cb4cdaef03be7d78848a4176fe3d22e4689d1c703c2592e\"" Sep 12 17:42:16.523498 containerd[1641]: time="2025-09-12T17:42:16.522362068Z" level=info msg="StartContainer for \"15c736bc2337caa18cb4cdaef03be7d78848a4176fe3d22e4689d1c703c2592e\"" Sep 12 17:42:16.593274 containerd[1641]: time="2025-09-12T17:42:16.593204148Z" level=info msg="StartContainer for \"15c736bc2337caa18cb4cdaef03be7d78848a4176fe3d22e4689d1c703c2592e\" returns successfully" Sep 12 17:42:17.195347 kubelet[2895]: E0912 17:42:17.195280 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-6vqsr" podUID="6c2d1714-1041-45d2-9888-c6e010910454" Sep 12 17:42:17.466314 kubelet[2895]: I0912 17:42:17.463273 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76665cf9bb-2j9dd" podStartSLOduration=1.810378526 podStartE2EDuration="4.462521211s" podCreationTimestamp="2025-09-12 17:42:13 +0000 UTC" firstStartedPulling="2025-09-12 17:42:13.715762356 +0000 UTC m=+15.666441385" lastFinishedPulling="2025-09-12 17:42:16.367905037 +0000 UTC m=+18.318584070" observedRunningTime="2025-09-12 17:42:17.460267971 +0000 UTC m=+19.410947002" watchObservedRunningTime="2025-09-12 17:42:17.462521211 +0000 UTC m=+19.413200253" Sep 12 17:42:17.518246 kubelet[2895]: E0912 17:42:17.518203 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:17.518246 kubelet[2895]: W0912 17:42:17.518234 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:17.518246 kubelet[2895]: E0912 17:42:17.518252 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:17.518438 kubelet[2895]: E0912 17:42:17.518423 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:17.518438 kubelet[2895]: W0912 17:42:17.518433 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:17.518623 kubelet[2895]: E0912 17:42:17.518441 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:17.518623 kubelet[2895]: E0912 17:42:17.518578 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:17.518623 kubelet[2895]: W0912 17:42:17.518583 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:17.518623 kubelet[2895]: E0912 17:42:17.518589 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:17.518786 kubelet[2895]: E0912 17:42:17.518739 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:17.518786 kubelet[2895]: W0912 17:42:17.518744 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:17.518786 kubelet[2895]: E0912 17:42:17.518750 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:17.518891 kubelet[2895]: E0912 17:42:17.518874 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:17.518891 kubelet[2895]: W0912 17:42:17.518883 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:17.518891 kubelet[2895]: E0912 17:42:17.518893 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:17.525227 kubelet[2895]: E0912 17:42:17.525215 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:17.525227 kubelet[2895]: W0912 17:42:17.525225 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:17.525283 kubelet[2895]: E0912 17:42:17.525231 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:17.527505 kubelet[2895]: E0912 17:42:17.527491 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:17.527583 kubelet[2895]: W0912 17:42:17.527574 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:17.527644 kubelet[2895]: E0912 17:42:17.527634 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:18.327213 containerd[1641]: time="2025-09-12T17:42:18.327184463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:18.333774 containerd[1641]: time="2025-09-12T17:42:18.333743742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:42:18.341197 containerd[1641]: time="2025-09-12T17:42:18.341181845Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:18.352897 containerd[1641]: time="2025-09-12T17:42:18.352803883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:18.361684 containerd[1641]: time="2025-09-12T17:42:18.353203608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.944505489s" Sep 12 17:42:18.361684 containerd[1641]: time="2025-09-12T17:42:18.353219959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:42:18.368161 containerd[1641]: time="2025-09-12T17:42:18.368072792Z" level=info msg="CreateContainer within sandbox \"ecf7d5974b16f3140a535eb36316ec7f797ce23aec62d9aea3ecd09f453fd2aa\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:42:18.395861 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:42:18.412796 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:42:18.395892 systemd-resolved[1540]: Flushed all caches. Sep 12 17:42:18.422068 kubelet[2895]: I0912 17:42:18.422051 2895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:42:18.429481 kubelet[2895]: E0912 17:42:18.429373 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.429481 kubelet[2895]: W0912 17:42:18.429384 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.429481 kubelet[2895]: E0912 17:42:18.429407 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:18.429696 kubelet[2895]: E0912 17:42:18.429542 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.429696 kubelet[2895]: W0912 17:42:18.429558 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.429696 kubelet[2895]: E0912 17:42:18.429565 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:18.429868 kubelet[2895]: E0912 17:42:18.429729 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.429868 kubelet[2895]: W0912 17:42:18.429743 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.429868 kubelet[2895]: E0912 17:42:18.429750 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:18.430014 kubelet[2895]: E0912 17:42:18.429947 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.430014 kubelet[2895]: W0912 17:42:18.429953 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.430014 kubelet[2895]: E0912 17:42:18.429959 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:18.432677 kubelet[2895]: E0912 17:42:18.432598 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.432677 kubelet[2895]: W0912 17:42:18.432608 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.432677 kubelet[2895]: E0912 17:42:18.432618 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:18.432821 kubelet[2895]: E0912 17:42:18.432782 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.432821 kubelet[2895]: W0912 17:42:18.432788 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.432821 kubelet[2895]: E0912 17:42:18.432798 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:18.432906 kubelet[2895]: E0912 17:42:18.432886 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.432906 kubelet[2895]: W0912 17:42:18.432891 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.432906 kubelet[2895]: E0912 17:42:18.432898 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:18.433950 kubelet[2895]: E0912 17:42:18.432986 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.433950 kubelet[2895]: W0912 17:42:18.432991 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.433950 kubelet[2895]: E0912 17:42:18.432999 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:18.433950 kubelet[2895]: E0912 17:42:18.433074 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.433950 kubelet[2895]: W0912 17:42:18.433078 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.433950 kubelet[2895]: E0912 17:42:18.433084 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:18.433950 kubelet[2895]: E0912 17:42:18.433176 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.433950 kubelet[2895]: W0912 17:42:18.433181 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.433950 kubelet[2895]: E0912 17:42:18.433186 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:18.433950 kubelet[2895]: E0912 17:42:18.433437 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.434240 kubelet[2895]: W0912 17:42:18.433443 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.434240 kubelet[2895]: E0912 17:42:18.433449 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:18.434240 kubelet[2895]: E0912 17:42:18.433532 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.434240 kubelet[2895]: W0912 17:42:18.433537 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.434240 kubelet[2895]: E0912 17:42:18.433542 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:18.434240 kubelet[2895]: E0912 17:42:18.433614 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.434240 kubelet[2895]: W0912 17:42:18.433619 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.434240 kubelet[2895]: E0912 17:42:18.433624 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:18.434240 kubelet[2895]: E0912 17:42:18.433730 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.434240 kubelet[2895]: W0912 17:42:18.433734 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.434440 kubelet[2895]: E0912 17:42:18.433739 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:18.434440 kubelet[2895]: E0912 17:42:18.433880 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.434440 kubelet[2895]: W0912 17:42:18.433884 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.434440 kubelet[2895]: E0912 17:42:18.433889 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:18.434440 kubelet[2895]: E0912 17:42:18.433978 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.434440 kubelet[2895]: W0912 17:42:18.433984 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.434440 kubelet[2895]: E0912 17:42:18.433989 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:18.434440 kubelet[2895]: E0912 17:42:18.434069 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.434440 kubelet[2895]: W0912 17:42:18.434074 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.434440 kubelet[2895]: E0912 17:42:18.434079 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:18.434635 kubelet[2895]: E0912 17:42:18.434169 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.434635 kubelet[2895]: W0912 17:42:18.434174 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.434635 kubelet[2895]: E0912 17:42:18.434180 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:42:18.434635 kubelet[2895]: E0912 17:42:18.434361 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:42:18.434635 kubelet[2895]: W0912 17:42:18.434366 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:42:18.434635 kubelet[2895]: E0912 17:42:18.434371 2895 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:42:18.462835 containerd[1641]: time="2025-09-12T17:42:18.462347025Z" level=info msg="CreateContainer within sandbox \"ecf7d5974b16f3140a535eb36316ec7f797ce23aec62d9aea3ecd09f453fd2aa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"82e697118e84e24922cd551364ee86e46a4df9db501840ed3ea6e4ab2e7ea00e\"" Sep 12 17:42:18.464018 containerd[1641]: time="2025-09-12T17:42:18.463607672Z" level=info msg="StartContainer for \"82e697118e84e24922cd551364ee86e46a4df9db501840ed3ea6e4ab2e7ea00e\"" Sep 12 17:42:18.524399 containerd[1641]: time="2025-09-12T17:42:18.524369634Z" level=info msg="StartContainer for \"82e697118e84e24922cd551364ee86e46a4df9db501840ed3ea6e4ab2e7ea00e\" returns successfully" Sep 12 17:42:18.549263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82e697118e84e24922cd551364ee86e46a4df9db501840ed3ea6e4ab2e7ea00e-rootfs.mount: Deactivated successfully. 
Sep 12 17:42:19.179890 kubelet[2895]: E0912 17:42:19.179846 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vqsr" podUID="6c2d1714-1041-45d2-9888-c6e010910454" Sep 12 17:42:19.423166 containerd[1641]: time="2025-09-12T17:42:19.414318501Z" level=info msg="shim disconnected" id=82e697118e84e24922cd551364ee86e46a4df9db501840ed3ea6e4ab2e7ea00e namespace=k8s.io Sep 12 17:42:19.423166 containerd[1641]: time="2025-09-12T17:42:19.422941980Z" level=warning msg="cleaning up after shim disconnected" id=82e697118e84e24922cd551364ee86e46a4df9db501840ed3ea6e4ab2e7ea00e namespace=k8s.io Sep 12 17:42:19.423166 containerd[1641]: time="2025-09-12T17:42:19.422951386Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:42:20.427231 containerd[1641]: time="2025-09-12T17:42:20.427122546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:42:21.179367 kubelet[2895]: E0912 17:42:21.179330 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vqsr" podUID="6c2d1714-1041-45d2-9888-c6e010910454" Sep 12 17:42:23.179171 kubelet[2895]: E0912 17:42:23.179138 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vqsr" podUID="6c2d1714-1041-45d2-9888-c6e010910454" Sep 12 17:42:23.527100 containerd[1641]: time="2025-09-12T17:42:23.527062405Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:23.527584 containerd[1641]: time="2025-09-12T17:42:23.527554527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:42:23.528197 containerd[1641]: time="2025-09-12T17:42:23.528008935Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:23.530929 containerd[1641]: time="2025-09-12T17:42:23.530903260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:23.534338 containerd[1641]: time="2025-09-12T17:42:23.533750465Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.106519259s" Sep 12 17:42:23.534338 containerd[1641]: time="2025-09-12T17:42:23.533771140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:42:23.536457 containerd[1641]: time="2025-09-12T17:42:23.536187949Z" level=info msg="CreateContainer within sandbox \"ecf7d5974b16f3140a535eb36316ec7f797ce23aec62d9aea3ecd09f453fd2aa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:42:23.554371 containerd[1641]: time="2025-09-12T17:42:23.554311735Z" level=info msg="CreateContainer within sandbox \"ecf7d5974b16f3140a535eb36316ec7f797ce23aec62d9aea3ecd09f453fd2aa\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"16e78cffa78677f4f15c40bd652f5ddcdb39bc1baf6b0791bd8374f58cc09392\"" Sep 12 17:42:23.555241 containerd[1641]: time="2025-09-12T17:42:23.554756707Z" level=info msg="StartContainer for \"16e78cffa78677f4f15c40bd652f5ddcdb39bc1baf6b0791bd8374f58cc09392\"" Sep 12 17:42:23.578403 systemd[1]: run-containerd-runc-k8s.io-16e78cffa78677f4f15c40bd652f5ddcdb39bc1baf6b0791bd8374f58cc09392-runc.9EOy3U.mount: Deactivated successfully. Sep 12 17:42:23.608681 containerd[1641]: time="2025-09-12T17:42:23.608660008Z" level=info msg="StartContainer for \"16e78cffa78677f4f15c40bd652f5ddcdb39bc1baf6b0791bd8374f58cc09392\" returns successfully" Sep 12 17:42:25.179717 kubelet[2895]: E0912 17:42:25.179685 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vqsr" podUID="6c2d1714-1041-45d2-9888-c6e010910454" Sep 12 17:42:25.289175 containerd[1641]: time="2025-09-12T17:42:25.287276529Z" level=info msg="shim disconnected" id=16e78cffa78677f4f15c40bd652f5ddcdb39bc1baf6b0791bd8374f58cc09392 namespace=k8s.io Sep 12 17:42:25.289175 containerd[1641]: time="2025-09-12T17:42:25.287330683Z" level=warning msg="cleaning up after shim disconnected" id=16e78cffa78677f4f15c40bd652f5ddcdb39bc1baf6b0791bd8374f58cc09392 namespace=k8s.io Sep 12 17:42:25.289175 containerd[1641]: time="2025-09-12T17:42:25.287338231Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:42:25.288055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16e78cffa78677f4f15c40bd652f5ddcdb39bc1baf6b0791bd8374f58cc09392-rootfs.mount: Deactivated successfully. 
Sep 12 17:42:25.302157 kubelet[2895]: I0912 17:42:25.300812 2895 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:42:25.435176 containerd[1641]: time="2025-09-12T17:42:25.434504767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:42:25.481754 kubelet[2895]: I0912 17:42:25.481713 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c40b048-393b-4ff5-8a75-2bc642369f86-goldmane-ca-bundle\") pod \"goldmane-7988f88666-drjps\" (UID: \"4c40b048-393b-4ff5-8a75-2bc642369f86\") " pod="calico-system/goldmane-7988f88666-drjps" Sep 12 17:42:25.481754 kubelet[2895]: I0912 17:42:25.481760 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxljb\" (UniqueName: \"kubernetes.io/projected/66750a83-1e25-4052-acac-1ee5648a6796-kube-api-access-qxljb\") pod \"coredns-7c65d6cfc9-2clqw\" (UID: \"66750a83-1e25-4052-acac-1ee5648a6796\") " pod="kube-system/coredns-7c65d6cfc9-2clqw" Sep 12 17:42:25.481877 kubelet[2895]: I0912 17:42:25.481784 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjh9x\" (UniqueName: \"kubernetes.io/projected/24866c61-e05d-457b-a1e7-0f1c845f8a0f-kube-api-access-vjh9x\") pod \"calico-kube-controllers-c8d94b68-lfb95\" (UID: \"24866c61-e05d-457b-a1e7-0f1c845f8a0f\") " pod="calico-system/calico-kube-controllers-c8d94b68-lfb95" Sep 12 17:42:25.481877 kubelet[2895]: I0912 17:42:25.481802 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4c40b048-393b-4ff5-8a75-2bc642369f86-goldmane-key-pair\") pod \"goldmane-7988f88666-drjps\" (UID: \"4c40b048-393b-4ff5-8a75-2bc642369f86\") " pod="calico-system/goldmane-7988f88666-drjps" Sep 12 17:42:25.481877 
kubelet[2895]: I0912 17:42:25.481819 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66750a83-1e25-4052-acac-1ee5648a6796-config-volume\") pod \"coredns-7c65d6cfc9-2clqw\" (UID: \"66750a83-1e25-4052-acac-1ee5648a6796\") " pod="kube-system/coredns-7c65d6cfc9-2clqw" Sep 12 17:42:25.481877 kubelet[2895]: I0912 17:42:25.481836 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21242ab1-1a34-4925-bcf3-2c7f923b75c1-config-volume\") pod \"coredns-7c65d6cfc9-7mxql\" (UID: \"21242ab1-1a34-4925-bcf3-2c7f923b75c1\") " pod="kube-system/coredns-7c65d6cfc9-7mxql" Sep 12 17:42:25.481877 kubelet[2895]: I0912 17:42:25.481854 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77795\" (UniqueName: \"kubernetes.io/projected/0e952190-a867-4e80-a571-c82f9ac73a21-kube-api-access-77795\") pod \"whisker-b96f44689-f2d8j\" (UID: \"0e952190-a867-4e80-a571-c82f9ac73a21\") " pod="calico-system/whisker-b96f44689-f2d8j" Sep 12 17:42:25.482295 kubelet[2895]: I0912 17:42:25.481880 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe-calico-apiserver-certs\") pod \"calico-apiserver-6dfb6749f-dhd9m\" (UID: \"5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe\") " pod="calico-apiserver/calico-apiserver-6dfb6749f-dhd9m" Sep 12 17:42:25.482295 kubelet[2895]: I0912 17:42:25.481894 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24866c61-e05d-457b-a1e7-0f1c845f8a0f-tigera-ca-bundle\") pod \"calico-kube-controllers-c8d94b68-lfb95\" (UID: \"24866c61-e05d-457b-a1e7-0f1c845f8a0f\") " 
pod="calico-system/calico-kube-controllers-c8d94b68-lfb95" Sep 12 17:42:25.482295 kubelet[2895]: I0912 17:42:25.481903 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c40b048-393b-4ff5-8a75-2bc642369f86-config\") pod \"goldmane-7988f88666-drjps\" (UID: \"4c40b048-393b-4ff5-8a75-2bc642369f86\") " pod="calico-system/goldmane-7988f88666-drjps" Sep 12 17:42:25.482295 kubelet[2895]: I0912 17:42:25.481916 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5b939991-56b9-412e-bee1-03c9df66c4a5-calico-apiserver-certs\") pod \"calico-apiserver-6dfb6749f-f67rq\" (UID: \"5b939991-56b9-412e-bee1-03c9df66c4a5\") " pod="calico-apiserver/calico-apiserver-6dfb6749f-f67rq" Sep 12 17:42:25.482295 kubelet[2895]: I0912 17:42:25.481931 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4jfx\" (UniqueName: \"kubernetes.io/projected/21242ab1-1a34-4925-bcf3-2c7f923b75c1-kube-api-access-p4jfx\") pod \"coredns-7c65d6cfc9-7mxql\" (UID: \"21242ab1-1a34-4925-bcf3-2c7f923b75c1\") " pod="kube-system/coredns-7c65d6cfc9-7mxql" Sep 12 17:42:25.482737 kubelet[2895]: I0912 17:42:25.481941 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0e952190-a867-4e80-a571-c82f9ac73a21-whisker-backend-key-pair\") pod \"whisker-b96f44689-f2d8j\" (UID: \"0e952190-a867-4e80-a571-c82f9ac73a21\") " pod="calico-system/whisker-b96f44689-f2d8j" Sep 12 17:42:25.482737 kubelet[2895]: I0912 17:42:25.481951 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e952190-a867-4e80-a571-c82f9ac73a21-whisker-ca-bundle\") pod 
\"whisker-b96f44689-f2d8j\" (UID: \"0e952190-a867-4e80-a571-c82f9ac73a21\") " pod="calico-system/whisker-b96f44689-f2d8j" Sep 12 17:42:25.482737 kubelet[2895]: I0912 17:42:25.481960 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms8mg\" (UniqueName: \"kubernetes.io/projected/5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe-kube-api-access-ms8mg\") pod \"calico-apiserver-6dfb6749f-dhd9m\" (UID: \"5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe\") " pod="calico-apiserver/calico-apiserver-6dfb6749f-dhd9m" Sep 12 17:42:25.482737 kubelet[2895]: I0912 17:42:25.481970 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d9ng\" (UniqueName: \"kubernetes.io/projected/4c40b048-393b-4ff5-8a75-2bc642369f86-kube-api-access-5d9ng\") pod \"goldmane-7988f88666-drjps\" (UID: \"4c40b048-393b-4ff5-8a75-2bc642369f86\") " pod="calico-system/goldmane-7988f88666-drjps" Sep 12 17:42:25.482737 kubelet[2895]: I0912 17:42:25.482004 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzvkr\" (UniqueName: \"kubernetes.io/projected/5b939991-56b9-412e-bee1-03c9df66c4a5-kube-api-access-zzvkr\") pod \"calico-apiserver-6dfb6749f-f67rq\" (UID: \"5b939991-56b9-412e-bee1-03c9df66c4a5\") " pod="calico-apiserver/calico-apiserver-6dfb6749f-f67rq" Sep 12 17:42:25.631252 containerd[1641]: time="2025-09-12T17:42:25.631222766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dfb6749f-dhd9m,Uid:5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:42:25.636562 containerd[1641]: time="2025-09-12T17:42:25.636532048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8d94b68-lfb95,Uid:24866c61-e05d-457b-a1e7-0f1c845f8a0f,Namespace:calico-system,Attempt:0,}" Sep 12 17:42:25.642506 containerd[1641]: time="2025-09-12T17:42:25.642386568Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7mxql,Uid:21242ab1-1a34-4925-bcf3-2c7f923b75c1,Namespace:kube-system,Attempt:0,}" Sep 12 17:42:25.655756 containerd[1641]: time="2025-09-12T17:42:25.655365884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dfb6749f-f67rq,Uid:5b939991-56b9-412e-bee1-03c9df66c4a5,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:42:25.661392 containerd[1641]: time="2025-09-12T17:42:25.661364373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b96f44689-f2d8j,Uid:0e952190-a867-4e80-a571-c82f9ac73a21,Namespace:calico-system,Attempt:0,}" Sep 12 17:42:25.664204 containerd[1641]: time="2025-09-12T17:42:25.664184032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2clqw,Uid:66750a83-1e25-4052-acac-1ee5648a6796,Namespace:kube-system,Attempt:0,}" Sep 12 17:42:25.664571 containerd[1641]: time="2025-09-12T17:42:25.664559546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-drjps,Uid:4c40b048-393b-4ff5-8a75-2bc642369f86,Namespace:calico-system,Attempt:0,}" Sep 12 17:42:25.983039 containerd[1641]: time="2025-09-12T17:42:25.981635358Z" level=error msg="Failed to destroy network for sandbox \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.983332 containerd[1641]: time="2025-09-12T17:42:25.983314853Z" level=error msg="Failed to destroy network for sandbox \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.986579 containerd[1641]: time="2025-09-12T17:42:25.986559888Z" 
level=error msg="Failed to destroy network for sandbox \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.987018 containerd[1641]: time="2025-09-12T17:42:25.986956467Z" level=error msg="encountered an error cleaning up failed sandbox \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.987642 containerd[1641]: time="2025-09-12T17:42:25.987417519Z" level=error msg="Failed to destroy network for sandbox \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.987642 containerd[1641]: time="2025-09-12T17:42:25.987598071Z" level=error msg="encountered an error cleaning up failed sandbox \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.991305 containerd[1641]: time="2025-09-12T17:42:25.990763210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2clqw,Uid:66750a83-1e25-4052-acac-1ee5648a6796,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.991708 containerd[1641]: time="2025-09-12T17:42:25.991361423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8d94b68-lfb95,Uid:24866c61-e05d-457b-a1e7-0f1c845f8a0f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.994825 kubelet[2895]: E0912 17:42:25.994286 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.994825 kubelet[2895]: E0912 17:42:25.994331 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2clqw" Sep 12 17:42:25.994825 kubelet[2895]: E0912 17:42:25.994351 2895 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2clqw" Sep 12 17:42:25.994920 containerd[1641]: time="2025-09-12T17:42:25.994569242Z" level=error msg="Failed to destroy network for sandbox \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.994920 containerd[1641]: time="2025-09-12T17:42:25.986681927Z" level=error msg="encountered an error cleaning up failed sandbox \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.994920 containerd[1641]: time="2025-09-12T17:42:25.994781863Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dfb6749f-f67rq,Uid:5b939991-56b9-412e-bee1-03c9df66c4a5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.994992 kubelet[2895]: E0912 17:42:25.994385 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2clqw_kube-system(66750a83-1e25-4052-acac-1ee5648a6796)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2clqw_kube-system(66750a83-1e25-4052-acac-1ee5648a6796)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2clqw" podUID="66750a83-1e25-4052-acac-1ee5648a6796" Sep 12 17:42:25.994992 kubelet[2895]: E0912 17:42:25.994261 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.994992 kubelet[2895]: E0912 17:42:25.994477 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c8d94b68-lfb95" Sep 12 17:42:25.995067 kubelet[2895]: E0912 17:42:25.994488 2895 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c8d94b68-lfb95" Sep 12 17:42:25.995067 kubelet[2895]: E0912 17:42:25.994503 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-c8d94b68-lfb95_calico-system(24866c61-e05d-457b-a1e7-0f1c845f8a0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c8d94b68-lfb95_calico-system(24866c61-e05d-457b-a1e7-0f1c845f8a0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c8d94b68-lfb95" podUID="24866c61-e05d-457b-a1e7-0f1c845f8a0f" Sep 12 17:42:25.995067 kubelet[2895]: E0912 17:42:25.994865 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.995142 kubelet[2895]: E0912 17:42:25.994881 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dfb6749f-f67rq" Sep 12 17:42:25.995142 kubelet[2895]: E0912 17:42:25.994891 2895 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dfb6749f-f67rq" Sep 12 17:42:25.995142 kubelet[2895]: E0912 17:42:25.994915 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dfb6749f-f67rq_calico-apiserver(5b939991-56b9-412e-bee1-03c9df66c4a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dfb6749f-f67rq_calico-apiserver(5b939991-56b9-412e-bee1-03c9df66c4a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dfb6749f-f67rq" podUID="5b939991-56b9-412e-bee1-03c9df66c4a5" Sep 12 17:42:25.995381 containerd[1641]: time="2025-09-12T17:42:25.994805419Z" level=error msg="Failed to destroy network for sandbox \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.995412 containerd[1641]: time="2025-09-12T17:42:25.994786136Z" level=error msg="encountered an error cleaning up failed sandbox \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.995468 containerd[1641]: time="2025-09-12T17:42:25.995449364Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6dfb6749f-dhd9m,Uid:5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.995523 containerd[1641]: time="2025-09-12T17:42:25.986685400Z" level=error msg="encountered an error cleaning up failed sandbox \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.995553 containerd[1641]: time="2025-09-12T17:42:25.995527521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b96f44689-f2d8j,Uid:0e952190-a867-4e80-a571-c82f9ac73a21,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.995553 containerd[1641]: time="2025-09-12T17:42:25.986721143Z" level=error msg="Failed to destroy network for sandbox \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.995726 containerd[1641]: time="2025-09-12T17:42:25.995705536Z" level=error msg="encountered an error cleaning up failed sandbox 
\"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.996270 containerd[1641]: time="2025-09-12T17:42:25.996245953Z" level=error msg="encountered an error cleaning up failed sandbox \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.996712 containerd[1641]: time="2025-09-12T17:42:25.996431928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7mxql,Uid:21242ab1-1a34-4925-bcf3-2c7f923b75c1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.996712 containerd[1641]: time="2025-09-12T17:42:25.996463137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-drjps,Uid:4c40b048-393b-4ff5-8a75-2bc642369f86,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.996770 kubelet[2895]: E0912 17:42:25.996311 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.996770 kubelet[2895]: E0912 17:42:25.996327 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b96f44689-f2d8j" Sep 12 17:42:25.996770 kubelet[2895]: E0912 17:42:25.996337 2895 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b96f44689-f2d8j" Sep 12 17:42:25.996839 kubelet[2895]: E0912 17:42:25.996354 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b96f44689-f2d8j_calico-system(0e952190-a867-4e80-a571-c82f9ac73a21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b96f44689-f2d8j_calico-system(0e952190-a867-4e80-a571-c82f9ac73a21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-b96f44689-f2d8j" podUID="0e952190-a867-4e80-a571-c82f9ac73a21" Sep 12 17:42:25.996839 kubelet[2895]: E0912 17:42:25.996372 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.996839 kubelet[2895]: E0912 17:42:25.996382 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dfb6749f-dhd9m" Sep 12 17:42:25.996908 kubelet[2895]: E0912 17:42:25.996501 2895 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dfb6749f-dhd9m" Sep 12 17:42:25.996908 kubelet[2895]: E0912 17:42:25.996516 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dfb6749f-dhd9m_calico-apiserver(5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dfb6749f-dhd9m_calico-apiserver(5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dfb6749f-dhd9m" podUID="5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe" Sep 12 17:42:25.996908 kubelet[2895]: E0912 17:42:25.996628 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.997145 kubelet[2895]: E0912 17:42:25.996642 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-drjps" Sep 12 17:42:25.997169 kubelet[2895]: E0912 17:42:25.997156 2895 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-drjps" Sep 12 17:42:25.997190 kubelet[2895]: E0912 17:42:25.997175 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-7988f88666-drjps_calico-system(4c40b048-393b-4ff5-8a75-2bc642369f86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-drjps_calico-system(4c40b048-393b-4ff5-8a75-2bc642369f86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-drjps" podUID="4c40b048-393b-4ff5-8a75-2bc642369f86" Sep 12 17:42:25.997190 kubelet[2895]: E0912 17:42:25.997120 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:25.997239 kubelet[2895]: E0912 17:42:25.997195 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7mxql" Sep 12 17:42:25.997239 kubelet[2895]: E0912 17:42:25.997214 2895 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7mxql" Sep 12 17:42:25.997239 kubelet[2895]: E0912 17:42:25.997234 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-7mxql_kube-system(21242ab1-1a34-4925-bcf3-2c7f923b75c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-7mxql_kube-system(21242ab1-1a34-4925-bcf3-2c7f923b75c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7mxql" podUID="21242ab1-1a34-4925-bcf3-2c7f923b75c1" Sep 12 17:42:26.436224 kubelet[2895]: I0912 17:42:26.436158 2895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Sep 12 17:42:26.438237 kubelet[2895]: I0912 17:42:26.438029 2895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Sep 12 17:42:26.463272 kubelet[2895]: I0912 17:42:26.463036 2895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Sep 12 17:42:26.464993 kubelet[2895]: I0912 17:42:26.464406 2895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Sep 12 17:42:26.468519 kubelet[2895]: I0912 17:42:26.468499 2895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Sep 12 17:42:26.470991 kubelet[2895]: I0912 
17:42:26.470962 2895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Sep 12 17:42:26.473079 kubelet[2895]: I0912 17:42:26.473063 2895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Sep 12 17:42:26.495152 containerd[1641]: time="2025-09-12T17:42:26.495128465Z" level=info msg="StopPodSandbox for \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\"" Sep 12 17:42:26.496378 containerd[1641]: time="2025-09-12T17:42:26.495628645Z" level=info msg="StopPodSandbox for \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\"" Sep 12 17:42:26.496378 containerd[1641]: time="2025-09-12T17:42:26.496174645Z" level=info msg="Ensure that sandbox 892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208 in task-service has been cleanup successfully" Sep 12 17:42:26.496378 containerd[1641]: time="2025-09-12T17:42:26.496201397Z" level=info msg="StopPodSandbox for \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\"" Sep 12 17:42:26.496378 containerd[1641]: time="2025-09-12T17:42:26.496271120Z" level=info msg="Ensure that sandbox b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb in task-service has been cleanup successfully" Sep 12 17:42:26.496984 containerd[1641]: time="2025-09-12T17:42:26.496969500Z" level=info msg="StopPodSandbox for \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\"" Sep 12 17:42:26.497074 containerd[1641]: time="2025-09-12T17:42:26.497057799Z" level=info msg="Ensure that sandbox 341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8 in task-service has been cleanup successfully" Sep 12 17:42:26.497153 containerd[1641]: time="2025-09-12T17:42:26.497137733Z" level=info msg="StopPodSandbox for \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\"" Sep 12 
17:42:26.497229 containerd[1641]: time="2025-09-12T17:42:26.497217351Z" level=info msg="Ensure that sandbox 9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2 in task-service has been cleanup successfully" Sep 12 17:42:26.497782 containerd[1641]: time="2025-09-12T17:42:26.497770747Z" level=info msg="StopPodSandbox for \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\"" Sep 12 17:42:26.497924 containerd[1641]: time="2025-09-12T17:42:26.497914224Z" level=info msg="Ensure that sandbox a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b in task-service has been cleanup successfully" Sep 12 17:42:26.498164 containerd[1641]: time="2025-09-12T17:42:26.497790154Z" level=info msg="StopPodSandbox for \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\"" Sep 12 17:42:26.498266 containerd[1641]: time="2025-09-12T17:42:26.498250969Z" level=info msg="Ensure that sandbox 8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98 in task-service has been cleanup successfully" Sep 12 17:42:26.499325 containerd[1641]: time="2025-09-12T17:42:26.496178795Z" level=info msg="Ensure that sandbox 17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245 in task-service has been cleanup successfully" Sep 12 17:42:26.532391 containerd[1641]: time="2025-09-12T17:42:26.532317878Z" level=error msg="StopPodSandbox for \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\" failed" error="failed to destroy network for sandbox \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:26.532628 kubelet[2895]: E0912 17:42:26.532548 2895 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Sep 12 17:42:26.539239 kubelet[2895]: E0912 17:42:26.532642 2895 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208"} Sep 12 17:42:26.539239 kubelet[2895]: E0912 17:42:26.539036 2895 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66750a83-1e25-4052-acac-1ee5648a6796\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:42:26.539239 kubelet[2895]: E0912 17:42:26.539063 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66750a83-1e25-4052-acac-1ee5648a6796\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2clqw" podUID="66750a83-1e25-4052-acac-1ee5648a6796" Sep 12 17:42:26.543253 containerd[1641]: time="2025-09-12T17:42:26.543228356Z" level=error msg="StopPodSandbox for \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\" failed" error="failed to destroy network for sandbox 
\"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:26.543420 kubelet[2895]: E0912 17:42:26.543398 2895 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Sep 12 17:42:26.543462 kubelet[2895]: E0912 17:42:26.543426 2895 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2"} Sep 12 17:42:26.543462 kubelet[2895]: E0912 17:42:26.543443 2895 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:42:26.543516 kubelet[2895]: E0912 17:42:26.543459 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dfb6749f-dhd9m" podUID="5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe" Sep 12 17:42:26.545096 containerd[1641]: time="2025-09-12T17:42:26.545052350Z" level=error msg="StopPodSandbox for \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\" failed" error="failed to destroy network for sandbox \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:26.545151 kubelet[2895]: E0912 17:42:26.545131 2895 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Sep 12 17:42:26.545419 kubelet[2895]: E0912 17:42:26.545335 2895 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245"} Sep 12 17:42:26.545419 kubelet[2895]: E0912 17:42:26.545356 2895 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e952190-a867-4e80-a571-c82f9ac73a21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Sep 12 17:42:26.545419 kubelet[2895]: E0912 17:42:26.545368 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e952190-a867-4e80-a571-c82f9ac73a21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b96f44689-f2d8j" podUID="0e952190-a867-4e80-a571-c82f9ac73a21" Sep 12 17:42:26.554240 containerd[1641]: time="2025-09-12T17:42:26.554162207Z" level=error msg="StopPodSandbox for \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\" failed" error="failed to destroy network for sandbox \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:26.554465 kubelet[2895]: E0912 17:42:26.554356 2895 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Sep 12 17:42:26.554465 kubelet[2895]: E0912 17:42:26.554388 2895 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb"} Sep 12 17:42:26.554465 kubelet[2895]: E0912 17:42:26.554409 2895 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c40b048-393b-4ff5-8a75-2bc642369f86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:42:26.554465 kubelet[2895]: E0912 17:42:26.554426 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c40b048-393b-4ff5-8a75-2bc642369f86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-drjps" podUID="4c40b048-393b-4ff5-8a75-2bc642369f86" Sep 12 17:42:26.558598 containerd[1641]: time="2025-09-12T17:42:26.558576485Z" level=error msg="StopPodSandbox for \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\" failed" error="failed to destroy network for sandbox \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:26.558830 kubelet[2895]: E0912 17:42:26.558737 2895 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Sep 12 17:42:26.558917 kubelet[2895]: E0912 17:42:26.558863 2895 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8"} Sep 12 17:42:26.558943 kubelet[2895]: E0912 17:42:26.558915 2895 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21242ab1-1a34-4925-bcf3-2c7f923b75c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:42:26.559007 kubelet[2895]: E0912 17:42:26.558938 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21242ab1-1a34-4925-bcf3-2c7f923b75c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7mxql" podUID="21242ab1-1a34-4925-bcf3-2c7f923b75c1" Sep 12 17:42:26.560313 containerd[1641]: time="2025-09-12T17:42:26.560293877Z" level=error msg="StopPodSandbox for \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\" failed" error="failed to destroy network for sandbox \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:26.560444 kubelet[2895]: E0912 17:42:26.560428 2895 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Sep 12 17:42:26.560474 kubelet[2895]: E0912 17:42:26.560447 2895 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b"} Sep 12 17:42:26.560474 kubelet[2895]: E0912 17:42:26.560463 2895 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5b939991-56b9-412e-bee1-03c9df66c4a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:42:26.560529 kubelet[2895]: E0912 17:42:26.560475 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5b939991-56b9-412e-bee1-03c9df66c4a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6dfb6749f-f67rq" podUID="5b939991-56b9-412e-bee1-03c9df66c4a5" Sep 12 17:42:26.561157 containerd[1641]: time="2025-09-12T17:42:26.561136770Z" level=error msg="StopPodSandbox for \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\" failed" error="failed to destroy network for sandbox \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:26.561235 kubelet[2895]: E0912 17:42:26.561217 2895 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Sep 12 17:42:26.561262 kubelet[2895]: E0912 17:42:26.561237 2895 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98"} Sep 12 17:42:26.561292 kubelet[2895]: E0912 17:42:26.561271 2895 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24866c61-e05d-457b-a1e7-0f1c845f8a0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:42:26.561292 kubelet[2895]: E0912 17:42:26.561285 2895 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24866c61-e05d-457b-a1e7-0f1c845f8a0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c8d94b68-lfb95" podUID="24866c61-e05d-457b-a1e7-0f1c845f8a0f" Sep 12 17:42:27.190667 containerd[1641]: time="2025-09-12T17:42:27.188487752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6vqsr,Uid:6c2d1714-1041-45d2-9888-c6e010910454,Namespace:calico-system,Attempt:0,}" Sep 12 17:42:27.284070 containerd[1641]: time="2025-09-12T17:42:27.284043905Z" level=error msg="Failed to destroy network for sandbox \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:27.285784 containerd[1641]: time="2025-09-12T17:42:27.285767366Z" level=error msg="encountered an error cleaning up failed sandbox \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:27.285857 containerd[1641]: time="2025-09-12T17:42:27.285843963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6vqsr,Uid:6c2d1714-1041-45d2-9888-c6e010910454,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:27.285884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d-shm.mount: Deactivated successfully. Sep 12 17:42:27.287087 kubelet[2895]: E0912 17:42:27.286212 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:27.287087 kubelet[2895]: E0912 17:42:27.286248 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6vqsr" Sep 12 17:42:27.287087 kubelet[2895]: E0912 17:42:27.286267 2895 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6vqsr" Sep 12 17:42:27.287206 kubelet[2895]: E0912 17:42:27.286297 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-6vqsr_calico-system(6c2d1714-1041-45d2-9888-c6e010910454)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6vqsr_calico-system(6c2d1714-1041-45d2-9888-c6e010910454)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6vqsr" podUID="6c2d1714-1041-45d2-9888-c6e010910454" Sep 12 17:42:27.474999 kubelet[2895]: I0912 17:42:27.474980 2895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Sep 12 17:42:27.475758 containerd[1641]: time="2025-09-12T17:42:27.475733700Z" level=info msg="StopPodSandbox for \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\"" Sep 12 17:42:27.475846 containerd[1641]: time="2025-09-12T17:42:27.475832637Z" level=info msg="Ensure that sandbox 7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d in task-service has been cleanup successfully" Sep 12 17:42:27.491000 containerd[1641]: time="2025-09-12T17:42:27.490927515Z" level=error msg="StopPodSandbox for \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\" failed" error="failed to destroy network for sandbox \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:42:27.491112 kubelet[2895]: E0912 17:42:27.491085 2895 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Sep 12 17:42:27.491157 kubelet[2895]: E0912 17:42:27.491133 2895 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d"} Sep 12 17:42:27.491180 kubelet[2895]: E0912 17:42:27.491158 2895 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6c2d1714-1041-45d2-9888-c6e010910454\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:42:27.491180 kubelet[2895]: E0912 17:42:27.491172 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6c2d1714-1041-45d2-9888-c6e010910454\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6vqsr" podUID="6c2d1714-1041-45d2-9888-c6e010910454" Sep 12 17:42:29.985256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1441919245.mount: Deactivated successfully. 
Sep 12 17:42:30.146056 containerd[1641]: time="2025-09-12T17:42:30.146010301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:42:30.182725 containerd[1641]: time="2025-09-12T17:42:30.182303931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:30.216121 containerd[1641]: time="2025-09-12T17:42:30.216094824Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:30.220468 containerd[1641]: time="2025-09-12T17:42:30.217666468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 4.78195138s" Sep 12 17:42:30.220468 containerd[1641]: time="2025-09-12T17:42:30.217684438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:42:30.220468 containerd[1641]: time="2025-09-12T17:42:30.217935838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:30.358787 containerd[1641]: time="2025-09-12T17:42:30.358707910Z" level=info msg="CreateContainer within sandbox \"ecf7d5974b16f3140a535eb36316ec7f797ce23aec62d9aea3ecd09f453fd2aa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:42:30.363912 systemd-resolved[1540]: Under memory pressure, flushing caches. 
Sep 12 17:42:30.374799 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:42:30.363934 systemd-resolved[1540]: Flushed all caches. Sep 12 17:42:30.410862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2591193622.mount: Deactivated successfully. Sep 12 17:42:30.422710 containerd[1641]: time="2025-09-12T17:42:30.422684550Z" level=info msg="CreateContainer within sandbox \"ecf7d5974b16f3140a535eb36316ec7f797ce23aec62d9aea3ecd09f453fd2aa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5abe30ba0d470c408579905d50477ee729e577ac232031938ad53d8caeafd017\"" Sep 12 17:42:30.426851 containerd[1641]: time="2025-09-12T17:42:30.426827386Z" level=info msg="StartContainer for \"5abe30ba0d470c408579905d50477ee729e577ac232031938ad53d8caeafd017\"" Sep 12 17:42:30.541729 containerd[1641]: time="2025-09-12T17:42:30.541584625Z" level=info msg="StartContainer for \"5abe30ba0d470c408579905d50477ee729e577ac232031938ad53d8caeafd017\" returns successfully" Sep 12 17:42:30.679992 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:42:30.681305 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 17:42:31.137711 containerd[1641]: time="2025-09-12T17:42:31.136512645Z" level=info msg="StopPodSandbox for \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\"" Sep 12 17:42:32.041601 containerd[1641]: 2025-09-12 17:42:31.227 [INFO][4108] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Sep 12 17:42:32.041601 containerd[1641]: 2025-09-12 17:42:31.230 [INFO][4108] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" iface="eth0" netns="/var/run/netns/cni-198196fc-06d8-d6f2-bcc3-8bc39831f4b4" Sep 12 17:42:32.041601 containerd[1641]: 2025-09-12 17:42:31.230 [INFO][4108] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" iface="eth0" netns="/var/run/netns/cni-198196fc-06d8-d6f2-bcc3-8bc39831f4b4" Sep 12 17:42:32.041601 containerd[1641]: 2025-09-12 17:42:31.245 [INFO][4108] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" iface="eth0" netns="/var/run/netns/cni-198196fc-06d8-d6f2-bcc3-8bc39831f4b4" Sep 12 17:42:32.041601 containerd[1641]: 2025-09-12 17:42:31.245 [INFO][4108] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Sep 12 17:42:32.041601 containerd[1641]: 2025-09-12 17:42:31.245 [INFO][4108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Sep 12 17:42:32.041601 containerd[1641]: 2025-09-12 17:42:32.008 [INFO][4119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" HandleID="k8s-pod-network.17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Workload="localhost-k8s-whisker--b96f44689--f2d8j-eth0" Sep 12 17:42:32.041601 containerd[1641]: 2025-09-12 17:42:32.012 [INFO][4119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:32.041601 containerd[1641]: 2025-09-12 17:42:32.012 [INFO][4119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:32.041601 containerd[1641]: 2025-09-12 17:42:32.024 [WARNING][4119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" HandleID="k8s-pod-network.17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Workload="localhost-k8s-whisker--b96f44689--f2d8j-eth0" Sep 12 17:42:32.041601 containerd[1641]: 2025-09-12 17:42:32.024 [INFO][4119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" HandleID="k8s-pod-network.17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Workload="localhost-k8s-whisker--b96f44689--f2d8j-eth0" Sep 12 17:42:32.041601 containerd[1641]: 2025-09-12 17:42:32.039 [INFO][4119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:32.041601 containerd[1641]: 2025-09-12 17:42:32.040 [INFO][4108] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Sep 12 17:42:32.043225 systemd[1]: run-netns-cni\x2d198196fc\x2d06d8\x2dd6f2\x2dbcc3\x2d8bc39831f4b4.mount: Deactivated successfully. 
Sep 12 17:42:32.043733 containerd[1641]: time="2025-09-12T17:42:32.043706428Z" level=info msg="TearDown network for sandbox \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\" successfully" Sep 12 17:42:32.043733 containerd[1641]: time="2025-09-12T17:42:32.043727547Z" level=info msg="StopPodSandbox for \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\" returns successfully" Sep 12 17:42:32.276365 kubelet[2895]: I0912 17:42:32.275513 2895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0e952190-a867-4e80-a571-c82f9ac73a21-whisker-backend-key-pair\") pod \"0e952190-a867-4e80-a571-c82f9ac73a21\" (UID: \"0e952190-a867-4e80-a571-c82f9ac73a21\") " Sep 12 17:42:32.277790 kubelet[2895]: I0912 17:42:32.277706 2895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77795\" (UniqueName: \"kubernetes.io/projected/0e952190-a867-4e80-a571-c82f9ac73a21-kube-api-access-77795\") pod \"0e952190-a867-4e80-a571-c82f9ac73a21\" (UID: \"0e952190-a867-4e80-a571-c82f9ac73a21\") " Sep 12 17:42:32.289660 kubelet[2895]: I0912 17:42:32.287301 2895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e952190-a867-4e80-a571-c82f9ac73a21-whisker-ca-bundle\") pod \"0e952190-a867-4e80-a571-c82f9ac73a21\" (UID: \"0e952190-a867-4e80-a571-c82f9ac73a21\") " Sep 12 17:42:32.301116 systemd[1]: var-lib-kubelet-pods-0e952190\x2da867\x2d4e80\x2da571\x2dc82f9ac73a21-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:42:32.311636 systemd[1]: var-lib-kubelet-pods-0e952190\x2da867\x2d4e80\x2da571\x2dc82f9ac73a21-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d77795.mount: Deactivated successfully. 
Sep 12 17:42:32.315792 kubelet[2895]: I0912 17:42:32.314768 2895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e952190-a867-4e80-a571-c82f9ac73a21-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0e952190-a867-4e80-a571-c82f9ac73a21" (UID: "0e952190-a867-4e80-a571-c82f9ac73a21"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:42:32.315886 kubelet[2895]: I0912 17:42:32.315873 2895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e952190-a867-4e80-a571-c82f9ac73a21-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0e952190-a867-4e80-a571-c82f9ac73a21" (UID: "0e952190-a867-4e80-a571-c82f9ac73a21"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:42:32.316492 kubelet[2895]: I0912 17:42:32.314571 2895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e952190-a867-4e80-a571-c82f9ac73a21-kube-api-access-77795" (OuterVolumeSpecName: "kube-api-access-77795") pod "0e952190-a867-4e80-a571-c82f9ac73a21" (UID: "0e952190-a867-4e80-a571-c82f9ac73a21"). InnerVolumeSpecName "kube-api-access-77795". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:42:32.388814 kubelet[2895]: I0912 17:42:32.388302 2895 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0e952190-a867-4e80-a571-c82f9ac73a21-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 17:42:32.388814 kubelet[2895]: I0912 17:42:32.388334 2895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77795\" (UniqueName: \"kubernetes.io/projected/0e952190-a867-4e80-a571-c82f9ac73a21-kube-api-access-77795\") on node \"localhost\" DevicePath \"\"" Sep 12 17:42:32.388814 kubelet[2895]: I0912 17:42:32.388345 2895 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e952190-a867-4e80-a571-c82f9ac73a21-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 17:42:32.497747 kubelet[2895]: I0912 17:42:32.497692 2895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:42:32.577708 kubelet[2895]: I0912 17:42:32.569439 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-28p47" podStartSLOduration=3.26501474 podStartE2EDuration="19.553391603s" podCreationTimestamp="2025-09-12 17:42:13 +0000 UTC" firstStartedPulling="2025-09-12 17:42:13.930635983 +0000 UTC m=+15.881315013" lastFinishedPulling="2025-09-12 17:42:30.219012844 +0000 UTC m=+32.169691876" observedRunningTime="2025-09-12 17:42:31.714679456 +0000 UTC m=+33.665358489" watchObservedRunningTime="2025-09-12 17:42:32.553391603 +0000 UTC m=+34.504070637" Sep 12 17:42:32.709368 kubelet[2895]: I0912 17:42:32.709323 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0eeaaa7e-12c4-4a20-914f-4fc7ed01b8bc-whisker-backend-key-pair\") pod \"whisker-55c54866b8-bdbz9\" (UID: 
\"0eeaaa7e-12c4-4a20-914f-4fc7ed01b8bc\") " pod="calico-system/whisker-55c54866b8-bdbz9" Sep 12 17:42:32.709368 kubelet[2895]: I0912 17:42:32.709379 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0eeaaa7e-12c4-4a20-914f-4fc7ed01b8bc-whisker-ca-bundle\") pod \"whisker-55c54866b8-bdbz9\" (UID: \"0eeaaa7e-12c4-4a20-914f-4fc7ed01b8bc\") " pod="calico-system/whisker-55c54866b8-bdbz9" Sep 12 17:42:32.709532 kubelet[2895]: I0912 17:42:32.709406 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dfzd\" (UniqueName: \"kubernetes.io/projected/0eeaaa7e-12c4-4a20-914f-4fc7ed01b8bc-kube-api-access-7dfzd\") pod \"whisker-55c54866b8-bdbz9\" (UID: \"0eeaaa7e-12c4-4a20-914f-4fc7ed01b8bc\") " pod="calico-system/whisker-55c54866b8-bdbz9" Sep 12 17:42:32.931505 containerd[1641]: time="2025-09-12T17:42:32.931426897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55c54866b8-bdbz9,Uid:0eeaaa7e-12c4-4a20-914f-4fc7ed01b8bc,Namespace:calico-system,Attempt:0,}" Sep 12 17:42:33.065803 systemd-networkd[1285]: calidb7dd49ba12: Link UP Sep 12 17:42:33.065925 systemd-networkd[1285]: calidb7dd49ba12: Gained carrier Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:32.960 [INFO][4222] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:32.975 [INFO][4222] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--55c54866b8--bdbz9-eth0 whisker-55c54866b8- calico-system 0eeaaa7e-12c4-4a20-914f-4fc7ed01b8bc 899 0 2025-09-12 17:42:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55c54866b8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} 
{k8s localhost whisker-55c54866b8-bdbz9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidb7dd49ba12 [] [] }} ContainerID="d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" Namespace="calico-system" Pod="whisker-55c54866b8-bdbz9" WorkloadEndpoint="localhost-k8s-whisker--55c54866b8--bdbz9-" Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:32.975 [INFO][4222] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" Namespace="calico-system" Pod="whisker-55c54866b8-bdbz9" WorkloadEndpoint="localhost-k8s-whisker--55c54866b8--bdbz9-eth0" Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.004 [INFO][4233] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" HandleID="k8s-pod-network.d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" Workload="localhost-k8s-whisker--55c54866b8--bdbz9-eth0" Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.004 [INFO][4233] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" HandleID="k8s-pod-network.d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" Workload="localhost-k8s-whisker--55c54866b8--bdbz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-55c54866b8-bdbz9", "timestamp":"2025-09-12 17:42:33.00417377 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.004 [INFO][4233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.004 [INFO][4233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.004 [INFO][4233] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.013 [INFO][4233] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" host="localhost" Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.032 [INFO][4233] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.035 [INFO][4233] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.036 [INFO][4233] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.038 [INFO][4233] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.038 [INFO][4233] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" host="localhost" Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.040 [INFO][4233] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390 Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.044 [INFO][4233] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" host="localhost" Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.050 [INFO][4233] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" host="localhost" Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.050 [INFO][4233] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" host="localhost" Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.050 [INFO][4233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:33.078866 containerd[1641]: 2025-09-12 17:42:33.050 [INFO][4233] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" HandleID="k8s-pod-network.d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" Workload="localhost-k8s-whisker--55c54866b8--bdbz9-eth0" Sep 12 17:42:33.081570 containerd[1641]: 2025-09-12 17:42:33.052 [INFO][4222] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" Namespace="calico-system" Pod="whisker-55c54866b8-bdbz9" WorkloadEndpoint="localhost-k8s-whisker--55c54866b8--bdbz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55c54866b8--bdbz9-eth0", GenerateName:"whisker-55c54866b8-", Namespace:"calico-system", SelfLink:"", UID:"0eeaaa7e-12c4-4a20-914f-4fc7ed01b8bc", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55c54866b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-55c54866b8-bdbz9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb7dd49ba12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:33.081570 containerd[1641]: 2025-09-12 17:42:33.053 [INFO][4222] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" Namespace="calico-system" Pod="whisker-55c54866b8-bdbz9" WorkloadEndpoint="localhost-k8s-whisker--55c54866b8--bdbz9-eth0" Sep 12 17:42:33.081570 containerd[1641]: 2025-09-12 17:42:33.053 [INFO][4222] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb7dd49ba12 ContainerID="d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" Namespace="calico-system" Pod="whisker-55c54866b8-bdbz9" WorkloadEndpoint="localhost-k8s-whisker--55c54866b8--bdbz9-eth0" Sep 12 17:42:33.081570 containerd[1641]: 2025-09-12 17:42:33.063 [INFO][4222] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" Namespace="calico-system" Pod="whisker-55c54866b8-bdbz9" WorkloadEndpoint="localhost-k8s-whisker--55c54866b8--bdbz9-eth0" Sep 12 17:42:33.081570 containerd[1641]: 2025-09-12 17:42:33.066 [INFO][4222] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" 
Namespace="calico-system" Pod="whisker-55c54866b8-bdbz9" WorkloadEndpoint="localhost-k8s-whisker--55c54866b8--bdbz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55c54866b8--bdbz9-eth0", GenerateName:"whisker-55c54866b8-", Namespace:"calico-system", SelfLink:"", UID:"0eeaaa7e-12c4-4a20-914f-4fc7ed01b8bc", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55c54866b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390", Pod:"whisker-55c54866b8-bdbz9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb7dd49ba12", MAC:"e2:56:90:1b:63:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:33.081570 containerd[1641]: 2025-09-12 17:42:33.076 [INFO][4222] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390" Namespace="calico-system" Pod="whisker-55c54866b8-bdbz9" WorkloadEndpoint="localhost-k8s-whisker--55c54866b8--bdbz9-eth0" Sep 12 17:42:33.110130 containerd[1641]: 
time="2025-09-12T17:42:33.107768386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:33.110130 containerd[1641]: time="2025-09-12T17:42:33.108446201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:33.110130 containerd[1641]: time="2025-09-12T17:42:33.108458246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:33.113237 containerd[1641]: time="2025-09-12T17:42:33.113206309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:33.129806 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:42:33.155931 containerd[1641]: time="2025-09-12T17:42:33.155904979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55c54866b8-bdbz9,Uid:0eeaaa7e-12c4-4a20-914f-4fc7ed01b8bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390\"" Sep 12 17:42:33.174782 containerd[1641]: time="2025-09-12T17:42:33.174765008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:42:34.188135 kubelet[2895]: I0912 17:42:34.188073 2895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e952190-a867-4e80-a571-c82f9ac73a21" path="/var/lib/kubelet/pods/0e952190-a867-4e80-a571-c82f9ac73a21/volumes" Sep 12 17:42:34.322343 kubelet[2895]: I0912 17:42:34.322316 2895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:42:34.722024 containerd[1641]: time="2025-09-12T17:42:34.721995231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 12 17:42:34.723368 containerd[1641]: time="2025-09-12T17:42:34.723289929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:42:34.724700 containerd[1641]: time="2025-09-12T17:42:34.724475021Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:34.725221 containerd[1641]: time="2025-09-12T17:42:34.725199565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:34.726446 containerd[1641]: time="2025-09-12T17:42:34.726415783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.551502199s" Sep 12 17:42:34.726446 containerd[1641]: time="2025-09-12T17:42:34.726442992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:42:34.729723 containerd[1641]: time="2025-09-12T17:42:34.729695906Z" level=info msg="CreateContainer within sandbox \"d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:42:34.741438 containerd[1641]: time="2025-09-12T17:42:34.741406921Z" level=info msg="CreateContainer within sandbox \"d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"d146dce9af5728c51aed0ac16d368ebb68cf34459da64b394575df5302ba14ae\"" Sep 12 17:42:34.742551 containerd[1641]: time="2025-09-12T17:42:34.742440019Z" level=info msg="StartContainer for \"d146dce9af5728c51aed0ac16d368ebb68cf34459da64b394575df5302ba14ae\"" Sep 12 17:42:34.802888 containerd[1641]: time="2025-09-12T17:42:34.802169051Z" level=info msg="StartContainer for \"d146dce9af5728c51aed0ac16d368ebb68cf34459da64b394575df5302ba14ae\" returns successfully" Sep 12 17:42:34.802888 containerd[1641]: time="2025-09-12T17:42:34.802817682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:42:35.040348 systemd-networkd[1285]: calidb7dd49ba12: Gained IPv6LL Sep 12 17:42:36.929090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249716537.mount: Deactivated successfully. Sep 12 17:42:37.180877 containerd[1641]: time="2025-09-12T17:42:37.180265088Z" level=info msg="StopPodSandbox for \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\"" Sep 12 17:42:37.182397 containerd[1641]: time="2025-09-12T17:42:37.182361202Z" level=info msg="StopPodSandbox for \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\"" Sep 12 17:42:37.241726 containerd[1641]: time="2025-09-12T17:42:37.241683896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:42:37.241823 containerd[1641]: time="2025-09-12T17:42:37.241762610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:37.249851 containerd[1641]: time="2025-09-12T17:42:37.243003949Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:37.250410 containerd[1641]: time="2025-09-12T17:42:37.250371790Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:37.251181 containerd[1641]: time="2025-09-12T17:42:37.251159058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.44832406s" Sep 12 17:42:37.251181 containerd[1641]: time="2025-09-12T17:42:37.251181171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:42:37.254547 containerd[1641]: time="2025-09-12T17:42:37.254347983Z" level=info msg="CreateContainer within sandbox \"d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:42:37.267068 containerd[1641]: time="2025-09-12T17:42:37.266988000Z" level=info msg="CreateContainer within sandbox \"d6ce17e20b0b09f57ec0a842550ae19abcdd19ade8a6bbbfb5dc7b939f2e2390\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4271a1d9dd4247e2bbb428d6a86233e53ed8753fa5d50f683293c4f3f194ce1a\"" Sep 12 17:42:37.267697 containerd[1641]: time="2025-09-12T17:42:37.267499986Z" level=info msg="StartContainer for \"4271a1d9dd4247e2bbb428d6a86233e53ed8753fa5d50f683293c4f3f194ce1a\"" Sep 12 17:42:37.271147 containerd[1641]: 2025-09-12 17:42:37.224 [INFO][4480] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Sep 12 17:42:37.271147 containerd[1641]: 2025-09-12 17:42:37.224 [INFO][4480] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" iface="eth0" netns="/var/run/netns/cni-a96d288e-e3ed-d37b-2eee-931fc588b5fc" Sep 12 17:42:37.271147 containerd[1641]: 2025-09-12 17:42:37.224 [INFO][4480] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" iface="eth0" netns="/var/run/netns/cni-a96d288e-e3ed-d37b-2eee-931fc588b5fc" Sep 12 17:42:37.271147 containerd[1641]: 2025-09-12 17:42:37.225 [INFO][4480] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" iface="eth0" netns="/var/run/netns/cni-a96d288e-e3ed-d37b-2eee-931fc588b5fc" Sep 12 17:42:37.271147 containerd[1641]: 2025-09-12 17:42:37.225 [INFO][4480] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Sep 12 17:42:37.271147 containerd[1641]: 2025-09-12 17:42:37.225 [INFO][4480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Sep 12 17:42:37.271147 containerd[1641]: 2025-09-12 17:42:37.246 [INFO][4497] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" HandleID="k8s-pod-network.341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Workload="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:42:37.271147 containerd[1641]: 2025-09-12 17:42:37.246 [INFO][4497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:37.271147 containerd[1641]: 2025-09-12 17:42:37.247 [INFO][4497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:42:37.271147 containerd[1641]: 2025-09-12 17:42:37.254 [WARNING][4497] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" HandleID="k8s-pod-network.341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Workload="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:42:37.271147 containerd[1641]: 2025-09-12 17:42:37.254 [INFO][4497] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" HandleID="k8s-pod-network.341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Workload="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:42:37.271147 containerd[1641]: 2025-09-12 17:42:37.259 [INFO][4497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:37.271147 containerd[1641]: 2025-09-12 17:42:37.263 [INFO][4480] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Sep 12 17:42:37.274727 containerd[1641]: time="2025-09-12T17:42:37.273367310Z" level=info msg="TearDown network for sandbox \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\" successfully" Sep 12 17:42:37.274727 containerd[1641]: time="2025-09-12T17:42:37.273399961Z" level=info msg="StopPodSandbox for \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\" returns successfully" Sep 12 17:42:37.274837 systemd[1]: run-netns-cni\x2da96d288e\x2de3ed\x2dd37b\x2d2eee\x2d931fc588b5fc.mount: Deactivated successfully. 
Sep 12 17:42:37.276154 containerd[1641]: time="2025-09-12T17:42:37.276117809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7mxql,Uid:21242ab1-1a34-4925-bcf3-2c7f923b75c1,Namespace:kube-system,Attempt:1,}" Sep 12 17:42:37.303781 containerd[1641]: 2025-09-12 17:42:37.250 [INFO][4484] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Sep 12 17:42:37.303781 containerd[1641]: 2025-09-12 17:42:37.250 [INFO][4484] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" iface="eth0" netns="/var/run/netns/cni-99893b72-a981-bc51-09e6-0e84d365e856" Sep 12 17:42:37.303781 containerd[1641]: 2025-09-12 17:42:37.252 [INFO][4484] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" iface="eth0" netns="/var/run/netns/cni-99893b72-a981-bc51-09e6-0e84d365e856" Sep 12 17:42:37.303781 containerd[1641]: 2025-09-12 17:42:37.253 [INFO][4484] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" iface="eth0" netns="/var/run/netns/cni-99893b72-a981-bc51-09e6-0e84d365e856" Sep 12 17:42:37.303781 containerd[1641]: 2025-09-12 17:42:37.253 [INFO][4484] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Sep 12 17:42:37.303781 containerd[1641]: 2025-09-12 17:42:37.253 [INFO][4484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Sep 12 17:42:37.303781 containerd[1641]: 2025-09-12 17:42:37.292 [INFO][4505] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" HandleID="k8s-pod-network.9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Workload="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:42:37.303781 containerd[1641]: 2025-09-12 17:42:37.293 [INFO][4505] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:37.303781 containerd[1641]: 2025-09-12 17:42:37.293 [INFO][4505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:37.303781 containerd[1641]: 2025-09-12 17:42:37.299 [WARNING][4505] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" HandleID="k8s-pod-network.9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Workload="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:42:37.303781 containerd[1641]: 2025-09-12 17:42:37.299 [INFO][4505] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" HandleID="k8s-pod-network.9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Workload="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:42:37.303781 containerd[1641]: 2025-09-12 17:42:37.300 [INFO][4505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:37.303781 containerd[1641]: 2025-09-12 17:42:37.302 [INFO][4484] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Sep 12 17:42:37.305445 containerd[1641]: time="2025-09-12T17:42:37.303840350Z" level=info msg="TearDown network for sandbox \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\" successfully" Sep 12 17:42:37.305445 containerd[1641]: time="2025-09-12T17:42:37.303857698Z" level=info msg="StopPodSandbox for \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\" returns successfully" Sep 12 17:42:37.305445 containerd[1641]: time="2025-09-12T17:42:37.305158725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dfb6749f-dhd9m,Uid:5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:42:37.381191 containerd[1641]: time="2025-09-12T17:42:37.381166191Z" level=info msg="StartContainer for \"4271a1d9dd4247e2bbb428d6a86233e53ed8753fa5d50f683293c4f3f194ce1a\" returns successfully" Sep 12 17:42:37.396417 systemd-networkd[1285]: cali61c8a5c4283: Link UP Sep 12 17:42:37.396835 systemd-networkd[1285]: cali61c8a5c4283: Gained carrier Sep 12 
17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.316 [INFO][4522] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.325 [INFO][4522] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0 coredns-7c65d6cfc9- kube-system 21242ab1-1a34-4925-bcf3-2c7f923b75c1 922 0 2025-09-12 17:42:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-7mxql eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali61c8a5c4283 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7mxql" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7mxql-" Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.325 [INFO][4522] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7mxql" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.360 [INFO][4560] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" HandleID="k8s-pod-network.3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" Workload="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.360 [INFO][4560] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" 
HandleID="k8s-pod-network.3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" Workload="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-7mxql", "timestamp":"2025-09-12 17:42:37.360323979 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.360 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.360 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.360 [INFO][4560] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.365 [INFO][4560] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" host="localhost" Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.370 [INFO][4560] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.373 [INFO][4560] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.375 [INFO][4560] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.376 [INFO][4560] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.376 
[INFO][4560] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" host="localhost" Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.378 [INFO][4560] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2 Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.381 [INFO][4560] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" host="localhost" Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.387 [INFO][4560] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" host="localhost" Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.387 [INFO][4560] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" host="localhost" Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.387 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:42:37.409835 containerd[1641]: 2025-09-12 17:42:37.387 [INFO][4560] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" HandleID="k8s-pod-network.3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" Workload="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:42:37.412322 containerd[1641]: 2025-09-12 17:42:37.392 [INFO][4522] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7mxql" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"21242ab1-1a34-4925-bcf3-2c7f923b75c1", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-7mxql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali61c8a5c4283", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:37.412322 containerd[1641]: 2025-09-12 17:42:37.392 [INFO][4522] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7mxql" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:42:37.412322 containerd[1641]: 2025-09-12 17:42:37.393 [INFO][4522] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61c8a5c4283 ContainerID="3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7mxql" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:42:37.412322 containerd[1641]: 2025-09-12 17:42:37.395 [INFO][4522] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7mxql" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:42:37.412322 containerd[1641]: 2025-09-12 17:42:37.395 [INFO][4522] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7mxql" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"21242ab1-1a34-4925-bcf3-2c7f923b75c1", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2", Pod:"coredns-7c65d6cfc9-7mxql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali61c8a5c4283", MAC:"96:b3:af:a4:a7:89", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:37.412322 containerd[1641]: 2025-09-12 17:42:37.404 [INFO][4522] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7mxql" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:42:37.437614 containerd[1641]: time="2025-09-12T17:42:37.437428562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:37.437614 containerd[1641]: time="2025-09-12T17:42:37.437471622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:37.437919 containerd[1641]: time="2025-09-12T17:42:37.437484580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:37.437919 containerd[1641]: time="2025-09-12T17:42:37.437540572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:37.456796 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:42:37.484464 containerd[1641]: time="2025-09-12T17:42:37.484438936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7mxql,Uid:21242ab1-1a34-4925-bcf3-2c7f923b75c1,Namespace:kube-system,Attempt:1,} returns sandbox id \"3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2\"" Sep 12 17:42:37.496053 containerd[1641]: time="2025-09-12T17:42:37.495554690Z" level=info msg="CreateContainer within sandbox \"3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:42:37.506383 systemd[1]: run-netns-cni\x2d99893b72\x2da981\x2dbc51\x2d09e6\x2d0e84d365e856.mount: Deactivated successfully. 
Sep 12 17:42:37.515671 systemd-networkd[1285]: caliac77cadb0ea: Link UP Sep 12 17:42:37.515785 systemd-networkd[1285]: caliac77cadb0ea: Gained carrier Sep 12 17:42:37.530890 kubelet[2895]: I0912 17:42:37.527973 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-55c54866b8-bdbz9" podStartSLOduration=1.449407 podStartE2EDuration="5.527712392s" podCreationTimestamp="2025-09-12 17:42:32 +0000 UTC" firstStartedPulling="2025-09-12 17:42:33.173838291 +0000 UTC m=+35.124517320" lastFinishedPulling="2025-09-12 17:42:37.252143681 +0000 UTC m=+39.202822712" observedRunningTime="2025-09-12 17:42:37.520338162 +0000 UTC m=+39.471017202" watchObservedRunningTime="2025-09-12 17:42:37.527712392 +0000 UTC m=+39.478391431" Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.349 [INFO][4547] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.367 [INFO][4547] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0 calico-apiserver-6dfb6749f- calico-apiserver 5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe 923 0 2025-09-12 17:42:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dfb6749f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6dfb6749f-dhd9m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliac77cadb0ea [] [] }} ContainerID="8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-dhd9m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-" Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.368 [INFO][4547] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-dhd9m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.410 [INFO][4577] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" HandleID="k8s-pod-network.8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" Workload="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.410 [INFO][4577] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" HandleID="k8s-pod-network.8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" Workload="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6dfb6749f-dhd9m", "timestamp":"2025-09-12 17:42:37.4088625 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.410 [INFO][4577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.411 [INFO][4577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.411 [INFO][4577] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.465 [INFO][4577] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" host="localhost" Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.481 [INFO][4577] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.486 [INFO][4577] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.489 [INFO][4577] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.490 [INFO][4577] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.490 [INFO][4577] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" host="localhost" Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.491 [INFO][4577] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.492 [INFO][4577] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" host="localhost" Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.497 [INFO][4577] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" host="localhost" Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.498 [INFO][4577] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" host="localhost" Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.498 [INFO][4577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:37.541670 containerd[1641]: 2025-09-12 17:42:37.498 [INFO][4577] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" HandleID="k8s-pod-network.8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" Workload="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:42:37.543236 containerd[1641]: 2025-09-12 17:42:37.511 [INFO][4547] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-dhd9m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0", GenerateName:"calico-apiserver-6dfb6749f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dfb6749f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6dfb6749f-dhd9m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac77cadb0ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:37.543236 containerd[1641]: 2025-09-12 17:42:37.512 [INFO][4547] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-dhd9m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:42:37.543236 containerd[1641]: 2025-09-12 17:42:37.512 [INFO][4547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac77cadb0ea ContainerID="8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-dhd9m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:42:37.543236 containerd[1641]: 2025-09-12 17:42:37.516 [INFO][4547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-dhd9m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:42:37.543236 containerd[1641]: 2025-09-12 17:42:37.517 [INFO][4547] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-dhd9m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0", GenerateName:"calico-apiserver-6dfb6749f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dfb6749f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f", Pod:"calico-apiserver-6dfb6749f-dhd9m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac77cadb0ea", MAC:"7e:bd:78:9f:31:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:37.543236 containerd[1641]: 2025-09-12 17:42:37.531 [INFO][4547] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-dhd9m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:42:37.549625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2986595167.mount: Deactivated successfully. Sep 12 17:42:37.552712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1862402042.mount: Deactivated successfully. Sep 12 17:42:37.553736 containerd[1641]: time="2025-09-12T17:42:37.553634873Z" level=info msg="CreateContainer within sandbox \"3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"363c276d0b6fd64a1a136fbf91a7ec8c35841eca6d0f2a6e4ffd476b76319e7c\"" Sep 12 17:42:37.554277 containerd[1641]: time="2025-09-12T17:42:37.554264508Z" level=info msg="StartContainer for \"363c276d0b6fd64a1a136fbf91a7ec8c35841eca6d0f2a6e4ffd476b76319e7c\"" Sep 12 17:42:37.571870 containerd[1641]: time="2025-09-12T17:42:37.571719408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:37.571870 containerd[1641]: time="2025-09-12T17:42:37.571759990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:37.571870 containerd[1641]: time="2025-09-12T17:42:37.571775452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:37.571870 containerd[1641]: time="2025-09-12T17:42:37.571841703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:37.605894 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:42:37.621537 containerd[1641]: time="2025-09-12T17:42:37.621155395Z" level=info msg="StartContainer for \"363c276d0b6fd64a1a136fbf91a7ec8c35841eca6d0f2a6e4ffd476b76319e7c\" returns successfully" Sep 12 17:42:37.633974 containerd[1641]: time="2025-09-12T17:42:37.633905184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dfb6749f-dhd9m,Uid:5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f\"" Sep 12 17:42:37.638060 containerd[1641]: time="2025-09-12T17:42:37.637919976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:42:38.182217 containerd[1641]: time="2025-09-12T17:42:38.182117931Z" level=info msg="StopPodSandbox for \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\"" Sep 12 17:42:38.238594 containerd[1641]: 2025-09-12 17:42:38.216 [INFO][4748] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Sep 12 17:42:38.238594 containerd[1641]: 2025-09-12 17:42:38.216 [INFO][4748] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" iface="eth0" netns="/var/run/netns/cni-c79746f3-cb71-c005-bcab-a82e233c6d58" Sep 12 17:42:38.238594 containerd[1641]: 2025-09-12 17:42:38.217 [INFO][4748] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" iface="eth0" netns="/var/run/netns/cni-c79746f3-cb71-c005-bcab-a82e233c6d58" Sep 12 17:42:38.238594 containerd[1641]: 2025-09-12 17:42:38.217 [INFO][4748] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" iface="eth0" netns="/var/run/netns/cni-c79746f3-cb71-c005-bcab-a82e233c6d58" Sep 12 17:42:38.238594 containerd[1641]: 2025-09-12 17:42:38.217 [INFO][4748] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Sep 12 17:42:38.238594 containerd[1641]: 2025-09-12 17:42:38.217 [INFO][4748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Sep 12 17:42:38.238594 containerd[1641]: 2025-09-12 17:42:38.232 [INFO][4755] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" HandleID="k8s-pod-network.a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Workload="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:42:38.238594 containerd[1641]: 2025-09-12 17:42:38.232 [INFO][4755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:38.238594 containerd[1641]: 2025-09-12 17:42:38.232 [INFO][4755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:38.238594 containerd[1641]: 2025-09-12 17:42:38.235 [WARNING][4755] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" HandleID="k8s-pod-network.a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Workload="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:42:38.238594 containerd[1641]: 2025-09-12 17:42:38.235 [INFO][4755] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" HandleID="k8s-pod-network.a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Workload="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:42:38.238594 containerd[1641]: 2025-09-12 17:42:38.236 [INFO][4755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:38.238594 containerd[1641]: 2025-09-12 17:42:38.237 [INFO][4748] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Sep 12 17:42:38.239257 containerd[1641]: time="2025-09-12T17:42:38.238942022Z" level=info msg="TearDown network for sandbox \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\" successfully" Sep 12 17:42:38.239257 containerd[1641]: time="2025-09-12T17:42:38.238960343Z" level=info msg="StopPodSandbox for \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\" returns successfully" Sep 12 17:42:38.239372 containerd[1641]: time="2025-09-12T17:42:38.239350825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dfb6749f-f67rq,Uid:5b939991-56b9-412e-bee1-03c9df66c4a5,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:42:38.339570 systemd-networkd[1285]: cali0d71fd4b0bf: Link UP Sep 12 17:42:38.340456 systemd-networkd[1285]: cali0d71fd4b0bf: Gained carrier Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.284 [INFO][4765] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.294 [INFO][4765] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0 calico-apiserver-6dfb6749f- calico-apiserver 5b939991-56b9-412e-bee1-03c9df66c4a5 944 0 2025-09-12 17:42:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dfb6749f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6dfb6749f-f67rq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0d71fd4b0bf [] [] }} ContainerID="38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-f67rq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-" Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.294 [INFO][4765] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-f67rq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.314 [INFO][4775] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" HandleID="k8s-pod-network.38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" Workload="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.314 [INFO][4775] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" HandleID="k8s-pod-network.38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" 
Workload="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6dfb6749f-f67rq", "timestamp":"2025-09-12 17:42:38.314359752 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.314 [INFO][4775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.314 [INFO][4775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.314 [INFO][4775] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.318 [INFO][4775] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" host="localhost" Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.321 [INFO][4775] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.324 [INFO][4775] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.325 [INFO][4775] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.326 [INFO][4775] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.326 [INFO][4775] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" host="localhost" Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.327 [INFO][4775] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055 Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.332 [INFO][4775] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" host="localhost" Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.336 [INFO][4775] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" host="localhost" Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.336 [INFO][4775] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" host="localhost" Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.336 [INFO][4775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:42:38.359102 containerd[1641]: 2025-09-12 17:42:38.336 [INFO][4775] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" HandleID="k8s-pod-network.38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" Workload="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:42:38.361269 containerd[1641]: 2025-09-12 17:42:38.337 [INFO][4765] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-f67rq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0", GenerateName:"calico-apiserver-6dfb6749f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5b939991-56b9-412e-bee1-03c9df66c4a5", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dfb6749f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6dfb6749f-f67rq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d71fd4b0bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:38.361269 containerd[1641]: 2025-09-12 17:42:38.337 [INFO][4765] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-f67rq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:42:38.361269 containerd[1641]: 2025-09-12 17:42:38.337 [INFO][4765] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d71fd4b0bf ContainerID="38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-f67rq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:42:38.361269 containerd[1641]: 2025-09-12 17:42:38.340 [INFO][4765] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-f67rq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:42:38.361269 containerd[1641]: 2025-09-12 17:42:38.341 [INFO][4765] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-f67rq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0", GenerateName:"calico-apiserver-6dfb6749f-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"5b939991-56b9-412e-bee1-03c9df66c4a5", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dfb6749f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055", Pod:"calico-apiserver-6dfb6749f-f67rq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d71fd4b0bf", MAC:"c6:66:04:b5:e0:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:38.361269 containerd[1641]: 2025-09-12 17:42:38.355 [INFO][4765] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055" Namespace="calico-apiserver" Pod="calico-apiserver-6dfb6749f-f67rq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:42:38.370994 containerd[1641]: time="2025-09-12T17:42:38.370929635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:38.371530 containerd[1641]: time="2025-09-12T17:42:38.371352913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:38.371530 containerd[1641]: time="2025-09-12T17:42:38.371461036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:38.371620 containerd[1641]: time="2025-09-12T17:42:38.371521151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:38.391658 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:42:38.417433 containerd[1641]: time="2025-09-12T17:42:38.417386093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dfb6749f-f67rq,Uid:5b939991-56b9-412e-bee1-03c9df66c4a5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055\"" Sep 12 17:42:38.500181 systemd[1]: run-netns-cni\x2dc79746f3\x2dcb71\x2dc005\x2dbcab\x2da82e233c6d58.mount: Deactivated successfully. 
Sep 12 17:42:38.525861 kubelet[2895]: I0912 17:42:38.525636 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7mxql" podStartSLOduration=35.525623184 podStartE2EDuration="35.525623184s" podCreationTimestamp="2025-09-12 17:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:38.525482726 +0000 UTC m=+40.476161765" watchObservedRunningTime="2025-09-12 17:42:38.525623184 +0000 UTC m=+40.476302219" Sep 12 17:42:38.865789 kubelet[2895]: I0912 17:42:38.865075 2895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:42:39.003754 systemd-networkd[1285]: cali61c8a5c4283: Gained IPv6LL Sep 12 17:42:39.179879 containerd[1641]: time="2025-09-12T17:42:39.179753280Z" level=info msg="StopPodSandbox for \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\"" Sep 12 17:42:39.230594 containerd[1641]: 2025-09-12 17:42:39.208 [INFO][4862] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Sep 12 17:42:39.230594 containerd[1641]: 2025-09-12 17:42:39.209 [INFO][4862] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" iface="eth0" netns="/var/run/netns/cni-674823cc-259b-e45e-5347-4227c003ba09" Sep 12 17:42:39.230594 containerd[1641]: 2025-09-12 17:42:39.209 [INFO][4862] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" iface="eth0" netns="/var/run/netns/cni-674823cc-259b-e45e-5347-4227c003ba09" Sep 12 17:42:39.230594 containerd[1641]: 2025-09-12 17:42:39.209 [INFO][4862] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" iface="eth0" netns="/var/run/netns/cni-674823cc-259b-e45e-5347-4227c003ba09" Sep 12 17:42:39.230594 containerd[1641]: 2025-09-12 17:42:39.209 [INFO][4862] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Sep 12 17:42:39.230594 containerd[1641]: 2025-09-12 17:42:39.209 [INFO][4862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Sep 12 17:42:39.230594 containerd[1641]: 2025-09-12 17:42:39.222 [INFO][4869] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" HandleID="k8s-pod-network.892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Workload="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:42:39.230594 containerd[1641]: 2025-09-12 17:42:39.222 [INFO][4869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:39.230594 containerd[1641]: 2025-09-12 17:42:39.223 [INFO][4869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:39.230594 containerd[1641]: 2025-09-12 17:42:39.227 [WARNING][4869] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" HandleID="k8s-pod-network.892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Workload="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:42:39.230594 containerd[1641]: 2025-09-12 17:42:39.227 [INFO][4869] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" HandleID="k8s-pod-network.892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Workload="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:42:39.230594 containerd[1641]: 2025-09-12 17:42:39.228 [INFO][4869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:39.230594 containerd[1641]: 2025-09-12 17:42:39.229 [INFO][4862] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Sep 12 17:42:39.234104 containerd[1641]: time="2025-09-12T17:42:39.233237303Z" level=info msg="TearDown network for sandbox \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\" successfully" Sep 12 17:42:39.234104 containerd[1641]: time="2025-09-12T17:42:39.233262459Z" level=info msg="StopPodSandbox for \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\" returns successfully" Sep 12 17:42:39.232962 systemd[1]: run-netns-cni\x2d674823cc\x2d259b\x2de45e\x2d5347\x2d4227c003ba09.mount: Deactivated successfully. 
Sep 12 17:42:39.234732 containerd[1641]: time="2025-09-12T17:42:39.234386597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2clqw,Uid:66750a83-1e25-4052-acac-1ee5648a6796,Namespace:kube-system,Attempt:1,}" Sep 12 17:42:39.323027 systemd-networkd[1285]: cali7ff2e37bc47: Link UP Sep 12 17:42:39.323146 systemd-networkd[1285]: cali7ff2e37bc47: Gained carrier Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.260 [INFO][4875] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.266 [INFO][4875] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0 coredns-7c65d6cfc9- kube-system 66750a83-1e25-4052-acac-1ee5648a6796 970 0 2025-09-12 17:42:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-2clqw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7ff2e37bc47 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2clqw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2clqw-" Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.266 [INFO][4875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2clqw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.288 [INFO][4887] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" 
HandleID="k8s-pod-network.3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" Workload="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.288 [INFO][4887] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" HandleID="k8s-pod-network.3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" Workload="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-2clqw", "timestamp":"2025-09-12 17:42:39.288724317 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.288 [INFO][4887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.288 [INFO][4887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.288 [INFO][4887] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.293 [INFO][4887] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" host="localhost" Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.295 [INFO][4887] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.297 [INFO][4887] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.298 [INFO][4887] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.305 [INFO][4887] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.305 [INFO][4887] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" host="localhost" Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.306 [INFO][4887] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8 Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.312 [INFO][4887] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" host="localhost" Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.316 [INFO][4887] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" host="localhost" Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.316 [INFO][4887] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" host="localhost" Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.316 [INFO][4887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:39.338808 containerd[1641]: 2025-09-12 17:42:39.316 [INFO][4887] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" HandleID="k8s-pod-network.3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" Workload="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:42:39.339206 containerd[1641]: 2025-09-12 17:42:39.319 [INFO][4875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2clqw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"66750a83-1e25-4052-acac-1ee5648a6796", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-2clqw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ff2e37bc47", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:39.339206 containerd[1641]: 2025-09-12 17:42:39.320 [INFO][4875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2clqw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:42:39.339206 containerd[1641]: 2025-09-12 17:42:39.320 [INFO][4875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ff2e37bc47 ContainerID="3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2clqw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:42:39.339206 containerd[1641]: 2025-09-12 17:42:39.323 [INFO][4875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2clqw" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:42:39.339206 containerd[1641]: 2025-09-12 17:42:39.324 [INFO][4875] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2clqw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"66750a83-1e25-4052-acac-1ee5648a6796", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8", Pod:"coredns-7c65d6cfc9-2clqw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ff2e37bc47", MAC:"ee:77:dd:c1:d6:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:39.339206 containerd[1641]: 2025-09-12 17:42:39.335 [INFO][4875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2clqw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:42:39.382328 containerd[1641]: time="2025-09-12T17:42:39.382255603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:39.382328 containerd[1641]: time="2025-09-12T17:42:39.382294564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:39.382328 containerd[1641]: time="2025-09-12T17:42:39.382308448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:39.382502 containerd[1641]: time="2025-09-12T17:42:39.382379095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:39.388945 systemd-networkd[1285]: caliac77cadb0ea: Gained IPv6LL Sep 12 17:42:39.422709 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:42:39.448678 containerd[1641]: time="2025-09-12T17:42:39.447660254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2clqw,Uid:66750a83-1e25-4052-acac-1ee5648a6796,Namespace:kube-system,Attempt:1,} returns sandbox id \"3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8\"" Sep 12 17:42:39.458681 containerd[1641]: time="2025-09-12T17:42:39.458585026Z" level=info msg="CreateContainer within sandbox \"3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:42:39.707743 systemd-networkd[1285]: cali0d71fd4b0bf: Gained IPv6LL Sep 12 17:42:39.762898 containerd[1641]: time="2025-09-12T17:42:39.762626065Z" level=info msg="CreateContainer within sandbox \"3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac89afebbb91ca45382a32efe6b07802c0ed2299e06b95eb452f2413dcf2c278\"" Sep 12 17:42:39.763620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2121773860.mount: Deactivated successfully. 
Sep 12 17:42:39.764412 containerd[1641]: time="2025-09-12T17:42:39.764397512Z" level=info msg="StartContainer for \"ac89afebbb91ca45382a32efe6b07802c0ed2299e06b95eb452f2413dcf2c278\"" Sep 12 17:42:39.813549 containerd[1641]: time="2025-09-12T17:42:39.813528668Z" level=info msg="StartContainer for \"ac89afebbb91ca45382a32efe6b07802c0ed2299e06b95eb452f2413dcf2c278\" returns successfully" Sep 12 17:42:39.876674 kernel: bpftool[5004]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:42:40.277066 systemd-networkd[1285]: vxlan.calico: Link UP Sep 12 17:42:40.277072 systemd-networkd[1285]: vxlan.calico: Gained carrier Sep 12 17:42:40.411710 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:42:40.428402 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:42:40.411730 systemd-resolved[1540]: Flushed all caches. Sep 12 17:42:40.578700 kubelet[2895]: I0912 17:42:40.578533 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2clqw" podStartSLOduration=37.578516367 podStartE2EDuration="37.578516367s" podCreationTimestamp="2025-09-12 17:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:42:40.554520856 +0000 UTC m=+42.505199895" watchObservedRunningTime="2025-09-12 17:42:40.578516367 +0000 UTC m=+42.529195406" Sep 12 17:42:40.841183 containerd[1641]: time="2025-09-12T17:42:40.841014751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:40.847848 containerd[1641]: time="2025-09-12T17:42:40.847807667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:42:40.860533 containerd[1641]: time="2025-09-12T17:42:40.860481628Z" level=info msg="ImageCreate event 
name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:40.867458 containerd[1641]: time="2025-09-12T17:42:40.867431256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:40.868719 containerd[1641]: time="2025-09-12T17:42:40.868655411Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.229916069s" Sep 12 17:42:40.868719 containerd[1641]: time="2025-09-12T17:42:40.868677570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:42:40.871848 containerd[1641]: time="2025-09-12T17:42:40.869866726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:42:40.885400 containerd[1641]: time="2025-09-12T17:42:40.885375019Z" level=info msg="CreateContainer within sandbox \"8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:42:40.995046 containerd[1641]: time="2025-09-12T17:42:40.995020368Z" level=info msg="CreateContainer within sandbox \"8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6e19806566750cbdc0a357585814258356f4a041039c2fb823140a81abc01d49\"" Sep 12 17:42:40.995911 containerd[1641]: time="2025-09-12T17:42:40.995670569Z" level=info 
msg="StartContainer for \"6e19806566750cbdc0a357585814258356f4a041039c2fb823140a81abc01d49\"" Sep 12 17:42:41.061257 containerd[1641]: time="2025-09-12T17:42:41.061234068Z" level=info msg="StartContainer for \"6e19806566750cbdc0a357585814258356f4a041039c2fb823140a81abc01d49\" returns successfully" Sep 12 17:42:41.115747 systemd-networkd[1285]: cali7ff2e37bc47: Gained IPv6LL Sep 12 17:42:41.180715 containerd[1641]: time="2025-09-12T17:42:41.180488327Z" level=info msg="StopPodSandbox for \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\"" Sep 12 17:42:41.180922 containerd[1641]: time="2025-09-12T17:42:41.180803307Z" level=info msg="StopPodSandbox for \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\"" Sep 12 17:42:41.345398 containerd[1641]: 2025-09-12 17:42:41.295 [INFO][5197] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Sep 12 17:42:41.345398 containerd[1641]: 2025-09-12 17:42:41.298 [INFO][5197] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" iface="eth0" netns="/var/run/netns/cni-3873edad-6950-c54b-ad38-7f897ebcc116" Sep 12 17:42:41.345398 containerd[1641]: 2025-09-12 17:42:41.298 [INFO][5197] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" iface="eth0" netns="/var/run/netns/cni-3873edad-6950-c54b-ad38-7f897ebcc116" Sep 12 17:42:41.345398 containerd[1641]: 2025-09-12 17:42:41.298 [INFO][5197] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" iface="eth0" netns="/var/run/netns/cni-3873edad-6950-c54b-ad38-7f897ebcc116" Sep 12 17:42:41.345398 containerd[1641]: 2025-09-12 17:42:41.298 [INFO][5197] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Sep 12 17:42:41.345398 containerd[1641]: 2025-09-12 17:42:41.298 [INFO][5197] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Sep 12 17:42:41.345398 containerd[1641]: 2025-09-12 17:42:41.327 [INFO][5212] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" HandleID="k8s-pod-network.8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Workload="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:42:41.345398 containerd[1641]: 2025-09-12 17:42:41.327 [INFO][5212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:41.345398 containerd[1641]: 2025-09-12 17:42:41.327 [INFO][5212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:41.345398 containerd[1641]: 2025-09-12 17:42:41.333 [WARNING][5212] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" HandleID="k8s-pod-network.8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Workload="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:42:41.345398 containerd[1641]: 2025-09-12 17:42:41.333 [INFO][5212] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" HandleID="k8s-pod-network.8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Workload="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:42:41.345398 containerd[1641]: 2025-09-12 17:42:41.334 [INFO][5212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:41.345398 containerd[1641]: 2025-09-12 17:42:41.338 [INFO][5197] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Sep 12 17:42:41.359532 containerd[1641]: time="2025-09-12T17:42:41.345552199Z" level=info msg="TearDown network for sandbox \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\" successfully" Sep 12 17:42:41.359532 containerd[1641]: time="2025-09-12T17:42:41.345570638Z" level=info msg="StopPodSandbox for \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\" returns successfully" Sep 12 17:42:41.359532 containerd[1641]: time="2025-09-12T17:42:41.349198318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8d94b68-lfb95,Uid:24866c61-e05d-457b-a1e7-0f1c845f8a0f,Namespace:calico-system,Attempt:1,}" Sep 12 17:42:41.359532 containerd[1641]: 2025-09-12 17:42:41.296 [INFO][5196] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Sep 12 17:42:41.359532 containerd[1641]: 2025-09-12 17:42:41.298 [INFO][5196] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" iface="eth0" netns="/var/run/netns/cni-c570ef4f-55e8-2139-0b9d-111a7350c9c0" Sep 12 17:42:41.359532 containerd[1641]: 2025-09-12 17:42:41.298 [INFO][5196] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" iface="eth0" netns="/var/run/netns/cni-c570ef4f-55e8-2139-0b9d-111a7350c9c0" Sep 12 17:42:41.359532 containerd[1641]: 2025-09-12 17:42:41.298 [INFO][5196] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" iface="eth0" netns="/var/run/netns/cni-c570ef4f-55e8-2139-0b9d-111a7350c9c0" Sep 12 17:42:41.359532 containerd[1641]: 2025-09-12 17:42:41.298 [INFO][5196] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Sep 12 17:42:41.359532 containerd[1641]: 2025-09-12 17:42:41.298 [INFO][5196] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Sep 12 17:42:41.359532 containerd[1641]: 2025-09-12 17:42:41.335 [INFO][5211] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" HandleID="k8s-pod-network.7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Workload="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:42:41.359532 containerd[1641]: 2025-09-12 17:42:41.336 [INFO][5211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:41.359532 containerd[1641]: 2025-09-12 17:42:41.336 [INFO][5211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:41.359532 containerd[1641]: 2025-09-12 17:42:41.341 [WARNING][5211] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" HandleID="k8s-pod-network.7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Workload="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:42:41.359532 containerd[1641]: 2025-09-12 17:42:41.341 [INFO][5211] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" HandleID="k8s-pod-network.7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Workload="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:42:41.359532 containerd[1641]: 2025-09-12 17:42:41.342 [INFO][5211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:41.359532 containerd[1641]: 2025-09-12 17:42:41.345 [INFO][5196] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Sep 12 17:42:41.359532 containerd[1641]: time="2025-09-12T17:42:41.352693998Z" level=info msg="TearDown network for sandbox \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\" successfully" Sep 12 17:42:41.359532 containerd[1641]: time="2025-09-12T17:42:41.352708983Z" level=info msg="StopPodSandbox for \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\" returns successfully" Sep 12 17:42:41.359532 containerd[1641]: time="2025-09-12T17:42:41.353035367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6vqsr,Uid:6c2d1714-1041-45d2-9888-c6e010910454,Namespace:calico-system,Attempt:1,}" Sep 12 17:42:41.348465 systemd[1]: run-netns-cni\x2d3873edad\x2d6950\x2dc54b\x2dad38\x2d7f897ebcc116.mount: Deactivated successfully. Sep 12 17:42:41.353779 systemd[1]: run-netns-cni\x2dc570ef4f\x2d55e8\x2d2139\x2d0b9d\x2d111a7350c9c0.mount: Deactivated successfully. 
Sep 12 17:42:41.439446 containerd[1641]: time="2025-09-12T17:42:41.439068284Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:41.452841 containerd[1641]: time="2025-09-12T17:42:41.452810275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:42:41.455330 containerd[1641]: time="2025-09-12T17:42:41.455308105Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 584.76747ms" Sep 12 17:42:41.455396 containerd[1641]: time="2025-09-12T17:42:41.455333225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:42:41.468262 containerd[1641]: time="2025-09-12T17:42:41.468196750Z" level=info msg="CreateContainer within sandbox \"38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:42:41.490510 containerd[1641]: time="2025-09-12T17:42:41.490472217Z" level=info msg="CreateContainer within sandbox \"38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"31c8d40d7e3a81f9cc06e664cac048bafd54a6bbbe1506b73f9693ef75f55434\"" Sep 12 17:42:41.491161 containerd[1641]: time="2025-09-12T17:42:41.491148529Z" level=info msg="StartContainer for \"31c8d40d7e3a81f9cc06e664cac048bafd54a6bbbe1506b73f9693ef75f55434\"" Sep 12 17:42:41.534386 kubelet[2895]: I0912 17:42:41.534291 2895 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dfb6749f-dhd9m" podStartSLOduration=27.301985359 podStartE2EDuration="30.534275858s" podCreationTimestamp="2025-09-12 17:42:11 +0000 UTC" firstStartedPulling="2025-09-12 17:42:37.636832723 +0000 UTC m=+39.587511754" lastFinishedPulling="2025-09-12 17:42:40.86912322 +0000 UTC m=+42.819802253" observedRunningTime="2025-09-12 17:42:41.531700535 +0000 UTC m=+43.482379574" watchObservedRunningTime="2025-09-12 17:42:41.534275858 +0000 UTC m=+43.484954892" Sep 12 17:42:41.603851 systemd-networkd[1285]: cali8a1f14dd3cd: Link UP Sep 12 17:42:41.604847 systemd-networkd[1285]: cali8a1f14dd3cd: Gained carrier Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.475 [INFO][5226] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0 calico-kube-controllers-c8d94b68- calico-system 24866c61-e05d-457b-a1e7-0f1c845f8a0f 994 0 2025-09-12 17:42:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c8d94b68 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c8d94b68-lfb95 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8a1f14dd3cd [] [] }} ContainerID="66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" Namespace="calico-system" Pod="calico-kube-controllers-c8d94b68-lfb95" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-" Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.475 [INFO][5226] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" Namespace="calico-system" 
Pod="calico-kube-controllers-c8d94b68-lfb95" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.537 [INFO][5251] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" HandleID="k8s-pod-network.66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" Workload="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.538 [INFO][5251] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" HandleID="k8s-pod-network.66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" Workload="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c8d94b68-lfb95", "timestamp":"2025-09-12 17:42:41.537312016 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.539 [INFO][5251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.539 [INFO][5251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.539 [INFO][5251] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.550 [INFO][5251] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" host="localhost" Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.561 [INFO][5251] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.566 [INFO][5251] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.571 [INFO][5251] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.575 [INFO][5251] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.575 [INFO][5251] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" host="localhost" Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.578 [INFO][5251] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427 Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.583 [INFO][5251] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" host="localhost" Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.590 [INFO][5251] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" host="localhost" Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.590 [INFO][5251] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" host="localhost" Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.590 [INFO][5251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:41.624576 containerd[1641]: 2025-09-12 17:42:41.590 [INFO][5251] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" HandleID="k8s-pod-network.66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" Workload="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:42:41.630844 containerd[1641]: 2025-09-12 17:42:41.593 [INFO][5226] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" Namespace="calico-system" Pod="calico-kube-controllers-c8d94b68-lfb95" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0", GenerateName:"calico-kube-controllers-c8d94b68-", Namespace:"calico-system", SelfLink:"", UID:"24866c61-e05d-457b-a1e7-0f1c845f8a0f", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8d94b68", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c8d94b68-lfb95", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a1f14dd3cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:41.630844 containerd[1641]: 2025-09-12 17:42:41.593 [INFO][5226] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" Namespace="calico-system" Pod="calico-kube-controllers-c8d94b68-lfb95" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:42:41.630844 containerd[1641]: 2025-09-12 17:42:41.593 [INFO][5226] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a1f14dd3cd ContainerID="66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" Namespace="calico-system" Pod="calico-kube-controllers-c8d94b68-lfb95" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:42:41.630844 containerd[1641]: 2025-09-12 17:42:41.606 [INFO][5226] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" Namespace="calico-system" Pod="calico-kube-controllers-c8d94b68-lfb95" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:42:41.630844 containerd[1641]: 2025-09-12 
17:42:41.606 [INFO][5226] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" Namespace="calico-system" Pod="calico-kube-controllers-c8d94b68-lfb95" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0", GenerateName:"calico-kube-controllers-c8d94b68-", Namespace:"calico-system", SelfLink:"", UID:"24866c61-e05d-457b-a1e7-0f1c845f8a0f", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8d94b68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427", Pod:"calico-kube-controllers-c8d94b68-lfb95", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a1f14dd3cd", MAC:"8a:ec:12:23:84:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:41.630844 containerd[1641]: 2025-09-12 
17:42:41.618 [INFO][5226] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427" Namespace="calico-system" Pod="calico-kube-controllers-c8d94b68-lfb95" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:42:41.630467 systemd-networkd[1285]: vxlan.calico: Gained IPv6LL Sep 12 17:42:41.675775 containerd[1641]: time="2025-09-12T17:42:41.675341417Z" level=info msg="StartContainer for \"31c8d40d7e3a81f9cc06e664cac048bafd54a6bbbe1506b73f9693ef75f55434\" returns successfully" Sep 12 17:42:41.732489 containerd[1641]: time="2025-09-12T17:42:41.732021418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:41.732489 containerd[1641]: time="2025-09-12T17:42:41.732345332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:41.732489 containerd[1641]: time="2025-09-12T17:42:41.732359055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:41.732489 containerd[1641]: time="2025-09-12T17:42:41.732446612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:41.757028 systemd-networkd[1285]: cali1e02ad146c9: Link UP Sep 12 17:42:41.757413 systemd-networkd[1285]: cali1e02ad146c9: Gained carrier Sep 12 17:42:41.778740 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.474 [INFO][5238] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6vqsr-eth0 csi-node-driver- calico-system 6c2d1714-1041-45d2-9888-c6e010910454 993 0 2025-09-12 17:42:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6vqsr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1e02ad146c9 [] [] }} ContainerID="94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" Namespace="calico-system" Pod="csi-node-driver-6vqsr" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vqsr-" Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.474 [INFO][5238] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" Namespace="calico-system" Pod="csi-node-driver-6vqsr" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.533 [INFO][5253] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" HandleID="k8s-pod-network.94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" 
Workload="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.539 [INFO][5253] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" HandleID="k8s-pod-network.94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" Workload="localhost-k8s-csi--node--driver--6vqsr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6vqsr", "timestamp":"2025-09-12 17:42:41.533619275 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.540 [INFO][5253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.590 [INFO][5253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.590 [INFO][5253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.658 [INFO][5253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" host="localhost" Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.670 [INFO][5253] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.687 [INFO][5253] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.694 [INFO][5253] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.697 [INFO][5253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.703 [INFO][5253] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" host="localhost" Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.708 [INFO][5253] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061 Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.721 [INFO][5253] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" host="localhost" Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.751 [INFO][5253] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" host="localhost" Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.752 [INFO][5253] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" host="localhost" Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.752 [INFO][5253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:41.787771 containerd[1641]: 2025-09-12 17:42:41.752 [INFO][5253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" HandleID="k8s-pod-network.94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" Workload="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:42:41.789081 containerd[1641]: 2025-09-12 17:42:41.755 [INFO][5238] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" Namespace="calico-system" Pod="csi-node-driver-6vqsr" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vqsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6vqsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6c2d1714-1041-45d2-9888-c6e010910454", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6vqsr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e02ad146c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:41.789081 containerd[1641]: 2025-09-12 17:42:41.755 [INFO][5238] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" Namespace="calico-system" Pod="csi-node-driver-6vqsr" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:42:41.789081 containerd[1641]: 2025-09-12 17:42:41.755 [INFO][5238] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e02ad146c9 ContainerID="94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" Namespace="calico-system" Pod="csi-node-driver-6vqsr" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:42:41.789081 containerd[1641]: 2025-09-12 17:42:41.757 [INFO][5238] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" Namespace="calico-system" Pod="csi-node-driver-6vqsr" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:42:41.789081 containerd[1641]: 2025-09-12 17:42:41.757 [INFO][5238] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" 
Namespace="calico-system" Pod="csi-node-driver-6vqsr" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vqsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6vqsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6c2d1714-1041-45d2-9888-c6e010910454", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061", Pod:"csi-node-driver-6vqsr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e02ad146c9", MAC:"be:7e:9d:17:7a:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:41.789081 containerd[1641]: 2025-09-12 17:42:41.776 [INFO][5238] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061" Namespace="calico-system" Pod="csi-node-driver-6vqsr" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:42:41.802927 containerd[1641]: time="2025-09-12T17:42:41.802620230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:41.802927 containerd[1641]: time="2025-09-12T17:42:41.802731462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:41.802927 containerd[1641]: time="2025-09-12T17:42:41.802757613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:41.802927 containerd[1641]: time="2025-09-12T17:42:41.802852717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:41.826898 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:42:41.840042 containerd[1641]: time="2025-09-12T17:42:41.839965096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6vqsr,Uid:6c2d1714-1041-45d2-9888-c6e010910454,Namespace:calico-system,Attempt:1,} returns sandbox id \"94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061\"" Sep 12 17:42:41.848685 containerd[1641]: time="2025-09-12T17:42:41.847487791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:42:41.893732 containerd[1641]: time="2025-09-12T17:42:41.893696084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8d94b68-lfb95,Uid:24866c61-e05d-457b-a1e7-0f1c845f8a0f,Namespace:calico-system,Attempt:1,} returns sandbox id \"66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427\"" Sep 12 17:42:42.205986 containerd[1641]: time="2025-09-12T17:42:42.205796281Z" level=info msg="StopPodSandbox for 
\"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\"" Sep 12 17:42:42.305842 containerd[1641]: 2025-09-12 17:42:42.261 [INFO][5415] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Sep 12 17:42:42.305842 containerd[1641]: 2025-09-12 17:42:42.261 [INFO][5415] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" iface="eth0" netns="/var/run/netns/cni-9d34ebc0-6cac-33fb-7100-522f487743be" Sep 12 17:42:42.305842 containerd[1641]: 2025-09-12 17:42:42.262 [INFO][5415] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" iface="eth0" netns="/var/run/netns/cni-9d34ebc0-6cac-33fb-7100-522f487743be" Sep 12 17:42:42.305842 containerd[1641]: 2025-09-12 17:42:42.263 [INFO][5415] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" iface="eth0" netns="/var/run/netns/cni-9d34ebc0-6cac-33fb-7100-522f487743be" Sep 12 17:42:42.305842 containerd[1641]: 2025-09-12 17:42:42.263 [INFO][5415] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Sep 12 17:42:42.305842 containerd[1641]: 2025-09-12 17:42:42.263 [INFO][5415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Sep 12 17:42:42.305842 containerd[1641]: 2025-09-12 17:42:42.293 [INFO][5424] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" HandleID="k8s-pod-network.b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Workload="localhost-k8s-goldmane--7988f88666--drjps-eth0" Sep 12 17:42:42.305842 containerd[1641]: 2025-09-12 17:42:42.293 [INFO][5424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:42.305842 containerd[1641]: 2025-09-12 17:42:42.293 [INFO][5424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:42.305842 containerd[1641]: 2025-09-12 17:42:42.298 [WARNING][5424] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" HandleID="k8s-pod-network.b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Workload="localhost-k8s-goldmane--7988f88666--drjps-eth0" Sep 12 17:42:42.305842 containerd[1641]: 2025-09-12 17:42:42.298 [INFO][5424] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" HandleID="k8s-pod-network.b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Workload="localhost-k8s-goldmane--7988f88666--drjps-eth0" Sep 12 17:42:42.305842 containerd[1641]: 2025-09-12 17:42:42.300 [INFO][5424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:42.305842 containerd[1641]: 2025-09-12 17:42:42.302 [INFO][5415] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Sep 12 17:42:42.305842 containerd[1641]: time="2025-09-12T17:42:42.305736429Z" level=info msg="TearDown network for sandbox \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\" successfully" Sep 12 17:42:42.305842 containerd[1641]: time="2025-09-12T17:42:42.305767851Z" level=info msg="StopPodSandbox for \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\" returns successfully" Sep 12 17:42:42.305520 systemd[1]: run-netns-cni\x2d9d34ebc0\x2d6cac\x2d33fb\x2d7100\x2d522f487743be.mount: Deactivated successfully. 
Sep 12 17:42:42.312892 containerd[1641]: time="2025-09-12T17:42:42.311894315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-drjps,Uid:4c40b048-393b-4ff5-8a75-2bc642369f86,Namespace:calico-system,Attempt:1,}" Sep 12 17:42:42.430397 systemd-networkd[1285]: calia95477b9d3c: Link UP Sep 12 17:42:42.430515 systemd-networkd[1285]: calia95477b9d3c: Gained carrier Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.354 [INFO][5432] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--drjps-eth0 goldmane-7988f88666- calico-system 4c40b048-393b-4ff5-8a75-2bc642369f86 1011 0 2025-09-12 17:42:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-drjps eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia95477b9d3c [] [] }} ContainerID="6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" Namespace="calico-system" Pod="goldmane-7988f88666-drjps" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--drjps-" Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.354 [INFO][5432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" Namespace="calico-system" Pod="goldmane-7988f88666-drjps" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--drjps-eth0" Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.391 [INFO][5443] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" HandleID="k8s-pod-network.6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" Workload="localhost-k8s-goldmane--7988f88666--drjps-eth0" 
Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.392 [INFO][5443] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" HandleID="k8s-pod-network.6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" Workload="localhost-k8s-goldmane--7988f88666--drjps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-drjps", "timestamp":"2025-09-12 17:42:42.391980183 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.392 [INFO][5443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.392 [INFO][5443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.392 [INFO][5443] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.398 [INFO][5443] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" host="localhost" Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.401 [INFO][5443] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.404 [INFO][5443] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.405 [INFO][5443] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.409 [INFO][5443] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.409 [INFO][5443] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" host="localhost" Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.412 [INFO][5443] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.415 [INFO][5443] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" host="localhost" Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.420 [INFO][5443] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" host="localhost" Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.420 [INFO][5443] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" host="localhost" Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.420 [INFO][5443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:42.448332 containerd[1641]: 2025-09-12 17:42:42.420 [INFO][5443] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" HandleID="k8s-pod-network.6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" Workload="localhost-k8s-goldmane--7988f88666--drjps-eth0" Sep 12 17:42:42.458287 containerd[1641]: 2025-09-12 17:42:42.426 [INFO][5432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" Namespace="calico-system" Pod="goldmane-7988f88666-drjps" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--drjps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--drjps-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"4c40b048-393b-4ff5-8a75-2bc642369f86", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-drjps", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia95477b9d3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:42.458287 containerd[1641]: 2025-09-12 17:42:42.426 [INFO][5432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" Namespace="calico-system" Pod="goldmane-7988f88666-drjps" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--drjps-eth0" Sep 12 17:42:42.458287 containerd[1641]: 2025-09-12 17:42:42.426 [INFO][5432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia95477b9d3c ContainerID="6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" Namespace="calico-system" Pod="goldmane-7988f88666-drjps" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--drjps-eth0" Sep 12 17:42:42.458287 containerd[1641]: 2025-09-12 17:42:42.431 [INFO][5432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" Namespace="calico-system" Pod="goldmane-7988f88666-drjps" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--drjps-eth0" Sep 12 17:42:42.458287 containerd[1641]: 2025-09-12 17:42:42.431 [INFO][5432] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" Namespace="calico-system" 
Pod="goldmane-7988f88666-drjps" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--drjps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--drjps-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"4c40b048-393b-4ff5-8a75-2bc642369f86", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc", Pod:"goldmane-7988f88666-drjps", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia95477b9d3c", MAC:"36:aa:ba:32:c7:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:42.458287 containerd[1641]: 2025-09-12 17:42:42.442 [INFO][5432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc" Namespace="calico-system" Pod="goldmane-7988f88666-drjps" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--drjps-eth0" Sep 12 17:42:42.459816 systemd-resolved[1540]: Under memory pressure, flushing 
caches. Sep 12 17:42:42.460796 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:42:42.459828 systemd-resolved[1540]: Flushed all caches. Sep 12 17:42:42.499517 containerd[1641]: time="2025-09-12T17:42:42.499454283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:42:42.499517 containerd[1641]: time="2025-09-12T17:42:42.499493662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:42:42.499517 containerd[1641]: time="2025-09-12T17:42:42.499503843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:42.500261 containerd[1641]: time="2025-09-12T17:42:42.499567993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:42:42.537492 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:42:42.566546 containerd[1641]: time="2025-09-12T17:42:42.566494166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-drjps,Uid:4c40b048-393b-4ff5-8a75-2bc642369f86,Namespace:calico-system,Attempt:1,} returns sandbox id \"6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc\"" Sep 12 17:42:42.642467 kubelet[2895]: I0912 17:42:42.642426 2895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:42:42.907760 systemd-networkd[1285]: cali8a1f14dd3cd: Gained IPv6LL Sep 12 17:42:43.099776 systemd-networkd[1285]: cali1e02ad146c9: Gained IPv6LL Sep 12 17:42:43.247887 kubelet[2895]: I0912 17:42:43.247761 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dfb6749f-f67rq" podStartSLOduration=29.210183169 
podStartE2EDuration="32.247748111s" podCreationTimestamp="2025-09-12 17:42:11 +0000 UTC" firstStartedPulling="2025-09-12 17:42:38.418199227 +0000 UTC m=+40.368878260" lastFinishedPulling="2025-09-12 17:42:41.455764172 +0000 UTC m=+43.406443202" observedRunningTime="2025-09-12 17:42:42.678469109 +0000 UTC m=+44.629148148" watchObservedRunningTime="2025-09-12 17:42:43.247748111 +0000 UTC m=+45.198427143" Sep 12 17:42:43.343729 containerd[1641]: time="2025-09-12T17:42:43.343705540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:43.344360 containerd[1641]: time="2025-09-12T17:42:43.344091329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:42:43.344796 containerd[1641]: time="2025-09-12T17:42:43.344772124Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:43.346995 containerd[1641]: time="2025-09-12T17:42:43.346165983Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.497479101s" Sep 12 17:42:43.346995 containerd[1641]: time="2025-09-12T17:42:43.346184985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:42:43.350059 containerd[1641]: time="2025-09-12T17:42:43.349125922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:42:43.357355 containerd[1641]: time="2025-09-12T17:42:43.357042397Z" level=info 
msg="CreateContainer within sandbox \"94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:42:43.373363 containerd[1641]: time="2025-09-12T17:42:43.373343228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:43.384721 containerd[1641]: time="2025-09-12T17:42:43.384694942Z" level=info msg="CreateContainer within sandbox \"94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9c92fc5d52b73f29ad7783713297d84fbe281a9333cca2bf0c87bb29505281e0\"" Sep 12 17:42:43.385103 containerd[1641]: time="2025-09-12T17:42:43.385085015Z" level=info msg="StartContainer for \"9c92fc5d52b73f29ad7783713297d84fbe281a9333cca2bf0c87bb29505281e0\"" Sep 12 17:42:43.454726 containerd[1641]: time="2025-09-12T17:42:43.454699896Z" level=info msg="StartContainer for \"9c92fc5d52b73f29ad7783713297d84fbe281a9333cca2bf0c87bb29505281e0\" returns successfully" Sep 12 17:42:43.483831 systemd-networkd[1285]: calia95477b9d3c: Gained IPv6LL Sep 12 17:42:46.061186 containerd[1641]: time="2025-09-12T17:42:46.061146344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:46.083009 containerd[1641]: time="2025-09-12T17:42:46.082678039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:42:46.095637 containerd[1641]: time="2025-09-12T17:42:46.095340521Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:46.122778 containerd[1641]: 
time="2025-09-12T17:42:46.122543641Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:46.123256 containerd[1641]: time="2025-09-12T17:42:46.122889089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.773743031s" Sep 12 17:42:46.123256 containerd[1641]: time="2025-09-12T17:42:46.122907914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:42:46.320309 containerd[1641]: time="2025-09-12T17:42:46.320221558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:42:46.363806 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:42:46.387184 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:42:46.363829 systemd-resolved[1540]: Flushed all caches. 
Sep 12 17:42:46.411979 containerd[1641]: time="2025-09-12T17:42:46.411946956Z" level=info msg="CreateContainer within sandbox \"66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:42:46.450249 containerd[1641]: time="2025-09-12T17:42:46.450222167Z" level=info msg="CreateContainer within sandbox \"66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a1623582738cb046d87a84138b42c2b0a6ebec35ee504d16e02a58b413cb6eb6\"" Sep 12 17:42:46.450772 containerd[1641]: time="2025-09-12T17:42:46.450723364Z" level=info msg="StartContainer for \"a1623582738cb046d87a84138b42c2b0a6ebec35ee504d16e02a58b413cb6eb6\"" Sep 12 17:42:46.616971 containerd[1641]: time="2025-09-12T17:42:46.616852508Z" level=info msg="StartContainer for \"a1623582738cb046d87a84138b42c2b0a6ebec35ee504d16e02a58b413cb6eb6\" returns successfully" Sep 12 17:42:47.426388 kubelet[2895]: I0912 17:42:47.420972 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c8d94b68-lfb95" podStartSLOduration=30.099779139 podStartE2EDuration="34.394792746s" podCreationTimestamp="2025-09-12 17:42:13 +0000 UTC" firstStartedPulling="2025-09-12 17:42:41.895721173 +0000 UTC m=+43.846400205" lastFinishedPulling="2025-09-12 17:42:46.190734781 +0000 UTC m=+48.141413812" observedRunningTime="2025-09-12 17:42:47.384497398 +0000 UTC m=+49.335176437" watchObservedRunningTime="2025-09-12 17:42:47.394792746 +0000 UTC m=+49.345471785" Sep 12 17:42:47.465545 systemd[1]: run-containerd-runc-k8s.io-a1623582738cb046d87a84138b42c2b0a6ebec35ee504d16e02a58b413cb6eb6-runc.KkTkv0.mount: Deactivated successfully. Sep 12 17:42:48.411913 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:42:48.421295 systemd-journald[1200]: Under memory pressure, flushing caches. 
Sep 12 17:42:48.411934 systemd-resolved[1540]: Flushed all caches. Sep 12 17:42:48.447570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount579554294.mount: Deactivated successfully. Sep 12 17:42:49.403796 containerd[1641]: time="2025-09-12T17:42:49.403771500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:49.406403 containerd[1641]: time="2025-09-12T17:42:49.405331466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:42:49.406403 containerd[1641]: time="2025-09-12T17:42:49.405703203Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:49.426257 containerd[1641]: time="2025-09-12T17:42:49.426230397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:49.440981 containerd[1641]: time="2025-09-12T17:42:49.440958214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.120706207s" Sep 12 17:42:49.441083 containerd[1641]: time="2025-09-12T17:42:49.441073691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:42:49.449550 containerd[1641]: time="2025-09-12T17:42:49.449529709Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:42:49.485482 containerd[1641]: time="2025-09-12T17:42:49.485448256Z" level=info msg="CreateContainer within sandbox \"6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:42:49.517106 containerd[1641]: time="2025-09-12T17:42:49.517084259Z" level=info msg="CreateContainer within sandbox \"6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d2e1b79034d3266c367f2927833b3a52266478d4c4566983f5cb2fdf691448ca\"" Sep 12 17:42:49.527558 containerd[1641]: time="2025-09-12T17:42:49.527538283Z" level=info msg="StartContainer for \"d2e1b79034d3266c367f2927833b3a52266478d4c4566983f5cb2fdf691448ca\"" Sep 12 17:42:49.647026 containerd[1641]: time="2025-09-12T17:42:49.646983345Z" level=info msg="StartContainer for \"d2e1b79034d3266c367f2927833b3a52266478d4c4566983f5cb2fdf691448ca\" returns successfully" Sep 12 17:42:50.460795 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:42:50.460596 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:42:50.460612 systemd-resolved[1540]: Flushed all caches. 
Sep 12 17:42:50.644451 kubelet[2895]: I0912 17:42:50.644297 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-drjps" podStartSLOduration=31.757504349 podStartE2EDuration="38.632439149s" podCreationTimestamp="2025-09-12 17:42:12 +0000 UTC" firstStartedPulling="2025-09-12 17:42:42.567805392 +0000 UTC m=+44.518484421" lastFinishedPulling="2025-09-12 17:42:49.44274019 +0000 UTC m=+51.393419221" observedRunningTime="2025-09-12 17:42:50.451520011 +0000 UTC m=+52.402199045" watchObservedRunningTime="2025-09-12 17:42:50.632439149 +0000 UTC m=+52.583118193" Sep 12 17:42:51.241214 containerd[1641]: time="2025-09-12T17:42:51.240790410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:51.244602 containerd[1641]: time="2025-09-12T17:42:51.244575703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:42:51.257840 containerd[1641]: time="2025-09-12T17:42:51.257795583Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:51.264239 containerd[1641]: time="2025-09-12T17:42:51.264206868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:51.264887 containerd[1641]: time="2025-09-12T17:42:51.264776715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.811813198s" Sep 12 17:42:51.264887 containerd[1641]: time="2025-09-12T17:42:51.264806285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:42:51.277110 containerd[1641]: time="2025-09-12T17:42:51.277037883Z" level=info msg="CreateContainer within sandbox \"94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:42:51.307247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89561362.mount: Deactivated successfully. Sep 12 17:42:51.342187 containerd[1641]: time="2025-09-12T17:42:51.342140348Z" level=info msg="CreateContainer within sandbox \"94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1aa1d63af50d1beab87a3fac08ae1373efdc33c43d24894bf798985817cdb374\"" Sep 12 17:42:51.345219 containerd[1641]: time="2025-09-12T17:42:51.343417961Z" level=info msg="StartContainer for \"1aa1d63af50d1beab87a3fac08ae1373efdc33c43d24894bf798985817cdb374\"" Sep 12 17:42:51.489327 containerd[1641]: time="2025-09-12T17:42:51.489261459Z" level=info msg="StartContainer for \"1aa1d63af50d1beab87a3fac08ae1373efdc33c43d24894bf798985817cdb374\" returns successfully" Sep 12 17:42:52.511666 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:42:52.511926 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:42:52.511953 systemd-resolved[1540]: Flushed all caches. 
Sep 12 17:42:52.717267 kubelet[2895]: I0912 17:42:52.714853 2895 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:42:52.719160 kubelet[2895]: I0912 17:42:52.719140 2895 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:42:56.411859 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:42:56.413660 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:42:56.411866 systemd-resolved[1540]: Flushed all caches. Sep 12 17:42:58.317344 containerd[1641]: time="2025-09-12T17:42:58.317217878Z" level=info msg="StopPodSandbox for \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\"" Sep 12 17:42:58.460720 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:42:58.459824 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:42:58.459830 systemd-resolved[1540]: Flushed all caches. Sep 12 17:42:59.658737 containerd[1641]: 2025-09-12 17:42:59.074 [WARNING][5768] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--drjps-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"4c40b048-393b-4ff5-8a75-2bc642369f86", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc", Pod:"goldmane-7988f88666-drjps", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia95477b9d3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:59.658737 containerd[1641]: 2025-09-12 17:42:59.083 [INFO][5768] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Sep 12 17:42:59.658737 containerd[1641]: 2025-09-12 17:42:59.083 [INFO][5768] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" iface="eth0" netns="" Sep 12 17:42:59.658737 containerd[1641]: 2025-09-12 17:42:59.083 [INFO][5768] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Sep 12 17:42:59.658737 containerd[1641]: 2025-09-12 17:42:59.083 [INFO][5768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Sep 12 17:42:59.658737 containerd[1641]: 2025-09-12 17:42:59.636 [INFO][5775] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" HandleID="k8s-pod-network.b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Workload="localhost-k8s-goldmane--7988f88666--drjps-eth0" Sep 12 17:42:59.658737 containerd[1641]: 2025-09-12 17:42:59.639 [INFO][5775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:59.658737 containerd[1641]: 2025-09-12 17:42:59.640 [INFO][5775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:59.658737 containerd[1641]: 2025-09-12 17:42:59.652 [WARNING][5775] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" HandleID="k8s-pod-network.b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Workload="localhost-k8s-goldmane--7988f88666--drjps-eth0" Sep 12 17:42:59.658737 containerd[1641]: 2025-09-12 17:42:59.652 [INFO][5775] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" HandleID="k8s-pod-network.b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Workload="localhost-k8s-goldmane--7988f88666--drjps-eth0" Sep 12 17:42:59.658737 containerd[1641]: 2025-09-12 17:42:59.654 [INFO][5775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:59.658737 containerd[1641]: 2025-09-12 17:42:59.655 [INFO][5768] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Sep 12 17:42:59.695810 containerd[1641]: time="2025-09-12T17:42:59.658813514Z" level=info msg="TearDown network for sandbox \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\" successfully" Sep 12 17:42:59.695810 containerd[1641]: time="2025-09-12T17:42:59.658831016Z" level=info msg="StopPodSandbox for \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\" returns successfully" Sep 12 17:42:59.800118 containerd[1641]: time="2025-09-12T17:42:59.799964896Z" level=info msg="RemovePodSandbox for \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\"" Sep 12 17:42:59.809303 containerd[1641]: time="2025-09-12T17:42:59.809122306Z" level=info msg="Forcibly stopping sandbox \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\"" Sep 12 17:42:59.856387 containerd[1641]: 2025-09-12 17:42:59.832 [WARNING][5790] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--drjps-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"4c40b048-393b-4ff5-8a75-2bc642369f86", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6cddc80c51a28eee952baebb4ffeee611cc0ad4e8f5034bac4a21774ccc4c9bc", Pod:"goldmane-7988f88666-drjps", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia95477b9d3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:59.856387 containerd[1641]: 2025-09-12 17:42:59.832 [INFO][5790] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Sep 12 17:42:59.856387 containerd[1641]: 2025-09-12 17:42:59.832 [INFO][5790] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" iface="eth0" netns="" Sep 12 17:42:59.856387 containerd[1641]: 2025-09-12 17:42:59.832 [INFO][5790] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Sep 12 17:42:59.856387 containerd[1641]: 2025-09-12 17:42:59.832 [INFO][5790] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Sep 12 17:42:59.856387 containerd[1641]: 2025-09-12 17:42:59.848 [INFO][5797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" HandleID="k8s-pod-network.b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Workload="localhost-k8s-goldmane--7988f88666--drjps-eth0" Sep 12 17:42:59.856387 containerd[1641]: 2025-09-12 17:42:59.848 [INFO][5797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:59.856387 containerd[1641]: 2025-09-12 17:42:59.848 [INFO][5797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:59.856387 containerd[1641]: 2025-09-12 17:42:59.851 [WARNING][5797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" HandleID="k8s-pod-network.b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Workload="localhost-k8s-goldmane--7988f88666--drjps-eth0" Sep 12 17:42:59.856387 containerd[1641]: 2025-09-12 17:42:59.851 [INFO][5797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" HandleID="k8s-pod-network.b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Workload="localhost-k8s-goldmane--7988f88666--drjps-eth0" Sep 12 17:42:59.856387 containerd[1641]: 2025-09-12 17:42:59.852 [INFO][5797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:59.856387 containerd[1641]: 2025-09-12 17:42:59.854 [INFO][5790] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb" Sep 12 17:42:59.869133 containerd[1641]: time="2025-09-12T17:42:59.856596035Z" level=info msg="TearDown network for sandbox \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\" successfully" Sep 12 17:42:59.899326 containerd[1641]: time="2025-09-12T17:42:59.899291442Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:42:59.904281 containerd[1641]: time="2025-09-12T17:42:59.904256320Z" level=info msg="RemovePodSandbox \"b35ee9b2a8faded9ee2677b6f360ed513e62151ee68e0ff9247f8c45ddea6deb\" returns successfully" Sep 12 17:42:59.907162 containerd[1641]: time="2025-09-12T17:42:59.907134809Z" level=info msg="StopPodSandbox for \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\"" Sep 12 17:42:59.977600 containerd[1641]: 2025-09-12 17:42:59.947 [WARNING][5811] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0", GenerateName:"calico-apiserver-6dfb6749f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5b939991-56b9-412e-bee1-03c9df66c4a5", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dfb6749f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055", Pod:"calico-apiserver-6dfb6749f-f67rq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d71fd4b0bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:59.977600 containerd[1641]: 2025-09-12 17:42:59.947 [INFO][5811] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Sep 12 17:42:59.977600 containerd[1641]: 2025-09-12 17:42:59.947 [INFO][5811] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" iface="eth0" netns="" Sep 12 17:42:59.977600 containerd[1641]: 2025-09-12 17:42:59.947 [INFO][5811] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Sep 12 17:42:59.977600 containerd[1641]: 2025-09-12 17:42:59.947 [INFO][5811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Sep 12 17:42:59.977600 containerd[1641]: 2025-09-12 17:42:59.961 [INFO][5818] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" HandleID="k8s-pod-network.a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Workload="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:42:59.977600 containerd[1641]: 2025-09-12 17:42:59.961 [INFO][5818] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:59.977600 containerd[1641]: 2025-09-12 17:42:59.961 [INFO][5818] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:59.977600 containerd[1641]: 2025-09-12 17:42:59.965 [WARNING][5818] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" HandleID="k8s-pod-network.a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Workload="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:42:59.977600 containerd[1641]: 2025-09-12 17:42:59.965 [INFO][5818] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" HandleID="k8s-pod-network.a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Workload="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:42:59.977600 containerd[1641]: 2025-09-12 17:42:59.974 [INFO][5818] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:59.977600 containerd[1641]: 2025-09-12 17:42:59.976 [INFO][5811] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Sep 12 17:42:59.978359 containerd[1641]: time="2025-09-12T17:42:59.977703524Z" level=info msg="TearDown network for sandbox \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\" successfully" Sep 12 17:42:59.978359 containerd[1641]: time="2025-09-12T17:42:59.978115595Z" level=info msg="StopPodSandbox for \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\" returns successfully" Sep 12 17:42:59.979968 containerd[1641]: time="2025-09-12T17:42:59.978563966Z" level=info msg="RemovePodSandbox for \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\"" Sep 12 17:42:59.979968 containerd[1641]: time="2025-09-12T17:42:59.978587094Z" level=info msg="Forcibly stopping sandbox \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\"" Sep 12 17:43:00.039675 containerd[1641]: 2025-09-12 17:43:00.007 [WARNING][5832] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0", GenerateName:"calico-apiserver-6dfb6749f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5b939991-56b9-412e-bee1-03c9df66c4a5", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dfb6749f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38409e90624c9ab0ea222e92fce0b2a52a67667754b92866d62d473cc897a055", Pod:"calico-apiserver-6dfb6749f-f67rq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d71fd4b0bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:00.039675 containerd[1641]: 2025-09-12 17:43:00.008 [INFO][5832] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Sep 12 17:43:00.039675 containerd[1641]: 2025-09-12 17:43:00.008 [INFO][5832] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" iface="eth0" netns="" Sep 12 17:43:00.039675 containerd[1641]: 2025-09-12 17:43:00.008 [INFO][5832] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Sep 12 17:43:00.039675 containerd[1641]: 2025-09-12 17:43:00.008 [INFO][5832] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Sep 12 17:43:00.039675 containerd[1641]: 2025-09-12 17:43:00.024 [INFO][5839] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" HandleID="k8s-pod-network.a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Workload="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:43:00.039675 containerd[1641]: 2025-09-12 17:43:00.024 [INFO][5839] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:00.039675 containerd[1641]: 2025-09-12 17:43:00.024 [INFO][5839] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:00.039675 containerd[1641]: 2025-09-12 17:43:00.036 [WARNING][5839] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" HandleID="k8s-pod-network.a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Workload="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:43:00.039675 containerd[1641]: 2025-09-12 17:43:00.036 [INFO][5839] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" HandleID="k8s-pod-network.a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Workload="localhost-k8s-calico--apiserver--6dfb6749f--f67rq-eth0" Sep 12 17:43:00.039675 containerd[1641]: 2025-09-12 17:43:00.037 [INFO][5839] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:00.039675 containerd[1641]: 2025-09-12 17:43:00.038 [INFO][5832] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b" Sep 12 17:43:00.049105 containerd[1641]: time="2025-09-12T17:43:00.039716888Z" level=info msg="TearDown network for sandbox \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\" successfully" Sep 12 17:43:00.072016 containerd[1641]: time="2025-09-12T17:43:00.071982329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:00.072118 containerd[1641]: time="2025-09-12T17:43:00.072050629Z" level=info msg="RemovePodSandbox \"a248023f199cf36d3e6379abdc058c2efa84ba46b5f8beba0b03d875c3f3948b\" returns successfully" Sep 12 17:43:00.072638 containerd[1641]: time="2025-09-12T17:43:00.072451298Z" level=info msg="StopPodSandbox for \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\"" Sep 12 17:43:00.114697 containerd[1641]: 2025-09-12 17:43:00.093 [WARNING][5853] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6vqsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6c2d1714-1041-45d2-9888-c6e010910454", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061", Pod:"csi-node-driver-6vqsr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e02ad146c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:00.114697 containerd[1641]: 2025-09-12 17:43:00.093 [INFO][5853] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Sep 12 17:43:00.114697 containerd[1641]: 2025-09-12 17:43:00.093 [INFO][5853] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" iface="eth0" netns="" Sep 12 17:43:00.114697 containerd[1641]: 2025-09-12 17:43:00.093 [INFO][5853] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Sep 12 17:43:00.114697 containerd[1641]: 2025-09-12 17:43:00.093 [INFO][5853] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Sep 12 17:43:00.114697 containerd[1641]: 2025-09-12 17:43:00.107 [INFO][5860] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" HandleID="k8s-pod-network.7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Workload="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:43:00.114697 containerd[1641]: 2025-09-12 17:43:00.107 [INFO][5860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:00.114697 containerd[1641]: 2025-09-12 17:43:00.107 [INFO][5860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:00.114697 containerd[1641]: 2025-09-12 17:43:00.111 [WARNING][5860] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" HandleID="k8s-pod-network.7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Workload="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:43:00.114697 containerd[1641]: 2025-09-12 17:43:00.111 [INFO][5860] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" HandleID="k8s-pod-network.7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Workload="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:43:00.114697 containerd[1641]: 2025-09-12 17:43:00.111 [INFO][5860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:00.114697 containerd[1641]: 2025-09-12 17:43:00.113 [INFO][5853] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Sep 12 17:43:00.114697 containerd[1641]: time="2025-09-12T17:43:00.114605874Z" level=info msg="TearDown network for sandbox \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\" successfully" Sep 12 17:43:00.114697 containerd[1641]: time="2025-09-12T17:43:00.114620703Z" level=info msg="StopPodSandbox for \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\" returns successfully" Sep 12 17:43:00.119540 containerd[1641]: time="2025-09-12T17:43:00.114971044Z" level=info msg="RemovePodSandbox for \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\"" Sep 12 17:43:00.119540 containerd[1641]: time="2025-09-12T17:43:00.114989106Z" level=info msg="Forcibly stopping sandbox \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\"" Sep 12 17:43:00.160629 containerd[1641]: 2025-09-12 17:43:00.139 [WARNING][5874] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6vqsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6c2d1714-1041-45d2-9888-c6e010910454", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94dd150b5272d1758e03086a2f158474529679f8e5c185a27b7f4dd387873061", Pod:"csi-node-driver-6vqsr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e02ad146c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:00.160629 containerd[1641]: 2025-09-12 17:43:00.139 [INFO][5874] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Sep 12 17:43:00.160629 containerd[1641]: 2025-09-12 17:43:00.139 [INFO][5874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" iface="eth0" netns="" Sep 12 17:43:00.160629 containerd[1641]: 2025-09-12 17:43:00.139 [INFO][5874] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Sep 12 17:43:00.160629 containerd[1641]: 2025-09-12 17:43:00.139 [INFO][5874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Sep 12 17:43:00.160629 containerd[1641]: 2025-09-12 17:43:00.153 [INFO][5881] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" HandleID="k8s-pod-network.7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Workload="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:43:00.160629 containerd[1641]: 2025-09-12 17:43:00.153 [INFO][5881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:00.160629 containerd[1641]: 2025-09-12 17:43:00.153 [INFO][5881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:00.160629 containerd[1641]: 2025-09-12 17:43:00.157 [WARNING][5881] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" HandleID="k8s-pod-network.7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Workload="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:43:00.160629 containerd[1641]: 2025-09-12 17:43:00.157 [INFO][5881] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" HandleID="k8s-pod-network.7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Workload="localhost-k8s-csi--node--driver--6vqsr-eth0" Sep 12 17:43:00.160629 containerd[1641]: 2025-09-12 17:43:00.158 [INFO][5881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:00.160629 containerd[1641]: 2025-09-12 17:43:00.159 [INFO][5874] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d" Sep 12 17:43:00.161065 containerd[1641]: time="2025-09-12T17:43:00.160669361Z" level=info msg="TearDown network for sandbox \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\" successfully" Sep 12 17:43:00.195641 containerd[1641]: time="2025-09-12T17:43:00.195618359Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:00.195751 containerd[1641]: time="2025-09-12T17:43:00.195665184Z" level=info msg="RemovePodSandbox \"7b67f449a729ec2d052087f40913395567856a7b68990374299cf3ac06f5283d\" returns successfully" Sep 12 17:43:00.196003 containerd[1641]: time="2025-09-12T17:43:00.195828645Z" level=info msg="StopPodSandbox for \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\"" Sep 12 17:43:00.255633 containerd[1641]: 2025-09-12 17:43:00.224 [WARNING][5895] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0", GenerateName:"calico-apiserver-6dfb6749f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dfb6749f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f", Pod:"calico-apiserver-6dfb6749f-dhd9m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac77cadb0ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:00.255633 containerd[1641]: 2025-09-12 17:43:00.224 [INFO][5895] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Sep 12 17:43:00.255633 containerd[1641]: 2025-09-12 17:43:00.224 [INFO][5895] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" iface="eth0" netns="" Sep 12 17:43:00.255633 containerd[1641]: 2025-09-12 17:43:00.224 [INFO][5895] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Sep 12 17:43:00.255633 containerd[1641]: 2025-09-12 17:43:00.224 [INFO][5895] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Sep 12 17:43:00.255633 containerd[1641]: 2025-09-12 17:43:00.248 [INFO][5902] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" HandleID="k8s-pod-network.9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Workload="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:43:00.255633 containerd[1641]: 2025-09-12 17:43:00.248 [INFO][5902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:00.255633 containerd[1641]: 2025-09-12 17:43:00.248 [INFO][5902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:00.255633 containerd[1641]: 2025-09-12 17:43:00.252 [WARNING][5902] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" HandleID="k8s-pod-network.9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Workload="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:43:00.255633 containerd[1641]: 2025-09-12 17:43:00.252 [INFO][5902] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" HandleID="k8s-pod-network.9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Workload="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:43:00.255633 containerd[1641]: 2025-09-12 17:43:00.253 [INFO][5902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:00.255633 containerd[1641]: 2025-09-12 17:43:00.254 [INFO][5895] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Sep 12 17:43:00.255633 containerd[1641]: time="2025-09-12T17:43:00.255609466Z" level=info msg="TearDown network for sandbox \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\" successfully" Sep 12 17:43:00.255633 containerd[1641]: time="2025-09-12T17:43:00.255632673Z" level=info msg="StopPodSandbox for \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\" returns successfully" Sep 12 17:43:00.262914 containerd[1641]: time="2025-09-12T17:43:00.262730070Z" level=info msg="RemovePodSandbox for \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\"" Sep 12 17:43:00.262914 containerd[1641]: time="2025-09-12T17:43:00.262750797Z" level=info msg="Forcibly stopping sandbox \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\"" Sep 12 17:43:00.312351 containerd[1641]: 2025-09-12 17:43:00.285 [WARNING][5917] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0", GenerateName:"calico-apiserver-6dfb6749f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c38b3b4-43aa-4a98-86c2-8699bc6b7fbe", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dfb6749f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a9254ab6406afa58e2c70b06ca0cef6f4366e0dd63608baf966d8a50d25b50f", Pod:"calico-apiserver-6dfb6749f-dhd9m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac77cadb0ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:00.312351 containerd[1641]: 2025-09-12 17:43:00.285 [INFO][5917] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Sep 12 17:43:00.312351 containerd[1641]: 2025-09-12 17:43:00.285 [INFO][5917] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" iface="eth0" netns="" Sep 12 17:43:00.312351 containerd[1641]: 2025-09-12 17:43:00.285 [INFO][5917] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Sep 12 17:43:00.312351 containerd[1641]: 2025-09-12 17:43:00.285 [INFO][5917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Sep 12 17:43:00.312351 containerd[1641]: 2025-09-12 17:43:00.301 [INFO][5925] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" HandleID="k8s-pod-network.9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Workload="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:43:00.312351 containerd[1641]: 2025-09-12 17:43:00.301 [INFO][5925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:00.312351 containerd[1641]: 2025-09-12 17:43:00.301 [INFO][5925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:00.312351 containerd[1641]: 2025-09-12 17:43:00.305 [WARNING][5925] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" HandleID="k8s-pod-network.9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Workload="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:43:00.312351 containerd[1641]: 2025-09-12 17:43:00.305 [INFO][5925] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" HandleID="k8s-pod-network.9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Workload="localhost-k8s-calico--apiserver--6dfb6749f--dhd9m-eth0" Sep 12 17:43:00.312351 containerd[1641]: 2025-09-12 17:43:00.309 [INFO][5925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:00.312351 containerd[1641]: 2025-09-12 17:43:00.311 [INFO][5917] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2" Sep 12 17:43:00.312700 containerd[1641]: time="2025-09-12T17:43:00.312405861Z" level=info msg="TearDown network for sandbox \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\" successfully" Sep 12 17:43:00.313907 containerd[1641]: time="2025-09-12T17:43:00.313884551Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:00.313942 containerd[1641]: time="2025-09-12T17:43:00.313928925Z" level=info msg="RemovePodSandbox \"9865216b5e388799410009a233cd11b2f2c5c8bbe1ebfdb3e267767bebf367b2\" returns successfully" Sep 12 17:43:00.314273 containerd[1641]: time="2025-09-12T17:43:00.314257436Z" level=info msg="StopPodSandbox for \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\"" Sep 12 17:43:00.356861 containerd[1641]: 2025-09-12 17:43:00.335 [WARNING][5939] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" WorkloadEndpoint="localhost-k8s-whisker--b96f44689--f2d8j-eth0" Sep 12 17:43:00.356861 containerd[1641]: 2025-09-12 17:43:00.335 [INFO][5939] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Sep 12 17:43:00.356861 containerd[1641]: 2025-09-12 17:43:00.335 [INFO][5939] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" iface="eth0" netns="" Sep 12 17:43:00.356861 containerd[1641]: 2025-09-12 17:43:00.335 [INFO][5939] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Sep 12 17:43:00.356861 containerd[1641]: 2025-09-12 17:43:00.335 [INFO][5939] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Sep 12 17:43:00.356861 containerd[1641]: 2025-09-12 17:43:00.349 [INFO][5946] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" HandleID="k8s-pod-network.17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Workload="localhost-k8s-whisker--b96f44689--f2d8j-eth0" Sep 12 17:43:00.356861 containerd[1641]: 2025-09-12 17:43:00.349 [INFO][5946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:00.356861 containerd[1641]: 2025-09-12 17:43:00.349 [INFO][5946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:00.356861 containerd[1641]: 2025-09-12 17:43:00.353 [WARNING][5946] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" HandleID="k8s-pod-network.17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Workload="localhost-k8s-whisker--b96f44689--f2d8j-eth0" Sep 12 17:43:00.356861 containerd[1641]: 2025-09-12 17:43:00.353 [INFO][5946] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" HandleID="k8s-pod-network.17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Workload="localhost-k8s-whisker--b96f44689--f2d8j-eth0" Sep 12 17:43:00.356861 containerd[1641]: 2025-09-12 17:43:00.354 [INFO][5946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:00.356861 containerd[1641]: 2025-09-12 17:43:00.355 [INFO][5939] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Sep 12 17:43:00.357743 containerd[1641]: time="2025-09-12T17:43:00.356893647Z" level=info msg="TearDown network for sandbox \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\" successfully" Sep 12 17:43:00.357743 containerd[1641]: time="2025-09-12T17:43:00.356909672Z" level=info msg="StopPodSandbox for \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\" returns successfully" Sep 12 17:43:00.357743 containerd[1641]: time="2025-09-12T17:43:00.357210414Z" level=info msg="RemovePodSandbox for \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\"" Sep 12 17:43:00.357743 containerd[1641]: time="2025-09-12T17:43:00.357225438Z" level=info msg="Forcibly stopping sandbox \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\"" Sep 12 17:43:00.411822 containerd[1641]: 2025-09-12 17:43:00.387 [WARNING][5960] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" 
WorkloadEndpoint="localhost-k8s-whisker--b96f44689--f2d8j-eth0" Sep 12 17:43:00.411822 containerd[1641]: 2025-09-12 17:43:00.387 [INFO][5960] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Sep 12 17:43:00.411822 containerd[1641]: 2025-09-12 17:43:00.387 [INFO][5960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" iface="eth0" netns="" Sep 12 17:43:00.411822 containerd[1641]: 2025-09-12 17:43:00.387 [INFO][5960] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Sep 12 17:43:00.411822 containerd[1641]: 2025-09-12 17:43:00.388 [INFO][5960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Sep 12 17:43:00.411822 containerd[1641]: 2025-09-12 17:43:00.402 [INFO][5968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" HandleID="k8s-pod-network.17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Workload="localhost-k8s-whisker--b96f44689--f2d8j-eth0" Sep 12 17:43:00.411822 containerd[1641]: 2025-09-12 17:43:00.402 [INFO][5968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:00.411822 containerd[1641]: 2025-09-12 17:43:00.402 [INFO][5968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:00.411822 containerd[1641]: 2025-09-12 17:43:00.408 [WARNING][5968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" HandleID="k8s-pod-network.17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Workload="localhost-k8s-whisker--b96f44689--f2d8j-eth0" Sep 12 17:43:00.411822 containerd[1641]: 2025-09-12 17:43:00.408 [INFO][5968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" HandleID="k8s-pod-network.17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Workload="localhost-k8s-whisker--b96f44689--f2d8j-eth0" Sep 12 17:43:00.411822 containerd[1641]: 2025-09-12 17:43:00.408 [INFO][5968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:00.411822 containerd[1641]: 2025-09-12 17:43:00.410 [INFO][5960] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245" Sep 12 17:43:00.412517 containerd[1641]: time="2025-09-12T17:43:00.411847571Z" level=info msg="TearDown network for sandbox \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\" successfully" Sep 12 17:43:00.414696 containerd[1641]: time="2025-09-12T17:43:00.414677164Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:00.414750 containerd[1641]: time="2025-09-12T17:43:00.414722004Z" level=info msg="RemovePodSandbox \"17366caa13f566b2e5e432de2e357301c8eea0bd5d2e2a3e4e23697c72aa7245\" returns successfully" Sep 12 17:43:00.415016 containerd[1641]: time="2025-09-12T17:43:00.415004033Z" level=info msg="StopPodSandbox for \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\"" Sep 12 17:43:00.480438 containerd[1641]: 2025-09-12 17:43:00.437 [WARNING][5982] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"66750a83-1e25-4052-acac-1ee5648a6796", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8", Pod:"coredns-7c65d6cfc9-2clqw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ff2e37bc47", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:00.480438 containerd[1641]: 2025-09-12 17:43:00.438 [INFO][5982] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Sep 12 17:43:00.480438 containerd[1641]: 2025-09-12 17:43:00.438 [INFO][5982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" iface="eth0" netns="" Sep 12 17:43:00.480438 containerd[1641]: 2025-09-12 17:43:00.438 [INFO][5982] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Sep 12 17:43:00.480438 containerd[1641]: 2025-09-12 17:43:00.438 [INFO][5982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Sep 12 17:43:00.480438 containerd[1641]: 2025-09-12 17:43:00.471 [INFO][5989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" HandleID="k8s-pod-network.892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Workload="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:43:00.480438 containerd[1641]: 2025-09-12 17:43:00.471 [INFO][5989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:43:00.480438 containerd[1641]: 2025-09-12 17:43:00.471 [INFO][5989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:00.480438 containerd[1641]: 2025-09-12 17:43:00.476 [WARNING][5989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" HandleID="k8s-pod-network.892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Workload="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:43:00.480438 containerd[1641]: 2025-09-12 17:43:00.476 [INFO][5989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" HandleID="k8s-pod-network.892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Workload="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:43:00.480438 containerd[1641]: 2025-09-12 17:43:00.477 [INFO][5989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:00.480438 containerd[1641]: 2025-09-12 17:43:00.479 [INFO][5982] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Sep 12 17:43:00.480946 containerd[1641]: time="2025-09-12T17:43:00.480476276Z" level=info msg="TearDown network for sandbox \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\" successfully" Sep 12 17:43:00.480946 containerd[1641]: time="2025-09-12T17:43:00.480491952Z" level=info msg="StopPodSandbox for \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\" returns successfully" Sep 12 17:43:00.480946 containerd[1641]: time="2025-09-12T17:43:00.480823875Z" level=info msg="RemovePodSandbox for \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\"" Sep 12 17:43:00.480946 containerd[1641]: time="2025-09-12T17:43:00.480841834Z" level=info msg="Forcibly stopping sandbox \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\"" Sep 12 17:43:00.522464 containerd[1641]: 2025-09-12 17:43:00.501 [WARNING][6003] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"66750a83-1e25-4052-acac-1ee5648a6796", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3553c46c74992c31042686032815277c4294b9b47054f007fabad970ab77d1d8", Pod:"coredns-7c65d6cfc9-2clqw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ff2e37bc47", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:00.522464 containerd[1641]: 2025-09-12 17:43:00.502 [INFO][6003] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Sep 12 17:43:00.522464 containerd[1641]: 2025-09-12 17:43:00.502 [INFO][6003] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" iface="eth0" netns="" Sep 12 17:43:00.522464 containerd[1641]: 2025-09-12 17:43:00.502 [INFO][6003] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Sep 12 17:43:00.522464 containerd[1641]: 2025-09-12 17:43:00.502 [INFO][6003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Sep 12 17:43:00.522464 containerd[1641]: 2025-09-12 17:43:00.515 [INFO][6010] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" HandleID="k8s-pod-network.892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Workload="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:43:00.522464 containerd[1641]: 2025-09-12 17:43:00.515 [INFO][6010] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:00.522464 containerd[1641]: 2025-09-12 17:43:00.515 [INFO][6010] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:00.522464 containerd[1641]: 2025-09-12 17:43:00.518 [WARNING][6010] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" HandleID="k8s-pod-network.892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Workload="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:43:00.522464 containerd[1641]: 2025-09-12 17:43:00.518 [INFO][6010] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" HandleID="k8s-pod-network.892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Workload="localhost-k8s-coredns--7c65d6cfc9--2clqw-eth0" Sep 12 17:43:00.522464 containerd[1641]: 2025-09-12 17:43:00.519 [INFO][6010] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:00.522464 containerd[1641]: 2025-09-12 17:43:00.520 [INFO][6003] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208" Sep 12 17:43:00.522464 containerd[1641]: time="2025-09-12T17:43:00.522317104Z" level=info msg="TearDown network for sandbox \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\" successfully" Sep 12 17:43:00.551434 containerd[1641]: time="2025-09-12T17:43:00.551411515Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:00.551489 containerd[1641]: time="2025-09-12T17:43:00.551456658Z" level=info msg="RemovePodSandbox \"892c252070ff903f3f2ade8268bfc2e2d6821a4d0a39fb7a0875d8dc95b5d208\" returns successfully" Sep 12 17:43:00.551842 containerd[1641]: time="2025-09-12T17:43:00.551827554Z" level=info msg="StopPodSandbox for \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\"" Sep 12 17:43:00.613575 containerd[1641]: 2025-09-12 17:43:00.577 [WARNING][6024] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0", GenerateName:"calico-kube-controllers-c8d94b68-", Namespace:"calico-system", SelfLink:"", UID:"24866c61-e05d-457b-a1e7-0f1c845f8a0f", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8d94b68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427", Pod:"calico-kube-controllers-c8d94b68-lfb95", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a1f14dd3cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:00.613575 containerd[1641]: 2025-09-12 17:43:00.577 [INFO][6024] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Sep 12 17:43:00.613575 containerd[1641]: 2025-09-12 17:43:00.577 [INFO][6024] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" iface="eth0" netns="" Sep 12 17:43:00.613575 containerd[1641]: 2025-09-12 17:43:00.577 [INFO][6024] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Sep 12 17:43:00.613575 containerd[1641]: 2025-09-12 17:43:00.577 [INFO][6024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Sep 12 17:43:00.613575 containerd[1641]: 2025-09-12 17:43:00.590 [INFO][6032] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" HandleID="k8s-pod-network.8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Workload="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:43:00.613575 containerd[1641]: 2025-09-12 17:43:00.590 [INFO][6032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:00.613575 containerd[1641]: 2025-09-12 17:43:00.591 [INFO][6032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:00.613575 containerd[1641]: 2025-09-12 17:43:00.610 [WARNING][6032] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" HandleID="k8s-pod-network.8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Workload="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:43:00.613575 containerd[1641]: 2025-09-12 17:43:00.610 [INFO][6032] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" HandleID="k8s-pod-network.8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Workload="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:43:00.613575 containerd[1641]: 2025-09-12 17:43:00.610 [INFO][6032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:00.613575 containerd[1641]: 2025-09-12 17:43:00.612 [INFO][6024] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Sep 12 17:43:00.617324 containerd[1641]: time="2025-09-12T17:43:00.617297514Z" level=info msg="TearDown network for sandbox \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\" successfully" Sep 12 17:43:00.623722 containerd[1641]: time="2025-09-12T17:43:00.617327068Z" level=info msg="StopPodSandbox for \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\" returns successfully" Sep 12 17:43:00.623722 containerd[1641]: time="2025-09-12T17:43:00.617713204Z" level=info msg="RemovePodSandbox for \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\"" Sep 12 17:43:00.623722 containerd[1641]: time="2025-09-12T17:43:00.617731088Z" level=info msg="Forcibly stopping sandbox \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\"" Sep 12 17:43:00.690864 containerd[1641]: 2025-09-12 17:43:00.664 [WARNING][6046] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0", GenerateName:"calico-kube-controllers-c8d94b68-", Namespace:"calico-system", SelfLink:"", UID:"24866c61-e05d-457b-a1e7-0f1c845f8a0f", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8d94b68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"66095b81663efd20359d6a7edfbfa14360063e1483c7c9047946ee67cb814427", Pod:"calico-kube-controllers-c8d94b68-lfb95", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a1f14dd3cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:00.690864 containerd[1641]: 2025-09-12 17:43:00.665 [INFO][6046] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Sep 12 17:43:00.690864 containerd[1641]: 2025-09-12 17:43:00.665 [INFO][6046] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" iface="eth0" netns="" Sep 12 17:43:00.690864 containerd[1641]: 2025-09-12 17:43:00.665 [INFO][6046] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Sep 12 17:43:00.690864 containerd[1641]: 2025-09-12 17:43:00.665 [INFO][6046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Sep 12 17:43:00.690864 containerd[1641]: 2025-09-12 17:43:00.680 [INFO][6053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" HandleID="k8s-pod-network.8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Workload="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:43:00.690864 containerd[1641]: 2025-09-12 17:43:00.680 [INFO][6053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:00.690864 containerd[1641]: 2025-09-12 17:43:00.680 [INFO][6053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:00.690864 containerd[1641]: 2025-09-12 17:43:00.684 [WARNING][6053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" HandleID="k8s-pod-network.8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Workload="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:43:00.690864 containerd[1641]: 2025-09-12 17:43:00.684 [INFO][6053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" HandleID="k8s-pod-network.8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Workload="localhost-k8s-calico--kube--controllers--c8d94b68--lfb95-eth0" Sep 12 17:43:00.690864 containerd[1641]: 2025-09-12 17:43:00.685 [INFO][6053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:00.690864 containerd[1641]: 2025-09-12 17:43:00.688 [INFO][6046] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98" Sep 12 17:43:00.691548 containerd[1641]: time="2025-09-12T17:43:00.690900823Z" level=info msg="TearDown network for sandbox \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\" successfully" Sep 12 17:43:00.693724 containerd[1641]: time="2025-09-12T17:43:00.693706145Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:00.693776 containerd[1641]: time="2025-09-12T17:43:00.693744272Z" level=info msg="RemovePodSandbox \"8896ed386f04a1420f8ff87b1dc6e0b281691608d4752d37aa9c1fa234d7cd98\" returns successfully" Sep 12 17:43:00.694405 containerd[1641]: time="2025-09-12T17:43:00.694186170Z" level=info msg="StopPodSandbox for \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\"" Sep 12 17:43:00.744477 containerd[1641]: 2025-09-12 17:43:00.717 [WARNING][6069] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"21242ab1-1a34-4925-bcf3-2c7f923b75c1", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2", Pod:"coredns-7c65d6cfc9-7mxql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali61c8a5c4283", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:00.744477 containerd[1641]: 2025-09-12 17:43:00.718 [INFO][6069] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Sep 12 17:43:00.744477 containerd[1641]: 2025-09-12 17:43:00.718 [INFO][6069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" iface="eth0" netns="" Sep 12 17:43:00.744477 containerd[1641]: 2025-09-12 17:43:00.718 [INFO][6069] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Sep 12 17:43:00.744477 containerd[1641]: 2025-09-12 17:43:00.718 [INFO][6069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Sep 12 17:43:00.744477 containerd[1641]: 2025-09-12 17:43:00.732 [INFO][6076] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" HandleID="k8s-pod-network.341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Workload="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:43:00.744477 containerd[1641]: 2025-09-12 17:43:00.732 [INFO][6076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:43:00.744477 containerd[1641]: 2025-09-12 17:43:00.732 [INFO][6076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:00.744477 containerd[1641]: 2025-09-12 17:43:00.740 [WARNING][6076] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" HandleID="k8s-pod-network.341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Workload="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:43:00.744477 containerd[1641]: 2025-09-12 17:43:00.740 [INFO][6076] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" HandleID="k8s-pod-network.341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Workload="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:43:00.744477 containerd[1641]: 2025-09-12 17:43:00.741 [INFO][6076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:00.744477 containerd[1641]: 2025-09-12 17:43:00.743 [INFO][6069] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Sep 12 17:43:00.745334 containerd[1641]: time="2025-09-12T17:43:00.744504640Z" level=info msg="TearDown network for sandbox \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\" successfully" Sep 12 17:43:00.745334 containerd[1641]: time="2025-09-12T17:43:00.744520166Z" level=info msg="StopPodSandbox for \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\" returns successfully" Sep 12 17:43:00.745334 containerd[1641]: time="2025-09-12T17:43:00.744859325Z" level=info msg="RemovePodSandbox for \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\"" Sep 12 17:43:00.745334 containerd[1641]: time="2025-09-12T17:43:00.744880996Z" level=info msg="Forcibly stopping sandbox \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\"" Sep 12 17:43:00.802070 containerd[1641]: 2025-09-12 17:43:00.770 [WARNING][6094] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"21242ab1-1a34-4925-bcf3-2c7f923b75c1", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 42, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3eda09438c01fd518c026300ba3d3ba3adbd81723fe9dc8f52c33aba8f75d5d2", Pod:"coredns-7c65d6cfc9-7mxql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali61c8a5c4283", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:43:00.802070 containerd[1641]: 2025-09-12 17:43:00.770 [INFO][6094] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Sep 12 17:43:00.802070 containerd[1641]: 2025-09-12 17:43:00.770 [INFO][6094] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" iface="eth0" netns="" Sep 12 17:43:00.802070 containerd[1641]: 2025-09-12 17:43:00.770 [INFO][6094] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Sep 12 17:43:00.802070 containerd[1641]: 2025-09-12 17:43:00.770 [INFO][6094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Sep 12 17:43:00.802070 containerd[1641]: 2025-09-12 17:43:00.788 [INFO][6101] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" HandleID="k8s-pod-network.341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Workload="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:43:00.802070 containerd[1641]: 2025-09-12 17:43:00.789 [INFO][6101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:43:00.802070 containerd[1641]: 2025-09-12 17:43:00.789 [INFO][6101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:43:00.802070 containerd[1641]: 2025-09-12 17:43:00.795 [WARNING][6101] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" HandleID="k8s-pod-network.341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Workload="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:43:00.802070 containerd[1641]: 2025-09-12 17:43:00.795 [INFO][6101] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" HandleID="k8s-pod-network.341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Workload="localhost-k8s-coredns--7c65d6cfc9--7mxql-eth0" Sep 12 17:43:00.802070 containerd[1641]: 2025-09-12 17:43:00.797 [INFO][6101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:43:00.802070 containerd[1641]: 2025-09-12 17:43:00.800 [INFO][6094] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8" Sep 12 17:43:00.802070 containerd[1641]: time="2025-09-12T17:43:00.802039233Z" level=info msg="TearDown network for sandbox \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\" successfully" Sep 12 17:43:00.811202 containerd[1641]: time="2025-09-12T17:43:00.811169822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:43:00.811413 containerd[1641]: time="2025-09-12T17:43:00.811221251Z" level=info msg="RemovePodSandbox \"341c6775614a2a2963faa5128fac2e0910b0f6bf142a3b5e60870d3a018a83b8\" returns successfully" Sep 12 17:43:04.884750 kubelet[2895]: I0912 17:43:04.863457 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6vqsr" podStartSLOduration=42.393721308 podStartE2EDuration="51.818398576s" podCreationTimestamp="2025-09-12 17:42:13 +0000 UTC" firstStartedPulling="2025-09-12 17:42:41.840801157 +0000 UTC m=+43.791480187" lastFinishedPulling="2025-09-12 17:42:51.265478417 +0000 UTC m=+53.216157455" observedRunningTime="2025-09-12 17:42:52.475345732 +0000 UTC m=+54.426024770" watchObservedRunningTime="2025-09-12 17:43:04.818398576 +0000 UTC m=+66.769077609" Sep 12 17:43:08.446671 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:43:08.460085 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:43:08.460093 systemd-resolved[1540]: Flushed all caches. Sep 12 17:43:15.858166 systemd[1]: Started sshd@8-139.178.70.102:22-139.178.89.65:37364.service - OpenSSH per-connection server daemon (139.178.89.65:37364). Sep 12 17:43:16.036618 systemd[1]: run-containerd-runc-k8s.io-a1623582738cb046d87a84138b42c2b0a6ebec35ee504d16e02a58b413cb6eb6-runc.TMkRrA.mount: Deactivated successfully. Sep 12 17:43:16.217905 sshd[6192]: Accepted publickey for core from 139.178.89.65 port 37364 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:43:16.222032 sshd[6192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:16.252524 systemd-logind[1619]: New session 10 of user core. Sep 12 17:43:16.259863 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 12 17:43:17.366938 sshd[6192]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:17.377490 systemd[1]: sshd@8-139.178.70.102:22-139.178.89.65:37364.service: Deactivated successfully. Sep 12 17:43:17.379979 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:43:17.380111 systemd-logind[1619]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:43:17.381738 systemd-logind[1619]: Removed session 10. Sep 12 17:43:17.996030 kubelet[2895]: I0912 17:43:17.995681 2895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:43:18.433516 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:43:18.429615 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:43:18.429621 systemd-resolved[1540]: Flushed all caches. Sep 12 17:43:20.475830 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:43:20.476806 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:43:20.475836 systemd-resolved[1540]: Flushed all caches. Sep 12 17:43:22.418844 systemd[1]: Started sshd@9-139.178.70.102:22-139.178.89.65:49282.service - OpenSSH per-connection server daemon (139.178.89.65:49282). Sep 12 17:43:22.652588 sshd[6235]: Accepted publickey for core from 139.178.89.65 port 49282 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:43:22.654359 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:22.657926 systemd-logind[1619]: New session 11 of user core. Sep 12 17:43:22.659945 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:43:23.569356 sshd[6235]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:23.573461 systemd-logind[1619]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:43:23.576282 systemd[1]: sshd@9-139.178.70.102:22-139.178.89.65:49282.service: Deactivated successfully. 
Sep 12 17:43:23.589913 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:43:23.590448 systemd-logind[1619]: Removed session 11. Sep 12 17:43:26.427800 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:43:26.427805 systemd-resolved[1540]: Flushed all caches. Sep 12 17:43:26.428721 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:43:28.581773 systemd[1]: Started sshd@10-139.178.70.102:22-139.178.89.65:49290.service - OpenSSH per-connection server daemon (139.178.89.65:49290). Sep 12 17:43:28.742345 sshd[6271]: Accepted publickey for core from 139.178.89.65 port 49290 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:43:28.744396 sshd[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:28.749714 systemd-logind[1619]: New session 12 of user core. Sep 12 17:43:28.755852 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:43:29.345733 sshd[6271]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:29.350861 systemd[1]: sshd@10-139.178.70.102:22-139.178.89.65:49290.service: Deactivated successfully. Sep 12 17:43:29.359826 systemd-logind[1619]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:43:29.365897 systemd[1]: Started sshd@11-139.178.70.102:22-139.178.89.65:49296.service - OpenSSH per-connection server daemon (139.178.89.65:49296). Sep 12 17:43:29.366145 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:43:29.369768 systemd-logind[1619]: Removed session 12. Sep 12 17:43:29.414535 sshd[6286]: Accepted publickey for core from 139.178.89.65 port 49296 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:43:29.414909 sshd[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:29.418508 systemd-logind[1619]: New session 13 of user core. 
Sep 12 17:43:29.422933 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:43:29.675260 sshd[6286]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:29.678322 systemd[1]: sshd@11-139.178.70.102:22-139.178.89.65:49296.service: Deactivated successfully. Sep 12 17:43:29.684115 systemd-logind[1619]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:43:29.687081 systemd[1]: Started sshd@12-139.178.70.102:22-139.178.89.65:49298.service - OpenSSH per-connection server daemon (139.178.89.65:49298). Sep 12 17:43:29.687600 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:43:29.689217 systemd-logind[1619]: Removed session 13. Sep 12 17:43:29.764145 sshd[6298]: Accepted publickey for core from 139.178.89.65 port 49298 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:43:29.765031 sshd[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:29.769028 systemd-logind[1619]: New session 14 of user core. Sep 12 17:43:29.773811 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:43:30.071073 sshd[6298]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:30.072825 systemd-logind[1619]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:43:30.072989 systemd[1]: sshd@12-139.178.70.102:22-139.178.89.65:49298.service: Deactivated successfully. Sep 12 17:43:30.074980 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:43:30.075414 systemd-logind[1619]: Removed session 14. Sep 12 17:43:30.396084 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:43:30.397904 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:43:30.396088 systemd-resolved[1540]: Flushed all caches. Sep 12 17:43:32.444694 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:43:32.443890 systemd-resolved[1540]: Under memory pressure, flushing caches. 
Sep 12 17:43:32.443895 systemd-resolved[1540]: Flushed all caches. Sep 12 17:43:35.086887 systemd[1]: Started sshd@13-139.178.70.102:22-139.178.89.65:54298.service - OpenSSH per-connection server daemon (139.178.89.65:54298). Sep 12 17:43:35.194454 sshd[6359]: Accepted publickey for core from 139.178.89.65 port 54298 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:43:35.196090 sshd[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:35.199744 systemd-logind[1619]: New session 15 of user core. Sep 12 17:43:35.204892 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:43:36.176917 sshd[6359]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:36.195364 systemd[1]: sshd@13-139.178.70.102:22-139.178.89.65:54298.service: Deactivated successfully. Sep 12 17:43:36.199045 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:43:36.199789 systemd-logind[1619]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:43:36.201158 systemd-logind[1619]: Removed session 15. Sep 12 17:43:36.413605 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:43:36.412493 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:43:36.412502 systemd-resolved[1540]: Flushed all caches. Sep 12 17:43:38.460666 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:43:38.460702 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:43:38.460707 systemd-resolved[1540]: Flushed all caches. Sep 12 17:43:41.197878 systemd[1]: Started sshd@14-139.178.70.102:22-139.178.89.65:60994.service - OpenSSH per-connection server daemon (139.178.89.65:60994). 
Sep 12 17:43:41.384712 sshd[6399]: Accepted publickey for core from 139.178.89.65 port 60994 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:43:41.386669 sshd[6399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:41.395790 systemd-logind[1619]: New session 16 of user core. Sep 12 17:43:41.399881 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:43:42.156799 sshd[6399]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:42.166989 systemd[1]: sshd@14-139.178.70.102:22-139.178.89.65:60994.service: Deactivated successfully. Sep 12 17:43:42.171584 systemd-logind[1619]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:43:42.179919 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:43:42.181523 systemd-logind[1619]: Removed session 16. Sep 12 17:43:42.428910 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:43:42.428916 systemd-resolved[1540]: Flushed all caches. Sep 12 17:43:42.429677 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:43:44.475955 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:43:44.476852 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:43:44.475961 systemd-resolved[1540]: Flushed all caches. Sep 12 17:43:46.524517 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:43:46.524521 systemd-resolved[1540]: Flushed all caches. Sep 12 17:43:46.534729 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:43:47.168946 systemd[1]: Started sshd@15-139.178.70.102:22-139.178.89.65:60996.service - OpenSSH per-connection server daemon (139.178.89.65:60996). 
Sep 12 17:43:47.443853 sshd[6433]: Accepted publickey for core from 139.178.89.65 port 60996 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:43:47.459029 sshd[6433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:47.469884 systemd-logind[1619]: New session 17 of user core. Sep 12 17:43:47.473846 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:43:48.571828 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:43:48.572823 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:43:48.571835 systemd-resolved[1540]: Flushed all caches. Sep 12 17:43:50.228683 sshd[6433]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:50.251864 systemd[1]: sshd@15-139.178.70.102:22-139.178.89.65:60996.service: Deactivated successfully. Sep 12 17:43:50.257263 systemd-logind[1619]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:43:50.270946 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:43:50.271330 systemd-logind[1619]: Removed session 17. Sep 12 17:43:50.621752 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:43:50.619940 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:43:50.619944 systemd-resolved[1540]: Flushed all caches. Sep 12 17:43:52.667807 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:43:52.667812 systemd-resolved[1540]: Flushed all caches. Sep 12 17:43:52.668676 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:43:55.239801 systemd[1]: Started sshd@16-139.178.70.102:22-139.178.89.65:34910.service - OpenSSH per-connection server daemon (139.178.89.65:34910). 
Sep 12 17:43:55.360012 sshd[6467]: Accepted publickey for core from 139.178.89.65 port 34910 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:43:55.361217 sshd[6467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:55.365098 systemd-logind[1619]: New session 18 of user core. Sep 12 17:43:55.369042 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:43:55.914403 sshd[6467]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:55.919968 systemd[1]: Started sshd@17-139.178.70.102:22-139.178.89.65:34920.service - OpenSSH per-connection server daemon (139.178.89.65:34920). Sep 12 17:43:55.920480 systemd[1]: sshd@16-139.178.70.102:22-139.178.89.65:34910.service: Deactivated successfully. Sep 12 17:43:55.922870 systemd-logind[1619]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:43:55.923840 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:43:55.924910 systemd-logind[1619]: Removed session 18. Sep 12 17:43:55.949156 sshd[6478]: Accepted publickey for core from 139.178.89.65 port 34920 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:43:55.950289 sshd[6478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:55.953079 systemd-logind[1619]: New session 19 of user core. Sep 12 17:43:55.959968 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:43:56.413605 sshd[6478]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:56.421790 systemd[1]: Started sshd@18-139.178.70.102:22-139.178.89.65:34926.service - OpenSSH per-connection server daemon (139.178.89.65:34926). Sep 12 17:43:56.422485 systemd[1]: sshd@17-139.178.70.102:22-139.178.89.65:34920.service: Deactivated successfully. Sep 12 17:43:56.426291 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:43:56.428022 systemd-logind[1619]: Session 19 logged out. 
Waiting for processes to exit. Sep 12 17:43:56.431416 systemd-logind[1619]: Removed session 19. Sep 12 17:43:56.461641 sshd[6490]: Accepted publickey for core from 139.178.89.65 port 34926 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:43:56.462601 sshd[6490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:56.465176 systemd-logind[1619]: New session 20 of user core. Sep 12 17:43:56.471831 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:43:58.429260 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:43:58.464250 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:43:58.429270 systemd-resolved[1540]: Flushed all caches. Sep 12 17:43:59.837247 sshd[6490]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:59.872588 systemd[1]: Started sshd@19-139.178.70.102:22-139.178.89.65:34928.service - OpenSSH per-connection server daemon (139.178.89.65:34928). Sep 12 17:43:59.875581 systemd[1]: sshd@18-139.178.70.102:22-139.178.89.65:34926.service: Deactivated successfully. Sep 12 17:43:59.878466 systemd-logind[1619]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:43:59.879461 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:43:59.880194 systemd-logind[1619]: Removed session 20. Sep 12 17:44:00.165198 sshd[6508]: Accepted publickey for core from 139.178.89.65 port 34928 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:44:00.166461 sshd[6508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:00.183693 systemd-logind[1619]: New session 21 of user core. Sep 12 17:44:00.187799 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:44:00.477806 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:44:00.497304 systemd-resolved[1540]: Under memory pressure, flushing caches. 
Sep 12 17:44:00.497313 systemd-resolved[1540]: Flushed all caches. Sep 12 17:44:02.523893 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:44:02.621936 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:44:02.544175 systemd-resolved[1540]: Flushed all caches. Sep 12 17:44:04.572867 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:44:04.586224 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:44:04.586234 systemd-resolved[1540]: Flushed all caches. Sep 12 17:44:06.643734 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:44:06.637536 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:44:06.637552 systemd-resolved[1540]: Flushed all caches. Sep 12 17:44:08.673953 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:44:08.674524 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:44:08.674535 systemd-resolved[1540]: Flushed all caches. Sep 12 17:44:10.750166 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:44:10.733201 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:44:10.733207 systemd-resolved[1540]: Flushed all caches. Sep 12 17:44:12.786341 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:44:12.776067 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:44:12.776075 systemd-resolved[1540]: Flushed all caches. Sep 12 17:44:14.335225 kubelet[2895]: E0912 17:44:13.539361 2895 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.47s" Sep 12 17:44:14.814436 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:44:14.813708 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:44:14.813713 systemd-resolved[1540]: Flushed all caches. 
Sep 12 17:44:16.874124 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:44:17.027022 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:44:16.882966 systemd-resolved[1540]: Flushed all caches. Sep 12 17:44:17.210314 kubelet[2895]: E0912 17:44:16.847181 2895 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.727s" Sep 12 17:44:18.982856 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:44:18.979870 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:44:18.979881 systemd-resolved[1540]: Flushed all caches. Sep 12 17:44:20.510527 sshd[6508]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:20.521840 systemd[1]: Started sshd@20-139.178.70.102:22-139.178.89.65:56630.service - OpenSSH per-connection server daemon (139.178.89.65:56630). Sep 12 17:44:20.592521 systemd[1]: sshd@19-139.178.70.102:22-139.178.89.65:34928.service: Deactivated successfully. Sep 12 17:44:20.594982 systemd-logind[1619]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:44:20.596360 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:44:20.596864 systemd-logind[1619]: Removed session 21. Sep 12 17:44:20.957694 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:44:20.971203 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:44:20.971212 systemd-resolved[1540]: Flushed all caches. Sep 12 17:44:21.829163 sshd[6636]: Accepted publickey for core from 139.178.89.65 port 56630 ssh2: RSA SHA256:1UvyCBiEUqP33smigF817CNqNFRni2AjyTa15okTl5o Sep 12 17:44:21.838354 sshd[6636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:21.866983 systemd-logind[1619]: New session 22 of user core. Sep 12 17:44:21.874094 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 12 17:44:23.009847 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:44:23.008994 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:44:23.009014 systemd-resolved[1540]: Flushed all caches. Sep 12 17:44:23.432908 kubelet[2895]: E0912 17:44:23.432804 2895 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.107s" Sep 12 17:44:25.077327 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:44:25.077107 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:44:25.077120 systemd-resolved[1540]: Flushed all caches. Sep 12 17:44:27.108623 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:44:27.108509 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:44:27.108518 systemd-resolved[1540]: Flushed all caches. Sep 12 17:44:29.181430 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:44:29.311321 systemd-journald[1200]: Under memory pressure, flushing caches. Sep 12 17:44:29.181439 systemd-resolved[1540]: Flushed all caches. Sep 12 17:44:29.359739 kubelet[2895]: E0912 17:44:29.139443 2895 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.748s" Sep 12 17:44:31.049418 sshd[6636]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:31.133539 systemd[1]: sshd@20-139.178.70.102:22-139.178.89.65:56630.service: Deactivated successfully. Sep 12 17:44:31.137416 systemd-logind[1619]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:44:31.197739 kubelet[2895]: E0912 17:44:30.998945 2895 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.217s" Sep 12 17:44:31.138267 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:44:31.202685 systemd-journald[1200]: Under memory pressure, flushing caches. 
Sep 12 17:44:31.158456 systemd-logind[1619]: Removed session 22. Sep 12 17:44:31.198921 systemd-resolved[1540]: Under memory pressure, flushing caches. Sep 12 17:44:31.198927 systemd-resolved[1540]: Flushed all caches.