Sep 9 00:45:25.755776 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:41:17 -00 2025
Sep 9 00:45:25.755793 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29
Sep 9 00:45:25.755799 kernel: Disabled fast string operations
Sep 9 00:45:25.755803 kernel: BIOS-provided physical RAM map:
Sep 9 00:45:25.755807 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Sep 9 00:45:25.755811 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Sep 9 00:45:25.755817 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Sep 9 00:45:25.755822 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Sep 9 00:45:25.755826 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Sep 9 00:45:25.755830 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Sep 9 00:45:25.755834 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Sep 9 00:45:25.755838 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Sep 9 00:45:25.755843 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Sep 9 00:45:25.755847 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 9 00:45:25.755853 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Sep 9 00:45:25.755858 kernel: NX (Execute Disable) protection: active
Sep 9 00:45:25.755863 kernel: APIC: Static calls initialized
Sep 9 00:45:25.755868 kernel: SMBIOS 2.7 present.
Sep 9 00:45:25.755873 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Sep 9 00:45:25.755877 kernel: vmware: hypercall mode: 0x00
Sep 9 00:45:25.755882 kernel: Hypervisor detected: VMware
Sep 9 00:45:25.755887 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Sep 9 00:45:25.755893 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Sep 9 00:45:25.755897 kernel: vmware: using clock offset of 5475569267 ns
Sep 9 00:45:25.755902 kernel: tsc: Detected 3408.000 MHz processor
Sep 9 00:45:25.755907 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 00:45:25.755912 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 00:45:25.755917 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Sep 9 00:45:25.755922 kernel: total RAM covered: 3072M
Sep 9 00:45:25.755927 kernel: Found optimal setting for mtrr clean up
Sep 9 00:45:25.755939 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Sep 9 00:45:25.755947 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Sep 9 00:45:25.755952 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 00:45:25.755957 kernel: Using GB pages for direct mapping
Sep 9 00:45:25.755962 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:45:25.755967 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Sep 9 00:45:25.755972 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Sep 9 00:45:25.755977 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Sep 9 00:45:25.755982 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Sep 9 00:45:25.755987 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Sep 9 00:45:25.755994 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Sep 9 00:45:25.756000 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Sep 9 00:45:25.756005 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Sep 9 00:45:25.756011 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Sep 9 00:45:25.756016 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Sep 9 00:45:25.756022 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Sep 9 00:45:25.756027 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Sep 9 00:45:25.756033 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Sep 9 00:45:25.756038 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Sep 9 00:45:25.756051 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Sep 9 00:45:25.756056 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Sep 9 00:45:25.756061 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Sep 9 00:45:25.756067 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Sep 9 00:45:25.756072 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Sep 9 00:45:25.756077 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Sep 9 00:45:25.756084 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Sep 9 00:45:25.756089 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Sep 9 00:45:25.756094 kernel: system APIC only can use physical flat
Sep 9 00:45:25.756099 kernel: APIC: Switched APIC routing to: physical flat
Sep 9 00:45:25.756104 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 9 00:45:25.756109 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Sep 9 00:45:25.756114 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Sep 9 00:45:25.756119 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Sep 9 00:45:25.756124 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Sep 9 00:45:25.756131 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Sep 9 00:45:25.756136 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Sep 9 00:45:25.756141 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Sep 9 00:45:25.756146 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Sep 9 00:45:25.756151 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Sep 9 00:45:25.756156 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Sep 9 00:45:25.756161 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Sep 9 00:45:25.756166 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Sep 9 00:45:25.756171 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Sep 9 00:45:25.756176 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Sep 9 00:45:25.756181 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Sep 9 00:45:25.756187 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Sep 9 00:45:25.756192 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Sep 9 00:45:25.756198 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Sep 9 00:45:25.756202 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Sep 9 00:45:25.756207 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Sep 9 00:45:25.756213 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Sep 9 00:45:25.756218 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Sep 9 00:45:25.756223 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Sep 9 00:45:25.756228 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Sep 9 00:45:25.756233 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Sep 9 00:45:25.756239 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Sep 9 00:45:25.756244 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Sep 9 00:45:25.756249 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Sep 9 00:45:25.756254 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Sep 9 00:45:25.756259 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Sep 9 00:45:25.756264 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Sep 9 00:45:25.756269 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Sep 9 00:45:25.756274 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Sep 9 00:45:25.756279 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Sep 9 00:45:25.756285 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Sep 9 00:45:25.756291 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Sep 9 00:45:25.756296 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Sep 9 00:45:25.756301 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Sep 9 00:45:25.756306 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Sep 9 00:45:25.756311 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Sep 9 00:45:25.756316 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Sep 9 00:45:25.756321 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Sep 9 00:45:25.756326 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Sep 9 00:45:25.756331 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Sep 9 00:45:25.756336 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Sep 9 00:45:25.756342 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Sep 9 00:45:25.756347 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Sep 9 00:45:25.756352 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Sep 9 00:45:25.756357 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Sep 9 00:45:25.756362 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Sep 9 00:45:25.756368 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Sep 9 00:45:25.756372 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Sep 9 00:45:25.756378 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Sep 9 00:45:25.756382 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Sep 9 00:45:25.756388 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Sep 9 00:45:25.756394 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Sep 9 00:45:25.756399 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Sep 9 00:45:25.756404 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Sep 9 00:45:25.756414 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Sep 9 00:45:25.756419 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Sep 9 00:45:25.756425 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Sep 9 00:45:25.756430 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Sep 9 00:45:25.756436 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Sep 9 00:45:25.756441 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Sep 9 00:45:25.756448 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Sep 9 00:45:25.756453 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Sep 9 00:45:25.756459 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Sep 9 00:45:25.756464 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Sep 9 00:45:25.756470 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Sep 9 00:45:25.756475 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Sep 9 00:45:25.756481 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Sep 9 00:45:25.756486 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Sep 9 00:45:25.756491 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Sep 9 00:45:25.756496 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Sep 9 00:45:25.756503 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Sep 9 00:45:25.756509 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Sep 9 00:45:25.756514 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Sep 9 00:45:25.756519 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Sep 9 00:45:25.756524 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Sep 9 00:45:25.756530 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Sep 9 00:45:25.756535 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Sep 9 00:45:25.756541 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Sep 9 00:45:25.756546 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Sep 9 00:45:25.756551 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Sep 9 00:45:25.756558 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Sep 9 00:45:25.756563 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Sep 9 00:45:25.756569 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Sep 9 00:45:25.756574 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Sep 9 00:45:25.756579 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Sep 9 00:45:25.756585 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Sep 9 00:45:25.756590 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Sep 9 00:45:25.756596 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Sep 9 00:45:25.756601 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Sep 9 00:45:25.756606 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Sep 9 00:45:25.756613 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Sep 9 00:45:25.756618 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Sep 9 00:45:25.756624 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Sep 9 00:45:25.756629 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Sep 9 00:45:25.756634 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Sep 9 00:45:25.756639 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Sep 9 00:45:25.756645 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Sep 9 00:45:25.756650 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Sep 9 00:45:25.756655 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Sep 9 00:45:25.756661 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Sep 9 00:45:25.756667 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Sep 9 00:45:25.756673 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Sep 9 00:45:25.756678 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Sep 9 00:45:25.756684 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Sep 9 00:45:25.756689 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Sep 9 00:45:25.756694 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Sep 9 00:45:25.756700 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Sep 9 00:45:25.756705 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Sep 9 00:45:25.756710 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Sep 9 00:45:25.756716 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Sep 9 00:45:25.756722 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Sep 9 00:45:25.756728 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Sep 9 00:45:25.756733 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Sep 9 00:45:25.756739 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Sep 9 00:45:25.756744 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Sep 9 00:45:25.756749 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Sep 9 00:45:25.756755 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Sep 9 00:45:25.756760 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Sep 9 00:45:25.756765 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Sep 9 00:45:25.756771 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Sep 9 00:45:25.756776 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Sep 9 00:45:25.756782 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Sep 9 00:45:25.756788 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Sep 9 00:45:25.756793 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 9 00:45:25.756799 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 9 00:45:25.756804 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Sep 9 00:45:25.756810 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Sep 9 00:45:25.756816 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Sep 9 00:45:25.756821 kernel: Zone ranges:
Sep 9 00:45:25.756827 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 00:45:25.756833 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Sep 9 00:45:25.756839 kernel: Normal empty
Sep 9 00:45:25.756845 kernel: Movable zone start for each node
Sep 9 00:45:25.756850 kernel: Early memory node ranges
Sep 9 00:45:25.756856 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Sep 9 00:45:25.756861 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Sep 9 00:45:25.756867 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Sep 9 00:45:25.756872 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Sep 9 00:45:25.756878 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 00:45:25.756883 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Sep 9 00:45:25.756890 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Sep 9 00:45:25.756896 kernel: ACPI: PM-Timer IO Port: 0x1008
Sep 9 00:45:25.756901 kernel: system APIC only can use physical flat
Sep 9 00:45:25.756906 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Sep 9 00:45:25.756912 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 9 00:45:25.756918 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 9 00:45:25.756923 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 9 00:45:25.756928 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 9 00:45:25.756949 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 9 00:45:25.756958 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 9 00:45:25.756963 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 9 00:45:25.756969 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 9 00:45:25.756974 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 9 00:45:25.756980 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 9 00:45:25.756985 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 9 00:45:25.756991 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 9 00:45:25.756996 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 9 00:45:25.757002 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 9 00:45:25.757007 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 9 00:45:25.757014 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 9 00:45:25.757020 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Sep 9 00:45:25.757025 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Sep 9 00:45:25.757030 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Sep 9 00:45:25.757036 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Sep 9 00:45:25.757041 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Sep 9 00:45:25.757047 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Sep 9 00:45:25.757052 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Sep 9 00:45:25.757057 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Sep 9 00:45:25.757063 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Sep 9 00:45:25.757070 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Sep 9 00:45:25.757075 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Sep 9 00:45:25.757081 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Sep 9 00:45:25.757086 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Sep 9 00:45:25.757091 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Sep 9 00:45:25.757097 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Sep 9 00:45:25.757102 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Sep 9 00:45:25.757108 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Sep 9 00:45:25.757113 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Sep 9 00:45:25.757120 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Sep 9 00:45:25.757125 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Sep 9 00:45:25.757131 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Sep 9 00:45:25.757136 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Sep 9 00:45:25.757142 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Sep 9 00:45:25.757147 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Sep 9 00:45:25.757153 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Sep 9 00:45:25.757158 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Sep 9 00:45:25.757164 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Sep 9 00:45:25.757169 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Sep 9 00:45:25.757176 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Sep 9 00:45:25.757181 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Sep 9 00:45:25.757187 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Sep 9 00:45:25.757192 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Sep 9 00:45:25.757198 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Sep 9 00:45:25.757203 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Sep 9 00:45:25.757209 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Sep 9 00:45:25.757214 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Sep 9 00:45:25.757220 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Sep 9 00:45:25.757226 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Sep 9 00:45:25.757232 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Sep 9 00:45:25.757237 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Sep 9 00:45:25.757243 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Sep 9 00:45:25.757248 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Sep 9 00:45:25.757253 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Sep 9 00:45:25.757259 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Sep 9 00:45:25.757264 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Sep 9 00:45:25.757270 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Sep 9 00:45:25.757275 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Sep 9 00:45:25.757282 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Sep 9 00:45:25.757288 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Sep 9 00:45:25.757293 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Sep 9 00:45:25.757298 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Sep 9 00:45:25.757304 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Sep 9 00:45:25.757310 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Sep 9 00:45:25.757315 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Sep 9 00:45:25.757321 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Sep 9 00:45:25.757326 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Sep 9 00:45:25.757331 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Sep 9 00:45:25.757338 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Sep 9 00:45:25.757344 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Sep 9 00:45:25.757349 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Sep 9 00:45:25.757355 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Sep 9 00:45:25.757360 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Sep 9 00:45:25.757365 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Sep 9 00:45:25.757371 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Sep 9 00:45:25.757376 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Sep 9 00:45:25.757382 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Sep 9 00:45:25.757388 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Sep 9 00:45:25.757394 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Sep 9 00:45:25.757400 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Sep 9 00:45:25.757405 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Sep 9 00:45:25.757410 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Sep 9 00:45:25.757416 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Sep 9 00:45:25.757421 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Sep 9 00:45:25.757427 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Sep 9 00:45:25.757432 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Sep 9 00:45:25.757438 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Sep 9 00:45:25.757445 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Sep 9 00:45:25.757450 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Sep 9 00:45:25.757456 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Sep 9 00:45:25.757461 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Sep 9 00:45:25.757466 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Sep 9 00:45:25.757472 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Sep 9 00:45:25.757478 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Sep 9 00:45:25.757483 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Sep 9 00:45:25.757489 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Sep 9 00:45:25.757494 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Sep 9 00:45:25.757501 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Sep 9 00:45:25.757506 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Sep 9 00:45:25.757512 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Sep 9 00:45:25.757517 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Sep 9 00:45:25.757523 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Sep 9 00:45:25.757528 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Sep 9 00:45:25.757534 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Sep 9 00:45:25.757539 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Sep 9 00:45:25.757545 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Sep 9 00:45:25.757552 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Sep 9 00:45:25.757557 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Sep 9 00:45:25.757563 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Sep 9 00:45:25.757568 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Sep 9 00:45:25.757574 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Sep 9 00:45:25.757579 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Sep 9 00:45:25.757585 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Sep 9 00:45:25.757591 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Sep 9 00:45:25.757596 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Sep 9 00:45:25.757602 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Sep 9 00:45:25.757608 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Sep 9 00:45:25.757614 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Sep 9 00:45:25.757621 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Sep 9 00:45:25.757631 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Sep 9 00:45:25.757640 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Sep 9 00:45:25.757650 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Sep 9 00:45:25.757660 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Sep 9 00:45:25.757671 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Sep 9 00:45:25.757680 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 00:45:25.757697 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Sep 9 00:45:25.757706 kernel: TSC deadline timer available
Sep 9 00:45:25.757715 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Sep 9 00:45:25.757723 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Sep 9 00:45:25.757731 kernel: Booting paravirtualized kernel on VMware hypervisor
Sep 9 00:45:25.757740 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 00:45:25.757749 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Sep 9 00:45:25.757755 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144
Sep 9 00:45:25.757761 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152
Sep 9 00:45:25.757767 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Sep 9 00:45:25.757775 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Sep 9 00:45:25.757780 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Sep 9 00:45:25.757787 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Sep 9 00:45:25.757794 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Sep 9 00:45:25.757808 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Sep 9 00:45:25.757815 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Sep 9 00:45:25.757822 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Sep 9 00:45:25.757830 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Sep 9 00:45:25.757842 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Sep 9 00:45:25.757852 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Sep 9 00:45:25.757858 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Sep 9 00:45:25.757863 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Sep 9 00:45:25.757869 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Sep 9 00:45:25.757875 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Sep 9 00:45:25.757881 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Sep 9 00:45:25.757888 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29
Sep 9 00:45:25.757896 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:45:25.757901 kernel: random: crng init done
Sep 9 00:45:25.757907 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Sep 9 00:45:25.757913 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Sep 9 00:45:25.757920 kernel: printk: log_buf_len min size: 262144 bytes
Sep 9 00:45:25.757926 kernel: printk: log_buf_len: 1048576 bytes
Sep 9 00:45:25.757931 kernel: printk: early log buf free: 239648(91%)
Sep 9 00:45:25.757951 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:45:25.757958 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 9 00:45:25.757966 kernel: Fallback order for Node 0: 0
Sep 9 00:45:25.757972 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Sep 9 00:45:25.757978 kernel: Policy zone: DMA32
Sep 9 00:45:25.757988 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:45:25.757995 kernel: Memory: 1936360K/2096628K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42880K init, 2316K bss, 160008K reserved, 0K cma-reserved)
Sep 9 00:45:25.758003 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Sep 9 00:45:25.758011 kernel: ftrace: allocating 37969 entries in 149 pages
Sep 9 00:45:25.758017 kernel: ftrace: allocated 149 pages with 4 groups
Sep 9 00:45:25.758023 kernel: Dynamic Preempt: voluntary
Sep 9 00:45:25.758030 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:45:25.758036 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:45:25.758042 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Sep 9 00:45:25.758049 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:45:25.758057 kernel: Rude variant of Tasks RCU enabled.
Sep 9 00:45:25.758063 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:45:25.758074 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:45:25.758080 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Sep 9 00:45:25.758086 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Sep 9 00:45:25.758092 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Sep 9 00:45:25.758098 kernel: Console: colour VGA+ 80x25
Sep 9 00:45:25.758106 kernel: printk: console [tty0] enabled
Sep 9 00:45:25.758112 kernel: printk: console [ttyS0] enabled
Sep 9 00:45:25.758120 kernel: ACPI: Core revision 20230628
Sep 9 00:45:25.758127 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Sep 9 00:45:25.758135 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 00:45:25.758141 kernel: x2apic enabled
Sep 9 00:45:25.758146 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 00:45:25.758152 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 00:45:25.758159 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Sep 9 00:45:25.758166 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Sep 9 00:45:25.758172 kernel: Disabled fast string operations
Sep 9 00:45:25.758178 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 9 00:45:25.758184 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 9 00:45:25.758192 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 00:45:25.758198 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 9 00:45:25.758204 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 9 00:45:25.758210 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 9 00:45:25.758216 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 9 00:45:25.758222 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 9 00:45:25.758228 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 00:45:25.758234 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 00:45:25.758240 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 9 00:45:25.758247 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 9 00:45:25.758253 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 9 00:45:25.758259 kernel: active return thunk: its_return_thunk
Sep 9 00:45:25.758265 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 9 00:45:25.758271 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 00:45:25.758277 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 00:45:25.758282 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 00:45:25.758288 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 00:45:25.758294 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 9 00:45:25.758302 kernel: Freeing SMP alternatives memory: 32K
Sep 9 00:45:25.758308 kernel: pid_max: default: 131072 minimum: 1024
Sep 9 00:45:25.758314 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 9 00:45:25.758320 kernel: landlock: Up and running.
Sep 9 00:45:25.758326 kernel: SELinux: Initializing.
Sep 9 00:45:25.758332 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 9 00:45:25.758338 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 9 00:45:25.758344 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 9 00:45:25.758350 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 9 00:45:25.758357 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 9 00:45:25.758363 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 9 00:45:25.758369 kernel: Performance Events: Skylake events, core PMU driver.
Sep 9 00:45:25.758375 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Sep 9 00:45:25.758381 kernel: core: CPUID marked event: 'instructions' unavailable
Sep 9 00:45:25.758388 kernel: core: CPUID marked event: 'bus cycles' unavailable
Sep 9 00:45:25.758394 kernel: core: CPUID marked event: 'cache references' unavailable
Sep 9 00:45:25.758400 kernel: core: CPUID marked event: 'cache misses' unavailable
Sep 9 00:45:25.758407 kernel: core: CPUID marked event: 'branch instructions' unavailable
Sep 9 00:45:25.758412 kernel: core: CPUID marked event: 'branch misses' unavailable
Sep 9 00:45:25.758418 kernel: ... version: 1
Sep 9 00:45:25.758424 kernel: ... bit width: 48
Sep 9 00:45:25.758430 kernel: ... generic registers: 4
Sep 9 00:45:25.758436 kernel: ... value mask: 0000ffffffffffff
Sep 9 00:45:25.758442 kernel: ... max period: 000000007fffffff
Sep 9 00:45:25.758448 kernel: ... fixed-purpose events: 0
Sep 9 00:45:25.758453 kernel: ... event mask: 000000000000000f
Sep 9 00:45:25.758460 kernel: signal: max sigframe size: 1776
Sep 9 00:45:25.758466 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:45:25.758473 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 00:45:25.758479 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 9 00:45:25.758485 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:45:25.758491 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 00:45:25.758496 kernel: ....
node #0, CPUs: #1 Sep 9 00:45:25.758502 kernel: Disabled fast string operations Sep 9 00:45:25.758508 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Sep 9 00:45:25.758514 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Sep 9 00:45:25.758521 kernel: smp: Brought up 1 node, 2 CPUs Sep 9 00:45:25.758527 kernel: smpboot: Max logical packages: 128 Sep 9 00:45:25.758533 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Sep 9 00:45:25.758538 kernel: devtmpfs: initialized Sep 9 00:45:25.758544 kernel: x86/mm: Memory block size: 128MB Sep 9 00:45:25.758551 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Sep 9 00:45:25.758557 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 00:45:25.758563 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Sep 9 00:45:25.758569 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 00:45:25.758576 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 00:45:25.758582 kernel: audit: initializing netlink subsys (disabled) Sep 9 00:45:25.758588 kernel: audit: type=2000 audit(1757378724.096:1): state=initialized audit_enabled=0 res=1 Sep 9 00:45:25.758594 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 00:45:25.758600 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 00:45:25.758605 kernel: cpuidle: using governor menu Sep 9 00:45:25.758611 kernel: Simple Boot Flag at 0x36 set to 0x80 Sep 9 00:45:25.758617 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 00:45:25.758623 kernel: dca service started, version 1.12.1 Sep 9 00:45:25.758630 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Sep 9 00:45:25.758636 kernel: PCI: Using configuration type 1 for base access Sep 9 00:45:25.758642 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Sep 9 00:45:25.758649 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 00:45:25.758655 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 00:45:25.758660 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 00:45:25.758666 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 00:45:25.758672 kernel: ACPI: Added _OSI(Module Device) Sep 9 00:45:25.758678 kernel: ACPI: Added _OSI(Processor Device) Sep 9 00:45:25.758685 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 00:45:25.758691 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 00:45:25.758697 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Sep 9 00:45:25.758703 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 9 00:45:25.758709 kernel: ACPI: Interpreter enabled Sep 9 00:45:25.758715 kernel: ACPI: PM: (supports S0 S1 S5) Sep 9 00:45:25.758721 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 00:45:25.758727 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 00:45:25.758733 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 00:45:25.758740 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Sep 9 00:45:25.758746 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Sep 9 00:45:25.758842 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 00:45:25.758902 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Sep 9 00:45:25.758979 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Sep 9 00:45:25.758989 kernel: PCI host bridge to bus 0000:00 Sep 9 00:45:25.759047 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 00:45:25.759098 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Sep 9 00:45:25.759144 kernel: 
pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 9 00:45:25.759194 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 9 00:45:25.759249 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Sep 9 00:45:25.759298 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Sep 9 00:45:25.759359 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Sep 9 00:45:25.759420 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Sep 9 00:45:25.759478 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Sep 9 00:45:25.759536 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Sep 9 00:45:25.759591 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Sep 9 00:45:25.759644 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Sep 9 00:45:25.759696 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Sep 9 00:45:25.759750 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Sep 9 00:45:25.759804 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Sep 9 00:45:25.759861 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Sep 9 00:45:25.759913 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Sep 9 00:45:25.760127 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Sep 9 00:45:25.760187 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Sep 9 00:45:25.760240 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Sep 9 00:45:25.760296 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Sep 9 00:45:25.760355 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Sep 9 00:45:25.760408 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Sep 9 00:45:25.760460 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Sep 9 00:45:25.760512 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Sep 9 00:45:25.760562 kernel: 
pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Sep 9 00:45:25.760613 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 00:45:25.760673 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Sep 9 00:45:25.760736 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.760790 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.760846 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.760899 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.760978 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.761281 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.761344 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.761398 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.761455 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.761508 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.761566 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.761623 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.761679 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.761739 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.761797 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.761850 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.761906 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.761996 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762054 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.762107 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762163 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 
0x060400 Sep 9 00:45:25.762217 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762274 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.762330 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762386 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.762440 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762495 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.762548 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762603 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.762659 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762720 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.762773 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762828 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.762881 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762966 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.763025 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.764070 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.764134 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.764194 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.764248 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.764307 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.764365 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.764423 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.764478 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.764535 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 
class 0x060400 Sep 9 00:45:25.764588 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.764647 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.764700 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.764759 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.764813 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.764872 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.764927 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.766028 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.766089 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.766153 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.766207 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.766264 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.766317 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.766375 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.766427 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.766487 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.766540 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.766597 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.766650 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.766714 kernel: pci_bus 0000:01: extended config space not accessible Sep 9 00:45:25.766771 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 9 00:45:25.766826 kernel: pci_bus 0000:02: extended config space not accessible Sep 9 00:45:25.766837 kernel: acpiphp: Slot [32] registered Sep 9 00:45:25.766844 kernel: acpiphp: Slot [33] registered Sep 9 00:45:25.766850 kernel: acpiphp: 
Slot [34] registered Sep 9 00:45:25.766856 kernel: acpiphp: Slot [35] registered Sep 9 00:45:25.766862 kernel: acpiphp: Slot [36] registered Sep 9 00:45:25.766868 kernel: acpiphp: Slot [37] registered Sep 9 00:45:25.766873 kernel: acpiphp: Slot [38] registered Sep 9 00:45:25.766879 kernel: acpiphp: Slot [39] registered Sep 9 00:45:25.766887 kernel: acpiphp: Slot [40] registered Sep 9 00:45:25.766893 kernel: acpiphp: Slot [41] registered Sep 9 00:45:25.766898 kernel: acpiphp: Slot [42] registered Sep 9 00:45:25.766904 kernel: acpiphp: Slot [43] registered Sep 9 00:45:25.766910 kernel: acpiphp: Slot [44] registered Sep 9 00:45:25.766916 kernel: acpiphp: Slot [45] registered Sep 9 00:45:25.766922 kernel: acpiphp: Slot [46] registered Sep 9 00:45:25.766928 kernel: acpiphp: Slot [47] registered Sep 9 00:45:25.766941 kernel: acpiphp: Slot [48] registered Sep 9 00:45:25.766948 kernel: acpiphp: Slot [49] registered Sep 9 00:45:25.766955 kernel: acpiphp: Slot [50] registered Sep 9 00:45:25.766961 kernel: acpiphp: Slot [51] registered Sep 9 00:45:25.766967 kernel: acpiphp: Slot [52] registered Sep 9 00:45:25.766973 kernel: acpiphp: Slot [53] registered Sep 9 00:45:25.766979 kernel: acpiphp: Slot [54] registered Sep 9 00:45:25.766984 kernel: acpiphp: Slot [55] registered Sep 9 00:45:25.766990 kernel: acpiphp: Slot [56] registered Sep 9 00:45:25.766996 kernel: acpiphp: Slot [57] registered Sep 9 00:45:25.767002 kernel: acpiphp: Slot [58] registered Sep 9 00:45:25.767009 kernel: acpiphp: Slot [59] registered Sep 9 00:45:25.767015 kernel: acpiphp: Slot [60] registered Sep 9 00:45:25.767021 kernel: acpiphp: Slot [61] registered Sep 9 00:45:25.767027 kernel: acpiphp: Slot [62] registered Sep 9 00:45:25.767033 kernel: acpiphp: Slot [63] registered Sep 9 00:45:25.767088 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Sep 9 00:45:25.767140 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 9 00:45:25.767192 kernel: pci 0000:00:11.0: bridge window 
[mem 0xfd600000-0xfdffffff] Sep 9 00:45:25.767243 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 9 00:45:25.767298 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Sep 9 00:45:25.767351 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Sep 9 00:45:25.767403 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Sep 9 00:45:25.767454 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Sep 9 00:45:25.767505 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Sep 9 00:45:25.767626 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Sep 9 00:45:25.767685 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Sep 9 00:45:25.767741 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Sep 9 00:45:25.767794 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Sep 9 00:45:25.767866 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.767927 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 9 00:45:25.770088 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 9 00:45:25.770152 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 9 00:45:25.770223 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 9 00:45:25.770294 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 9 00:45:25.770349 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 9 00:45:25.770401 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 9 00:45:25.770455 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 9 00:45:25.770511 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 9 00:45:25.770570 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 9 00:45:25.770636 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 9 00:45:25.770692 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 9 00:45:25.770766 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 9 00:45:25.770832 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 9 00:45:25.770888 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 9 00:45:25.771093 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 9 00:45:25.771170 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 9 00:45:25.771227 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 9 00:45:25.771295 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 9 00:45:25.771356 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 9 00:45:25.771415 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 9 00:45:25.771475 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 9 00:45:25.771528 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 9 00:45:25.771581 kernel: pci 0000:00:15.6: bridge window [mem 
0xe6400000-0xe64fffff 64bit pref] Sep 9 00:45:25.771638 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 9 00:45:25.771689 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 9 00:45:25.771741 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 9 00:45:25.771801 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Sep 9 00:45:25.771857 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Sep 9 00:45:25.771910 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Sep 9 00:45:25.773093 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Sep 9 00:45:25.773156 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Sep 9 00:45:25.773216 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Sep 9 00:45:25.773271 kernel: pci 0000:0b:00.0: supports D1 D2 Sep 9 00:45:25.773326 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 9 00:45:25.773379 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 9 00:45:25.773433 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 9 00:45:25.773486 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 9 00:45:25.773538 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 9 00:45:25.773594 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 9 00:45:25.773650 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 9 00:45:25.773709 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 9 00:45:25.773763 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 9 00:45:25.773819 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 9 00:45:25.773871 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 9 00:45:25.773924 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 9 00:45:25.773993 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 9 00:45:25.774053 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 9 00:45:25.774106 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 9 00:45:25.774159 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 9 00:45:25.774215 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 9 00:45:25.774279 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 9 00:45:25.774332 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 9 00:45:25.774387 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 9 00:45:25.774440 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 9 00:45:25.774496 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 9 00:45:25.774552 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 9 00:45:25.774604 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 9 00:45:25.774657 kernel: pci 0000:00:16.6: bridge window [mem 
0xe6300000-0xe63fffff 64bit pref] Sep 9 00:45:25.774712 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 9 00:45:25.774765 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 9 00:45:25.774818 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 9 00:45:25.774872 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 9 00:45:25.774927 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 9 00:45:25.775080 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 9 00:45:25.775135 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 9 00:45:25.775190 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 9 00:45:25.775244 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 9 00:45:25.775303 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 9 00:45:25.775356 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 9 00:45:25.775421 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 9 00:45:25.775495 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 9 00:45:25.775560 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 9 00:45:25.775624 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 9 00:45:25.775697 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 9 00:45:25.775760 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 9 00:45:25.775818 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 9 00:45:25.775882 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 9 00:45:25.779979 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 9 00:45:25.780076 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 9 00:45:25.780140 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 9 00:45:25.780211 kernel: pci 0000:00:17.5: bridge window [mem 
0xfbf00000-0xfbffffff] Sep 9 00:45:25.780275 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 9 00:45:25.780355 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 9 00:45:25.780427 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 9 00:45:25.780511 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 9 00:45:25.780594 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 9 00:45:25.780678 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 9 00:45:25.780752 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 9 00:45:25.780825 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 9 00:45:25.780900 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 9 00:45:25.782982 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 9 00:45:25.783040 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 9 00:45:25.783096 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 9 00:45:25.783153 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 9 00:45:25.783220 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 9 00:45:25.783281 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 9 00:45:25.783336 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 9 00:45:25.783389 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 9 00:45:25.783441 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 9 00:45:25.783495 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 9 00:45:25.783548 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 9 00:45:25.783604 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 9 00:45:25.783659 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 9 00:45:25.783725 kernel: pci 0000:00:18.4: bridge window [mem 
0xfc200000-0xfc2fffff] Sep 9 00:45:25.783779 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 9 00:45:25.783833 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 9 00:45:25.783886 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 9 00:45:25.783948 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 9 00:45:25.784007 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 9 00:45:25.784063 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 9 00:45:25.784115 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 9 00:45:25.784169 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 9 00:45:25.784221 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 9 00:45:25.784274 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 9 00:45:25.784284 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Sep 9 00:45:25.784290 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Sep 9 00:45:25.784296 kernel: ACPI: PCI: Interrupt link LNKB disabled Sep 9 00:45:25.784304 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 00:45:25.784310 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Sep 9 00:45:25.784316 kernel: iommu: Default domain type: Translated Sep 9 00:45:25.784323 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 00:45:25.784329 kernel: PCI: Using ACPI for IRQ routing Sep 9 00:45:25.784335 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 00:45:25.784341 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Sep 9 00:45:25.784347 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Sep 9 00:45:25.784400 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Sep 9 00:45:25.784455 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Sep 9 00:45:25.784506 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: 
decodes=io+mem,owns=io+mem,locks=none
Sep 9 00:45:25.784515 kernel: vgaarb: loaded
Sep 9 00:45:25.784522 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Sep 9 00:45:25.784528 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Sep 9 00:45:25.784534 kernel: clocksource: Switched to clocksource tsc-early
Sep 9 00:45:25.784540 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:45:25.784547 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:45:25.784553 kernel: pnp: PnP ACPI init
Sep 9 00:45:25.784612 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Sep 9 00:45:25.784662 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Sep 9 00:45:25.784710 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Sep 9 00:45:25.784761 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Sep 9 00:45:25.784820 kernel: pnp 00:06: [dma 2]
Sep 9 00:45:25.784876 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Sep 9 00:45:25.787947 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Sep 9 00:45:25.788045 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Sep 9 00:45:25.788057 kernel: pnp: PnP ACPI: found 8 devices
Sep 9 00:45:25.788069 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 00:45:25.788081 kernel: NET: Registered PF_INET protocol family
Sep 9 00:45:25.788092 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:45:25.788102 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 9 00:45:25.788109 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:45:25.788115 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 9 00:45:25.788125 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 9 00:45:25.788131 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 9 00:45:25.788140 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 9 00:45:25.788149 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 9 00:45:25.788158 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:45:25.788169 kernel: NET: Registered PF_XDP protocol family
Sep 9 00:45:25.788263 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Sep 9 00:45:25.788331 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 9 00:45:25.788400 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 9 00:45:25.788479 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 9 00:45:25.788551 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 9 00:45:25.788642 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Sep 9 00:45:25.788735 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Sep 9 00:45:25.788816 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Sep 9 00:45:25.788878 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Sep 9 00:45:25.788983 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Sep 9 00:45:25.789057 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Sep 9 00:45:25.789141 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Sep 9 00:45:25.789221 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Sep 9 00:45:25.789281 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Sep 9 00:45:25.789335 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Sep 9 00:45:25.789416 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Sep 9 00:45:25.789507 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Sep 9 00:45:25.789574 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Sep 9 00:45:25.789630 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Sep 9 00:45:25.789687 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Sep 9 00:45:25.789768 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Sep 9 00:45:25.789829 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Sep 9 00:45:25.789908 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Sep 9 00:45:25.792825 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
Sep 9 00:45:25.792899 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
Sep 9 00:45:25.793028 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.793105 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.793172 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.793236 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.793298 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.793354 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.793411 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.793478 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.793554 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.793612 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.793673 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.793756 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.793834 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.793898 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.793985 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.794046 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.794110 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.794179 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.794253 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.794326 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.794391 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.794445 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.794499 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.794557 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.794625 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.794707 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.794777 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.794832 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.794889 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.795296 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.795370 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.795443 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.795517 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.795602 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.795659 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.795718 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.795784 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.795844 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.795913 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.796013 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.796086 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.796156 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.796221 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.796284 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.796349 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.796415 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.796479 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.796542 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.796615 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.796707 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.796781 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.796839 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.796893 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.797018 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.797097 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.797161 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.797216 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.797308 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.797424 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.797496 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.797570 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.797627 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.797701 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.797781 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.797853 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.797908 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.798033 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.798115 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.798191 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.798269 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.798324 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.798378 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.798432 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.798485 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.798548 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.798629 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.798694 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.798757 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.798816 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.798874 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.800489 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.800555 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.800617 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Sep 9 00:45:25.800675 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Sep 9 00:45:25.800735 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 9 00:45:25.800793 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Sep 9 00:45:25.800859 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Sep 9 00:45:25.800912 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Sep 9 00:45:25.801003 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Sep 9 00:45:25.801075 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref]
Sep 9 00:45:25.801134 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Sep 9 00:45:25.801188 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Sep 9 00:45:25.801242 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Sep 9 00:45:25.801295 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Sep 9 00:45:25.801370 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Sep 9 00:45:25.801425 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Sep 9 00:45:25.801505 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Sep 9 00:45:25.801571 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Sep 9 00:45:25.801633 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Sep 9 00:45:25.801694 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Sep 9 00:45:25.801748 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Sep 9 00:45:25.801827 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Sep 9 00:45:25.801887 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Sep 9 00:45:25.802002 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Sep 9 00:45:25.802058 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Sep 9 00:45:25.802117 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Sep 9 00:45:25.802171 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Sep 9 00:45:25.802244 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Sep 9 00:45:25.802312 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Sep 9 00:45:25.802400 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Sep 9 00:45:25.802468 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Sep 9 00:45:25.802527 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Sep 9 00:45:25.802580 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Sep 9 00:45:25.802633 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Sep 9 00:45:25.802688 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Sep 9 00:45:25.802775 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Sep 9 00:45:25.802853 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Sep 9 00:45:25.802921 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
Sep 9 00:45:25.803005 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Sep 9 00:45:25.803059 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Sep 9 00:45:25.803117 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Sep 9 00:45:25.803170 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Sep 9 00:45:25.803242 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Sep 9 00:45:25.803297 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Sep 9 00:45:25.803363 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Sep 9 00:45:25.803448 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Sep 9 00:45:25.803523 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Sep 9 00:45:25.803579 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Sep 9 00:45:25.803632 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Sep 9 00:45:25.803686 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Sep 9 00:45:25.803762 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Sep 9 00:45:25.803824 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Sep 9 00:45:25.803881 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Sep 9 00:45:25.804002 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Sep 9 00:45:25.804060 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Sep 9 00:45:25.804114 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Sep 9 00:45:25.804174 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Sep 9 00:45:25.804228 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Sep 9 00:45:25.804302 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Sep 9 00:45:25.804372 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Sep 9 00:45:25.804425 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Sep 9 00:45:25.804478 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Sep 9 00:45:25.804534 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Sep 9 00:45:25.804588 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Sep 9 00:45:25.804657 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Sep 9 00:45:25.804734 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Sep 9 00:45:25.804795 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Sep 9 00:45:25.804849 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Sep 9 00:45:25.804906 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Sep 9 00:45:25.805032 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Sep 9 00:45:25.805090 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Sep 9 00:45:25.805143 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Sep 9 00:45:25.805227 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Sep 9 00:45:25.805294 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Sep 9 00:45:25.805348 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Sep 9 00:45:25.805400 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Sep 9 00:45:25.805452 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Sep 9 00:45:25.805546 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Sep 9 00:45:25.805627 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Sep 9 00:45:25.805691 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Sep 9 00:45:25.805748 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Sep 9 00:45:25.805801 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Sep 9 00:45:25.805855 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Sep 9 00:45:25.805910 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Sep 9 00:45:25.806051 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Sep 9 00:45:25.806128 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Sep 9 00:45:25.806191 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Sep 9 00:45:25.806244 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Sep 9 00:45:25.806301 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Sep 9 00:45:25.806355 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Sep 9 00:45:25.806407 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Sep 9 00:45:25.806460 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Sep 9 00:45:25.806538 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Sep 9 00:45:25.806600 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Sep 9 00:45:25.806670 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Sep 9 00:45:25.806743 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Sep 9 00:45:25.806807 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Sep 9 00:45:25.806865 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Sep 9 00:45:25.806917 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Sep 9 00:45:25.807024 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Sep 9 00:45:25.807102 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Sep 9 00:45:25.807175 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Sep 9 00:45:25.807236 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Sep 9 00:45:25.807291 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Sep 9 00:45:25.807345 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Sep 9 00:45:25.807398 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Sep 9 00:45:25.807452 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Sep 9 00:45:25.807524 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Sep 9 00:45:25.807597 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Sep 9 00:45:25.807660 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Sep 9 00:45:25.807731 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Sep 9 00:45:25.807786 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Sep 9 00:45:25.807840 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Sep 9 00:45:25.807894 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Sep 9 00:45:25.808012 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Sep 9 00:45:25.808084 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Sep 9 00:45:25.808153 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Sep 9 00:45:25.808215 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Sep 9 00:45:25.808269 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
Sep 9 00:45:25.808317 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window]
Sep 9 00:45:25.808375 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window]
Sep 9 00:45:25.808447 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window]
Sep 9 00:45:25.808516 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window]
Sep 9 00:45:25.808572 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff]
Sep 9 00:45:25.808630 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
Sep 9 00:45:25.808688 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
Sep 9 00:45:25.808737 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
Sep 9 00:45:25.808801 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window]
Sep 9 00:45:25.808857 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window]
Sep 9 00:45:25.808909 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window]
Sep 9 00:45:25.808974 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window]
Sep 9 00:45:25.809031 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff]
Sep 9 00:45:25.809085 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
Sep 9 00:45:25.809150 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
Sep 9 00:45:25.809205 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff]
Sep 9 00:45:25.809274 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
Sep 9 00:45:25.809330 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
Sep 9 00:45:25.809382 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff]
Sep 9 00:45:25.809434 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
Sep 9 00:45:25.809501 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
Sep 9 00:45:25.809559 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
Sep 9 00:45:25.809617 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
Sep 9 00:45:25.809673 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
Sep 9 00:45:25.809732 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
Sep 9 00:45:25.809787 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
Sep 9 00:45:25.809853 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
Sep 9 00:45:25.809926 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
Sep 9 00:45:25.810019 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
Sep 9 00:45:25.810075 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
Sep 9 00:45:25.810132 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
Sep 9 00:45:25.810189 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]
Sep 9 00:45:25.810241 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
Sep 9 00:45:25.810290 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
Sep 9 00:45:25.810369 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff]
Sep 9 00:45:25.810428 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
Sep 9 00:45:25.810477 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
Sep 9 00:45:25.810534 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff]
Sep 9 00:45:25.810607 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
Sep 9 00:45:25.810684 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
Sep 9 00:45:25.810745 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
Sep 9 00:45:25.810803 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
Sep 9 00:45:25.810859 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
Sep 9 00:45:25.810913 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
Sep 9 00:45:25.811024 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
Sep 9 00:45:25.811083 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
Sep 9 00:45:25.811144 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
Sep 9 00:45:25.811203 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
Sep 9 00:45:25.811275 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
Sep 9 00:45:25.811325 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
Sep 9 00:45:25.811377 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff]
Sep 9 00:45:25.811431 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
Sep 9 00:45:25.811483 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
Sep 9 00:45:25.811557 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff]
Sep 9 00:45:25.811611 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
Sep 9 00:45:25.811659 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
Sep 9 00:45:25.811719 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff]
Sep 9 00:45:25.811778 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
Sep 9 00:45:25.811836 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
Sep 9 00:45:25.811889 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
Sep 9 00:45:25.811971 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
Sep 9 00:45:25.812031 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
Sep 9 00:45:25.812093 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
Sep 9 00:45:25.812174 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
Sep 9 00:45:25.812231 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
Sep 9 00:45:25.812285 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
Sep 9 00:45:25.812338 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
Sep 9 00:45:25.812391 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
Sep 9 00:45:25.812439 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
Sep 9 00:45:25.812500 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
Sep 9 00:45:25.812559 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
Sep 9 00:45:25.812629 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
Sep 9 00:45:25.812684 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff]
Sep 9 00:45:25.812739 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
Sep 9 00:45:25.812787 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
Sep 9 00:45:25.812840 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
Sep 9 00:45:25.812892 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
Sep 9 00:45:25.812980 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
Sep 9 00:45:25.813042 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
Sep 9 00:45:25.813100 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
Sep 9 00:45:25.813150 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
Sep 9 00:45:25.813205 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
Sep 9 00:45:25.813262 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
Sep 9 00:45:25.813315 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
Sep 9 00:45:25.813364 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
Sep 9 00:45:25.813421 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
Sep 9 00:45:25.813491 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
Sep 9 00:45:25.813569 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 9 00:45:25.813583 kernel: PCI: CLS 32 bytes, default 64
Sep 9 00:45:25.813590 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 9 00:45:25.813597 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Sep 9 00:45:25.813603 kernel: clocksource: Switched to clocksource tsc
Sep 9 00:45:25.813610 kernel: Initialise system trusted keyrings
Sep 9 00:45:25.813616 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 9 00:45:25.813622 kernel: Key type asymmetric registered
Sep 9 00:45:25.813629 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:45:25.813635 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 9 00:45:25.813644 kernel: io scheduler mq-deadline registered
Sep 9 00:45:25.813650 kernel: io scheduler kyber registered
Sep 9 00:45:25.813656 kernel: io scheduler bfq registered
Sep 9 00:45:25.813740 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
Sep 9 00:45:25.813813 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.813897 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Sep 9 00:45:25.814000 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.814059 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Sep 9 00:45:25.814118 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.814186 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Sep 9 00:45:25.814253 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.814322 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Sep 9 00:45:25.814385 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.814441 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Sep 9 00:45:25.814499 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.814556 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Sep 9 00:45:25.814627 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.814699 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Sep 9 00:45:25.814776 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.814840 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Sep 9 00:45:25.814895 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815037 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Sep 9 00:45:25.815113 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815178 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Sep 9 00:45:25.815257 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815312 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Sep 9 00:45:25.815370 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815425 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Sep 9 00:45:25.815480 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815533 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Sep 9 00:45:25.815596 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815656 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Sep 9 00:45:25.815747 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815808 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Sep 9 00:45:25.815863 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815920 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Sep 9 00:45:25.816043 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.816126 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Sep 9 00:45:25.816211 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.816301 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Sep 9 00:45:25.816378 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.816466 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Sep 9 00:45:25.816550 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.816612 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Sep 9 00:45:25.816682 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.816745 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Sep 9 00:45:25.816799 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.816854 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Sep 9 00:45:25.816907 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817024 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Sep 9 00:45:25.817084 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817138 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Sep 9 00:45:25.817192 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817248 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Sep 9 00:45:25.817322 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817439 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Sep 9 00:45:25.817525 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817581 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Sep 9 00:45:25.817636 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817697 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Sep 9 00:45:25.817756 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817836 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Sep 9 00:45:25.817899 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817970 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Sep 9 00:45:25.818042 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.818103 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Sep 9 00:45:25.818188 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.818199 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 00:45:25.818206 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:45:25.818213 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 00:45:25.818219 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Sep 9 00:45:25.818225 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 00:45:25.818232 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 00:45:25.818289 kernel: rtc_cmos 00:01: registered as rtc0
Sep 9 00:45:25.818342 kernel: rtc_cmos 00:01: setting system clock to 2025-09-09T00:45:25 UTC (1757378725)
Sep 9 00:45:25.818392 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Sep 9 00:45:25.818405 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 9 00:45:25.818412 kernel: intel_pstate: CPU model not supported
Sep 9 00:45:25.818418 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:45:25.818425 kernel: Segment Routing with IPv6
Sep 9 00:45:25.818431 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:45:25.818440 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:45:25.818449 kernel: Key type dns_resolver registered
Sep 9 00:45:25.818455 kernel: IPI shorthand broadcast: enabled
Sep 9 00:45:25.818466 kernel: sched_clock: Marking stable (964003644, 238433203)->(1269801772, -67364925)
Sep 9 00:45:25.818477 kernel: registered taskstats version 1
Sep 9 00:45:25.818488 kernel: Loading compiled-in X.509 certificates
Sep 9 00:45:25.818500 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: cc5240ef94b546331b2896cdc739274c03278c51'
Sep 9 00:45:25.818510 kernel: Key type .fscrypt registered
Sep 9 00:45:25.818518 kernel: Key type fscrypt-provisioning registered
Sep 9 00:45:25.818524 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:45:25.818533 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:45:25.818544 kernel: ima: No architecture policies found
Sep 9 00:45:25.818554 kernel: clk: Disabling unused clocks
Sep 9 00:45:25.818565 kernel: Freeing unused kernel image (initmem) memory: 42880K
Sep 9 00:45:25.818577 kernel: Write protecting the kernel read-only data: 36864k
Sep 9 00:45:25.818588 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 9 00:45:25.818594 kernel: Run /init as init process
Sep 9 00:45:25.818601 kernel: with arguments:
Sep 9 00:45:25.818607 kernel: /init
Sep 9 00:45:25.818616 kernel: with environment:
Sep 9 00:45:25.818622 kernel: HOME=/
Sep 9 00:45:25.818628 kernel: TERM=linux
Sep 9 00:45:25.818635 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:45:25.818643 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 9 00:45:25.818651 systemd[1]: Detected virtualization vmware.
Sep 9 00:45:25.818658 systemd[1]: Detected architecture x86-64.
Sep 9 00:45:25.818664 systemd[1]: Running in initrd.
Sep 9 00:45:25.818672 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:45:25.818679 systemd[1]: Hostname set to .
Sep 9 00:45:25.818685 systemd[1]: Initializing machine ID from random generator.
Sep 9 00:45:25.818692 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:45:25.818699 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:45:25.818709 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:45:25.818717 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 00:45:25.818724 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:45:25.818732 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 00:45:25.818738 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 00:45:25.818746 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 00:45:25.818753 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 00:45:25.818760 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:45:25.818766 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:45:25.818774 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:45:25.818783 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:45:25.818790 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:45:25.818797 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:45:25.818809 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:45:25.818821 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:45:25.818832 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 00:45:25.818844 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 9 00:45:25.818856 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:45:25.818865 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:45:25.818871 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:45:25.818878 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:45:25.818884 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 00:45:25.818891 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:45:25.818897 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 00:45:25.818904 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:45:25.818911 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:45:25.818917 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:45:25.818964 systemd-journald[216]: Collecting audit messages is disabled.
Sep 9 00:45:25.818983 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:45:25.818994 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 00:45:25.819003 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:45:25.819013 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:45:25.819020 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 00:45:25.819027 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:45:25.819036 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:45:25.819045 kernel: Bridge firewalling registered
Sep 9 00:45:25.819052 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:45:25.819059 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:45:25.819066 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:45:25.819072 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:45:25.819083 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:45:25.819095 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:45:25.819108 systemd-journald[216]: Journal started
Sep 9 00:45:25.819130 systemd-journald[216]: Runtime Journal (/run/log/journal/953cef3692ca4ad9a8128844f0d447fe) is 4.8M, max 38.6M, 33.8M free.
Sep 9 00:45:25.770223 systemd-modules-load[217]: Inserted module 'overlay'
Sep 9 00:45:25.800479 systemd-modules-load[217]: Inserted module 'br_netfilter'
Sep 9 00:45:25.822947 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:45:25.829290 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:45:25.830022 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:45:25.832032 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 00:45:25.834331 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:45:25.837164 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:45:25.842015 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:45:25.843729 dracut-cmdline[245]: dracut-dracut-053
Sep 9 00:45:25.845360 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29
Sep 9 00:45:25.862644 systemd-resolved[253]: Positive Trust Anchors:
Sep 9 00:45:25.862655 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:45:25.862686 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:45:25.865540 systemd-resolved[253]: Defaulting to hostname 'linux'.
Sep 9 00:45:25.866264 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:45:25.866531 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:45:25.902961 kernel: SCSI subsystem initialized
Sep 9 00:45:25.912948 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:45:25.919977 kernel: iscsi: registered transport (tcp)
Sep 9 00:45:25.935950 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:45:25.935987 kernel: QLogic iSCSI HBA Driver
Sep 9 00:45:25.956333 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:45:25.960155 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 00:45:25.976959 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:45:25.977006 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:45:25.977017 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 9 00:45:26.009956 kernel: raid6: avx2x4 gen() 44556 MB/s
Sep 9 00:45:26.026953 kernel: raid6: avx2x2 gen() 51058 MB/s
Sep 9 00:45:26.044135 kernel: raid6: avx2x1 gen() 42855 MB/s
Sep 9 00:45:26.044173 kernel: raid6: using algorithm avx2x2 gen() 51058 MB/s
Sep 9 00:45:26.062145 kernel: raid6: .... xor() 30921 MB/s, rmw enabled
Sep 9 00:45:26.062185 kernel: raid6: using avx2x2 recovery algorithm
Sep 9 00:45:26.076965 kernel: xor: automatically using best checksumming function avx
Sep 9 00:45:26.181953 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 00:45:26.187866 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:45:26.192073 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:45:26.202194 systemd-udevd[434]: Using default interface naming scheme 'v255'.
Sep 9 00:45:26.205257 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:45:26.213396 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 00:45:26.222975 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation
Sep 9 00:45:26.245143 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:45:26.249053 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:45:26.330819 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:45:26.339051 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 00:45:26.348157 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:45:26.350147 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:45:26.350336 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:45:26.350598 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:45:26.355074 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 00:45:26.362690 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:45:26.405111 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Sep 9 00:45:26.405146 kernel: vmw_pvscsi: using 64bit dma
Sep 9 00:45:26.414917 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Sep 9 00:45:26.414960 kernel: vmw_pvscsi: max_id: 16
Sep 9 00:45:26.414969 kernel: vmw_pvscsi: setting ring_pages to 8
Sep 9 00:45:26.420992 kernel: vmw_pvscsi: enabling reqCallThreshold
Sep 9 00:45:26.421024 kernel: vmw_pvscsi: driver-based request coalescing enabled
Sep 9 00:45:26.421033 kernel: vmw_pvscsi: using MSI-X
Sep 9 00:45:26.424990 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Sep 9 00:45:26.427954 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Sep 9 00:45:26.427991 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 00:45:26.429948 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Sep 9 00:45:26.432215 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:45:26.435349 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Sep 9 00:45:26.435444 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Sep 9 00:45:26.432286 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:45:26.435481 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:45:26.435589 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:45:26.435681 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:45:26.435806 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:45:26.443309 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 9 00:45:26.443338 kernel: AES CTR mode by8 optimization enabled
Sep 9 00:45:26.443943 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Sep 9 00:45:26.446362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:45:26.447945 kernel: libata version 3.00 loaded.
Sep 9 00:45:26.452335 kernel: ata_piix 0000:00:07.1: version 2.13
Sep 9 00:45:26.452443 kernel: scsi host1: ata_piix
Sep 9 00:45:26.452513 kernel: scsi host2: ata_piix
Sep 9 00:45:26.455085 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Sep 9 00:45:26.455127 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Sep 9 00:45:26.464897 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:45:26.469076 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:45:26.481741 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:45:26.624952 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Sep 9 00:45:26.630987 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Sep 9 00:45:26.643483 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Sep 9 00:45:26.643613 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 9 00:45:26.643680 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Sep 9 00:45:26.645166 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Sep 9 00:45:26.645254 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Sep 9 00:45:26.649953 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 00:45:26.650951 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 9 00:45:26.653266 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Sep 9 00:45:26.653378 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 9 00:45:26.666968 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 9 00:45:26.724138 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (482)
Sep 9 00:45:26.725032 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Sep 9 00:45:26.729002 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Sep 9 00:45:26.731946 kernel: BTRFS: device fsid 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (480)
Sep 9 00:45:26.736458 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Sep 9 00:45:26.740890 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Sep 9 00:45:26.741071 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Sep 9 00:45:26.745029 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 00:45:26.799963 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 00:45:26.806145 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 00:45:27.835958 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 00:45:27.836308 disk-uuid[590]: The operation has completed successfully.
Sep 9 00:45:27.876472 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 00:45:27.876563 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 00:45:27.881026 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 00:45:27.885367 sh[609]: Success
Sep 9 00:45:27.900955 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 9 00:45:27.990915 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 00:45:27.998008 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 00:45:27.999969 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 00:45:28.051559 kernel: BTRFS info (device dm-0): first mount of filesystem 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a
Sep 9 00:45:28.051600 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:45:28.051610 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 9 00:45:28.052814 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 00:45:28.053776 kernel: BTRFS info (device dm-0): using free space tree
Sep 9 00:45:28.061958 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 9 00:45:28.063125 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 00:45:28.067112 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Sep 9 00:45:28.068763 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 00:45:28.124727 kernel: BTRFS info (device sda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:45:28.124772 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:45:28.124791 kernel: BTRFS info (device sda6): using free space tree
Sep 9 00:45:28.130955 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 9 00:45:28.141148 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 9 00:45:28.142064 kernel: BTRFS info (device sda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:45:28.145686 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 00:45:28.151126 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 00:45:28.170975 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Sep 9 00:45:28.175078 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 00:45:28.255387 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:45:28.264767 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:45:28.275601 systemd-networkd[799]: lo: Link UP
Sep 9 00:45:28.275607 systemd-networkd[799]: lo: Gained carrier
Sep 9 00:45:28.276354 systemd-networkd[799]: Enumeration completed
Sep 9 00:45:28.276626 systemd-networkd[799]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Sep 9 00:45:28.277408 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:45:28.277596 systemd[1]: Reached target network.target - Network.
Sep 9 00:45:28.280019 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Sep 9 00:45:28.280221 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Sep 9 00:45:28.281351 systemd-networkd[799]: ens192: Link UP
Sep 9 00:45:28.281358 systemd-networkd[799]: ens192: Gained carrier
Sep 9 00:45:28.286735 ignition[668]: Ignition 2.19.0
Sep 9 00:45:28.287013 ignition[668]: Stage: fetch-offline
Sep 9 00:45:28.287148 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:45:28.287283 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 9 00:45:28.287496 ignition[668]: parsed url from cmdline: ""
Sep 9 00:45:28.287533 ignition[668]: no config URL provided
Sep 9 00:45:28.287657 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 00:45:28.287806 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Sep 9 00:45:28.288220 ignition[668]: config successfully fetched
Sep 9 00:45:28.288248 ignition[668]: parsing config with SHA512: 9bca0ea0b62e526de0392b3c04e01bbc40151c7ed0c17bcb21fe2589aeebabe2b595169904593a9ae657b1a1ff9ff4c6f997f782ada1ece71540e4c00be1111a
Sep 9 00:45:28.291155 unknown[668]: fetched base config from "system"
Sep 9 00:45:28.291303 unknown[668]: fetched user config from "vmware"
Sep 9 00:45:28.291715 ignition[668]: fetch-offline: fetch-offline passed
Sep 9 00:45:28.291881 ignition[668]: Ignition finished successfully
Sep 9 00:45:28.292708 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:45:28.293260 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 00:45:28.299097 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 00:45:28.308157 ignition[804]: Ignition 2.19.0
Sep 9 00:45:28.308164 ignition[804]: Stage: kargs
Sep 9 00:45:28.308276 ignition[804]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:45:28.308282 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 9 00:45:28.308945 ignition[804]: kargs: kargs passed
Sep 9 00:45:28.308991 ignition[804]: Ignition finished successfully
Sep 9 00:45:28.310225 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 00:45:28.314073 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 00:45:28.322322 ignition[810]: Ignition 2.19.0
Sep 9 00:45:28.322333 ignition[810]: Stage: disks
Sep 9 00:45:28.322442 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:45:28.322449 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 9 00:45:28.323035 ignition[810]: disks: disks passed
Sep 9 00:45:28.323066 ignition[810]: Ignition finished successfully
Sep 9 00:45:28.324044 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 00:45:28.324345 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 00:45:28.324553 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 00:45:28.324785 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:45:28.324998 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:45:28.325201 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:45:28.330021 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 00:45:28.353314 systemd-fsck[819]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 9 00:45:28.354874 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 00:45:28.359045 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 00:45:28.430953 kernel: EXT4-fs (sda9): mounted filesystem ee55a213-d578-493d-a79b-e10c399cd35c r/w with ordered data mode. Quota mode: none.
Sep 9 00:45:28.430918 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 00:45:28.431280 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:45:28.438039 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:45:28.439532 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 00:45:28.439923 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 00:45:28.440016 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 00:45:28.440033 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:45:28.444100 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 00:45:28.444735 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 00:45:28.450949 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (827)
Sep 9 00:45:28.453007 kernel: BTRFS info (device sda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:45:28.453067 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:45:28.454947 kernel: BTRFS info (device sda6): using free space tree
Sep 9 00:45:28.477104 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 9 00:45:28.478082 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:45:28.497478 initrd-setup-root[851]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 00:45:28.500470 initrd-setup-root[858]: cut: /sysroot/etc/group: No such file or directory
Sep 9 00:45:28.503413 initrd-setup-root[865]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 00:45:28.505962 initrd-setup-root[872]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 00:45:28.835929 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 00:45:28.840035 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 00:45:28.842477 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 00:45:28.845966 kernel: BTRFS info (device sda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:45:28.865920 ignition[939]: INFO : Ignition 2.19.0
Sep 9 00:45:28.865920 ignition[939]: INFO : Stage: mount
Sep 9 00:45:28.866339 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:45:28.866339 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 9 00:45:28.866724 ignition[939]: INFO : mount: mount passed
Sep 9 00:45:28.867271 ignition[939]: INFO : Ignition finished successfully
Sep 9 00:45:28.867549 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 00:45:28.871042 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 00:45:28.937258 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 00:45:29.049428 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 00:45:29.054086 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:45:29.062134 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (951)
Sep 9 00:45:29.064796 kernel: BTRFS info (device sda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:45:29.064829 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:45:29.064841 kernel: BTRFS info (device sda6): using free space tree
Sep 9 00:45:29.089981 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 9 00:45:29.093672 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:45:29.111307 ignition[968]: INFO : Ignition 2.19.0
Sep 9 00:45:29.111307 ignition[968]: INFO : Stage: files
Sep 9 00:45:29.111847 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:45:29.111847 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 9 00:45:29.112283 ignition[968]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 00:45:29.112584 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 00:45:29.112584 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 00:45:29.130548 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 00:45:29.130859 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 00:45:29.131149 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 00:45:29.131079 unknown[968]: wrote ssh authorized keys file for user: core
Sep 9 00:45:29.144812 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 9 00:45:29.144812 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 9 00:45:29.201744 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 00:45:29.372987 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 9 00:45:29.372987 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 00:45:29.372987 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 00:45:29.372987 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 9 00:45:29.845456 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 00:45:29.858112 systemd-networkd[799]: ens192: Gained IPv6LL
Sep 9 00:45:30.320194 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 00:45:30.320580 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 9 00:45:30.320580 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 9 00:45:30.320580 ignition[968]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 00:45:30.328533 ignition[968]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:45:30.328705 ignition[968]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:45:30.328705 ignition[968]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 00:45:30.328705 ignition[968]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 00:45:30.328705 ignition[968]: INFO : files: op(e): op(f): [started]
writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:45:30.329286 ignition[968]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:45:30.329286 ignition[968]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 9 00:45:30.329286 ignition[968]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:45:30.419685 ignition[968]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:45:30.422371 ignition[968]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:45:30.422576 ignition[968]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:45:30.422576 ignition[968]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:45:30.422576 ignition[968]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:45:30.422576 ignition[968]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:45:30.423271 ignition[968]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:45:30.423271 ignition[968]: INFO : files: files passed Sep 9 00:45:30.423271 ignition[968]: INFO : Ignition finished successfully Sep 9 00:45:30.424590 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 00:45:30.428017 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 00:45:30.430012 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 00:45:30.431105 systemd[1]: ignition-quench.service: Deactivated successfully. 
Sep 9 00:45:30.431164 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 00:45:30.437591 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:45:30.437591 initrd-setup-root-after-ignition[998]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:45:30.438179 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:45:30.438833 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:45:30.439248 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 00:45:30.441023 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 00:45:30.455159 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:45:30.455225 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 00:45:30.455496 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:45:30.455624 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 00:45:30.455819 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:45:30.456256 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:45:30.465915 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:45:30.469071 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:45:30.476210 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:45:30.476413 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:45:30.476681 systemd[1]: Stopped target timers.target - Timer Units. 
Sep 9 00:45:30.476842 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:45:30.476916 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:45:30.477347 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 00:45:30.477543 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:45:30.477779 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:45:30.477962 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:45:30.478154 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 00:45:30.478517 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 00:45:30.478715 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:45:30.478929 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:45:30.479146 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 00:45:30.479359 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:45:30.479520 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:45:30.479583 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:45:30.479850 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:45:30.480105 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:45:30.480335 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:45:30.480378 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:45:30.480531 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:45:30.480593 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:45:30.480862 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Sep 9 00:45:30.480926 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:45:30.481166 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:45:30.481312 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:45:30.485955 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:45:30.486138 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 00:45:30.486357 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 00:45:30.486516 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:45:30.486565 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:45:30.486722 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:45:30.486769 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:45:30.486955 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:45:30.487019 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:45:30.487259 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:45:30.487319 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:45:30.496071 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 00:45:30.496419 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:45:30.496523 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:45:30.499127 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:45:30.499242 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:45:30.499322 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:45:30.499488 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Sep 9 00:45:30.499582 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:45:30.502653 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:45:30.502711 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 00:45:30.505653 ignition[1022]: INFO : Ignition 2.19.0 Sep 9 00:45:30.505653 ignition[1022]: INFO : Stage: umount Sep 9 00:45:30.509750 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:45:30.509750 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 00:45:30.509750 ignition[1022]: INFO : umount: umount passed Sep 9 00:45:30.509750 ignition[1022]: INFO : Ignition finished successfully Sep 9 00:45:30.508102 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:45:30.508163 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 00:45:30.508373 systemd[1]: Stopped target network.target - Network. Sep 9 00:45:30.508455 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:45:30.508482 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:45:30.508582 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:45:30.508604 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 00:45:30.508703 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:45:30.508726 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:45:30.508822 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:45:30.508845 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:45:30.509044 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:45:30.509180 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:45:30.514322 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Sep 9 00:45:30.514531 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:45:30.515598 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:45:30.515808 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:45:30.516471 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:45:30.516498 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:45:30.521069 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 00:45:30.521175 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:45:30.521212 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:45:30.521349 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Sep 9 00:45:30.521372 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Sep 9 00:45:30.521485 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:45:30.521507 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:45:30.521608 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:45:30.521628 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 00:45:30.521730 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 00:45:30.521750 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:45:30.521911 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:45:30.522610 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:45:30.530758 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:45:30.530977 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:45:30.532327 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 9 00:45:30.532401 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:45:30.532616 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:45:30.532639 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:45:30.532745 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:45:30.532762 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:45:30.532854 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:45:30.532877 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:45:30.533120 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:45:30.533142 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:45:30.533448 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:45:30.533471 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:45:30.542073 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:45:30.542436 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:45:30.542475 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:45:30.542628 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:45:30.542659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:45:30.546163 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:45:30.546242 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:45:30.583549 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:45:30.583627 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:45:30.583897 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Sep 9 00:45:30.584016 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:45:30.584042 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:45:30.588043 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:45:30.599683 systemd[1]: Switching root. Sep 9 00:45:30.624209 systemd-journald[216]: Journal stopped Sep 9 00:45:25.755873 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Sep 9 00:45:25.755877 kernel: vmware: hypercall mode: 0x00 Sep 9 00:45:25.755882 kernel: Hypervisor detected: VMware Sep 9 00:45:25.755887 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Sep 9 00:45:25.755893 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Sep 9 00:45:25.755897 kernel: vmware: using clock offset of 5475569267 ns Sep 9 00:45:25.755902 kernel: tsc: Detected 3408.000 MHz processor Sep 9 00:45:25.755907 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 00:45:25.755912 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 00:45:25.755917 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Sep 9 00:45:25.755922 kernel: total RAM covered: 3072M Sep 9 00:45:25.755927 kernel: Found optimal setting for mtrr clean up Sep 9 00:45:25.755939 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Sep 9 00:45:25.755947 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Sep 9 00:45:25.755952 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 00:45:25.755957 kernel: Using GB pages for direct mapping Sep 9 00:45:25.755962 kernel: ACPI: Early table checksum verification disabled Sep 9 00:45:25.755967 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Sep 9 00:45:25.755972 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Sep 9 00:45:25.755977 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04
INTEL 440BX 06040000 PTL 000F4240) Sep 9 00:45:25.755982 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Sep 9 00:45:25.755987 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Sep 9 00:45:25.755994 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Sep 9 00:45:25.756000 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Sep 9 00:45:25.756005 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Sep 9 00:45:25.756011 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Sep 9 00:45:25.756016 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Sep 9 00:45:25.756022 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Sep 9 00:45:25.756027 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Sep 9 00:45:25.756033 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Sep 9 00:45:25.756038 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Sep 9 00:45:25.756051 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Sep 9 00:45:25.756056 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Sep 9 00:45:25.756061 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Sep 9 00:45:25.756067 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Sep 9 00:45:25.756072 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Sep 9 00:45:25.756077 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Sep 9 00:45:25.756084 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Sep 9 00:45:25.756089 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Sep 9 00:45:25.756094 kernel: system APIC only can use physical flat Sep 9 00:45:25.756099 kernel: APIC: 
Switched APIC routing to: physical flat Sep 9 00:45:25.756104 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 9 00:45:25.756109 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Sep 9 00:45:25.756114 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Sep 9 00:45:25.756119 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Sep 9 00:45:25.756124 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Sep 9 00:45:25.756131 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Sep 9 00:45:25.756136 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Sep 9 00:45:25.756141 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Sep 9 00:45:25.756146 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Sep 9 00:45:25.756151 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Sep 9 00:45:25.756156 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Sep 9 00:45:25.756161 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Sep 9 00:45:25.756166 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Sep 9 00:45:25.756171 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Sep 9 00:45:25.756176 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Sep 9 00:45:25.756181 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Sep 9 00:45:25.756187 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Sep 9 00:45:25.756192 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Sep 9 00:45:25.756198 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Sep 9 00:45:25.756202 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Sep 9 00:45:25.756207 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Sep 9 00:45:25.756213 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Sep 9 00:45:25.756218 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Sep 9 00:45:25.756223 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Sep 9 00:45:25.756228 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Sep 9 00:45:25.756233 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Sep 9 00:45:25.756239 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Sep 9 00:45:25.756244 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Sep 9 00:45:25.756249 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Sep 9 00:45:25.756254 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Sep 9 
00:45:25.756259 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Sep 9 00:45:25.756264 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Sep 9 00:45:25.756269 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Sep 9 00:45:25.756274 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Sep 9 00:45:25.756279 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Sep 9 00:45:25.756285 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Sep 9 00:45:25.756291 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Sep 9 00:45:25.756296 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Sep 9 00:45:25.756301 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Sep 9 00:45:25.756306 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Sep 9 00:45:25.756311 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Sep 9 00:45:25.756316 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Sep 9 00:45:25.756321 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Sep 9 00:45:25.756326 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Sep 9 00:45:25.756331 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Sep 9 00:45:25.756336 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Sep 9 00:45:25.756342 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Sep 9 00:45:25.756347 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Sep 9 00:45:25.756352 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Sep 9 00:45:25.756357 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Sep 9 00:45:25.756362 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Sep 9 00:45:25.756368 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Sep 9 00:45:25.756372 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Sep 9 00:45:25.756378 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Sep 9 00:45:25.756382 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Sep 9 00:45:25.756388 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Sep 9 00:45:25.756394 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Sep 9 00:45:25.756399 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Sep 9 00:45:25.756404 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Sep 9 00:45:25.756414 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Sep 9 00:45:25.756419 kernel: SRAT: PXM 0 -> APIC 0x78 
-> Node 0 Sep 9 00:45:25.756425 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Sep 9 00:45:25.756430 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Sep 9 00:45:25.756436 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Sep 9 00:45:25.756441 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Sep 9 00:45:25.756448 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Sep 9 00:45:25.756453 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Sep 9 00:45:25.756459 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Sep 9 00:45:25.756464 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Sep 9 00:45:25.756470 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Sep 9 00:45:25.756475 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Sep 9 00:45:25.756481 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Sep 9 00:45:25.756486 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Sep 9 00:45:25.756491 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Sep 9 00:45:25.756496 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Sep 9 00:45:25.756503 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Sep 9 00:45:25.756509 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Sep 9 00:45:25.756514 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Sep 9 00:45:25.756519 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Sep 9 00:45:25.756524 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Sep 9 00:45:25.756530 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Sep 9 00:45:25.756535 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Sep 9 00:45:25.756541 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Sep 9 00:45:25.756546 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Sep 9 00:45:25.756551 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Sep 9 00:45:25.756558 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Sep 9 00:45:25.756563 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Sep 9 00:45:25.756569 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Sep 9 00:45:25.756574 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Sep 9 00:45:25.756579 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Sep 9 00:45:25.756585 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Sep 9 00:45:25.756590 kernel: SRAT: PXM 
0 -> APIC 0xb6 -> Node 0 Sep 9 00:45:25.756596 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Sep 9 00:45:25.756601 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Sep 9 00:45:25.756606 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Sep 9 00:45:25.756613 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Sep 9 00:45:25.756618 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Sep 9 00:45:25.756624 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Sep 9 00:45:25.756629 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Sep 9 00:45:25.756634 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Sep 9 00:45:25.756639 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Sep 9 00:45:25.756645 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Sep 9 00:45:25.756650 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Sep 9 00:45:25.756655 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Sep 9 00:45:25.756661 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Sep 9 00:45:25.756667 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Sep 9 00:45:25.756673 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Sep 9 00:45:25.756678 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Sep 9 00:45:25.756684 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Sep 9 00:45:25.756689 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Sep 9 00:45:25.756694 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Sep 9 00:45:25.756700 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Sep 9 00:45:25.756705 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Sep 9 00:45:25.756710 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Sep 9 00:45:25.756716 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Sep 9 00:45:25.756722 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Sep 9 00:45:25.756728 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Sep 9 00:45:25.756733 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Sep 9 00:45:25.756739 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Sep 9 00:45:25.756744 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Sep 9 00:45:25.756749 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Sep 9 00:45:25.756755 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Sep 9 00:45:25.756760 
kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Sep 9 00:45:25.756765 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Sep 9 00:45:25.756771 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Sep 9 00:45:25.756776 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Sep 9 00:45:25.756782 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Sep 9 00:45:25.756788 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Sep 9 00:45:25.756793 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 9 00:45:25.756799 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 9 00:45:25.756804 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Sep 9 00:45:25.756810 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Sep 9 00:45:25.756816 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Sep 9 00:45:25.756821 kernel: Zone ranges: Sep 9 00:45:25.756827 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 00:45:25.756833 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Sep 9 00:45:25.756839 kernel: Normal empty Sep 9 00:45:25.756845 kernel: Movable zone start for each node Sep 9 00:45:25.756850 kernel: Early memory node ranges Sep 9 00:45:25.756856 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Sep 9 00:45:25.756861 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Sep 9 00:45:25.756867 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Sep 9 00:45:25.756872 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Sep 9 00:45:25.756878 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 00:45:25.756883 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Sep 9 00:45:25.756890 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Sep 9 00:45:25.756896 kernel: ACPI: PM-Timer IO Port: 0x1008 Sep 9 00:45:25.756901 kernel: system APIC only can use physical flat Sep 9 00:45:25.756906 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) 
Sep 9 00:45:25.756912 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Sep 9 00:45:25.756918 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Sep 9 00:45:25.756923 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Sep 9 00:45:25.756928 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Sep 9 00:45:25.756949 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Sep 9 00:45:25.756958 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Sep 9 00:45:25.756963 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Sep 9 00:45:25.756969 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Sep 9 00:45:25.756974 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Sep 9 00:45:25.756980 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Sep 9 00:45:25.756985 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Sep 9 00:45:25.756991 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Sep 9 00:45:25.756996 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Sep 9 00:45:25.757002 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Sep 9 00:45:25.757007 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Sep 9 00:45:25.757014 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Sep 9 00:45:25.757020 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Sep 9 00:45:25.757025 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Sep 9 00:45:25.757030 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Sep 9 00:45:25.757036 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Sep 9 00:45:25.757041 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Sep 9 00:45:25.757047 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Sep 9 00:45:25.757052 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Sep 9 00:45:25.757057 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Sep 9 00:45:25.757063 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Sep 9 00:45:25.757070 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Sep 9 00:45:25.757075 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Sep 9 00:45:25.757081 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Sep 9 00:45:25.757086 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Sep 9 00:45:25.757091 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Sep 9 00:45:25.757097 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Sep 9 00:45:25.757102 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Sep 9 00:45:25.757108 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Sep 9 00:45:25.757113 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Sep 9 00:45:25.757120 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Sep 9 00:45:25.757125 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Sep 9 00:45:25.757131 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Sep 9 00:45:25.757136 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Sep 9 00:45:25.757142 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Sep 9 00:45:25.757147 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Sep 9 00:45:25.757153 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Sep 9 00:45:25.757158 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Sep 9 00:45:25.757164 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Sep 9 00:45:25.757169 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Sep 9 00:45:25.757176 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Sep 9 00:45:25.757181 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Sep 9 00:45:25.757187 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Sep 9 00:45:25.757192 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Sep 9 00:45:25.757198 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] 
high edge lint[0x1]) Sep 9 00:45:25.757203 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Sep 9 00:45:25.757209 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Sep 9 00:45:25.757214 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Sep 9 00:45:25.757220 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Sep 9 00:45:25.757226 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Sep 9 00:45:25.757232 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Sep 9 00:45:25.757237 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Sep 9 00:45:25.757243 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Sep 9 00:45:25.757248 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Sep 9 00:45:25.757253 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Sep 9 00:45:25.757259 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Sep 9 00:45:25.757264 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Sep 9 00:45:25.757270 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Sep 9 00:45:25.757275 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Sep 9 00:45:25.757282 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Sep 9 00:45:25.757288 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Sep 9 00:45:25.757293 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Sep 9 00:45:25.757298 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Sep 9 00:45:25.757304 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Sep 9 00:45:25.757310 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Sep 9 00:45:25.757315 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Sep 9 00:45:25.757321 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Sep 9 00:45:25.757326 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Sep 9 00:45:25.757331 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Sep 9 
00:45:25.757338 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Sep 9 00:45:25.757344 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Sep 9 00:45:25.757349 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Sep 9 00:45:25.757355 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Sep 9 00:45:25.757360 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Sep 9 00:45:25.757365 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Sep 9 00:45:25.757371 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Sep 9 00:45:25.757376 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Sep 9 00:45:25.757382 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Sep 9 00:45:25.757388 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Sep 9 00:45:25.757394 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Sep 9 00:45:25.757400 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Sep 9 00:45:25.757405 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Sep 9 00:45:25.757410 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Sep 9 00:45:25.757416 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Sep 9 00:45:25.757421 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Sep 9 00:45:25.757427 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Sep 9 00:45:25.757432 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Sep 9 00:45:25.757438 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Sep 9 00:45:25.757445 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Sep 9 00:45:25.757450 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Sep 9 00:45:25.757456 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Sep 9 00:45:25.757461 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Sep 9 00:45:25.757466 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Sep 9 00:45:25.757472 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Sep 9 00:45:25.757478 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Sep 9 00:45:25.757483 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Sep 9 00:45:25.757489 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Sep 9 00:45:25.757494 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Sep 9 00:45:25.757501 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Sep 9 00:45:25.757506 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Sep 9 00:45:25.757512 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Sep 9 00:45:25.757517 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Sep 9 00:45:25.757523 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Sep 9 00:45:25.757528 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Sep 9 00:45:25.757534 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Sep 9 00:45:25.757539 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Sep 9 00:45:25.757545 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Sep 9 00:45:25.757552 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Sep 9 00:45:25.757557 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Sep 9 00:45:25.757563 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Sep 9 00:45:25.757568 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Sep 9 00:45:25.757574 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Sep 9 00:45:25.757579 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Sep 9 00:45:25.757585 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Sep 9 00:45:25.757591 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Sep 9 00:45:25.757596 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Sep 9 00:45:25.757602 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Sep 9 00:45:25.757608 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high 
edge lint[0x1]) Sep 9 00:45:25.757614 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Sep 9 00:45:25.757621 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Sep 9 00:45:25.757631 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Sep 9 00:45:25.757640 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Sep 9 00:45:25.757650 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Sep 9 00:45:25.757660 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Sep 9 00:45:25.757671 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Sep 9 00:45:25.757680 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 00:45:25.757697 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Sep 9 00:45:25.757706 kernel: TSC deadline timer available Sep 9 00:45:25.757715 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Sep 9 00:45:25.757723 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Sep 9 00:45:25.757731 kernel: Booting paravirtualized kernel on VMware hypervisor Sep 9 00:45:25.757740 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 00:45:25.757749 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Sep 9 00:45:25.757755 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144 Sep 9 00:45:25.757761 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152 Sep 9 00:45:25.757767 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Sep 9 00:45:25.757775 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Sep 9 00:45:25.757780 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Sep 9 00:45:25.757787 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Sep 9 00:45:25.757794 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Sep 9 00:45:25.757808 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Sep 9 00:45:25.757815 kernel: pcpu-alloc: [0] 
048 049 050 051 052 053 054 055 Sep 9 00:45:25.757822 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Sep 9 00:45:25.757830 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Sep 9 00:45:25.757842 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Sep 9 00:45:25.757852 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Sep 9 00:45:25.757858 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Sep 9 00:45:25.757863 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Sep 9 00:45:25.757869 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Sep 9 00:45:25.757875 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Sep 9 00:45:25.757881 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Sep 9 00:45:25.757888 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29 Sep 9 00:45:25.757896 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 9 00:45:25.757901 kernel: random: crng init done Sep 9 00:45:25.757907 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Sep 9 00:45:25.757913 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Sep 9 00:45:25.757920 kernel: printk: log_buf_len min size: 262144 bytes Sep 9 00:45:25.757926 kernel: printk: log_buf_len: 1048576 bytes Sep 9 00:45:25.757931 kernel: printk: early log buf free: 239648(91%) Sep 9 00:45:25.757951 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 00:45:25.757958 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 9 00:45:25.757966 kernel: Fallback order for Node 0: 0 Sep 9 00:45:25.757972 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Sep 9 00:45:25.757978 kernel: Policy zone: DMA32 Sep 9 00:45:25.757988 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 00:45:25.757995 kernel: Memory: 1936360K/2096628K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42880K init, 2316K bss, 160008K reserved, 0K cma-reserved) Sep 9 00:45:25.758003 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Sep 9 00:45:25.758011 kernel: ftrace: allocating 37969 entries in 149 pages Sep 9 00:45:25.758017 kernel: ftrace: allocated 149 pages with 4 groups Sep 9 00:45:25.758023 kernel: Dynamic Preempt: voluntary Sep 9 00:45:25.758030 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 00:45:25.758036 kernel: rcu: RCU event tracing is enabled. Sep 9 00:45:25.758042 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Sep 9 00:45:25.758049 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 00:45:25.758057 kernel: Rude variant of Tasks RCU enabled. Sep 9 00:45:25.758063 kernel: Tracing variant of Tasks RCU enabled. Sep 9 00:45:25.758074 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 9 00:45:25.758080 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Sep 9 00:45:25.758086 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Sep 9 00:45:25.758092 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Sep 9 00:45:25.758098 kernel: Console: colour VGA+ 80x25 Sep 9 00:45:25.758106 kernel: printk: console [tty0] enabled Sep 9 00:45:25.758112 kernel: printk: console [ttyS0] enabled Sep 9 00:45:25.758120 kernel: ACPI: Core revision 20230628 Sep 9 00:45:25.758127 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Sep 9 00:45:25.758135 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 00:45:25.758141 kernel: x2apic enabled Sep 9 00:45:25.758146 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 00:45:25.758152 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 9 00:45:25.758159 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 9 00:45:25.758166 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Sep 9 00:45:25.758172 kernel: Disabled fast string operations Sep 9 00:45:25.758178 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 9 00:45:25.758184 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 9 00:45:25.758192 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 00:45:25.758198 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 9 00:45:25.758204 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 9 00:45:25.758210 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Sep 9 00:45:25.758216 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Sep 9 00:45:25.758222 kernel: RETBleed: Mitigation: Enhanced IBRS Sep 9 00:45:25.758228 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 9 00:45:25.758234 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 9 00:45:25.758240 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 9 00:45:25.758247 kernel: SRBDS: Unknown: Dependent on hypervisor status Sep 9 00:45:25.758253 kernel: GDS: Unknown: Dependent on hypervisor status Sep 9 00:45:25.758259 kernel: active return thunk: its_return_thunk Sep 9 00:45:25.758265 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 9 00:45:25.758271 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 00:45:25.758277 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 00:45:25.758282 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 00:45:25.758288 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 00:45:25.758294 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Sep 9 00:45:25.758302 kernel: Freeing SMP alternatives memory: 32K Sep 9 00:45:25.758308 kernel: pid_max: default: 131072 minimum: 1024 Sep 9 00:45:25.758314 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 9 00:45:25.758320 kernel: landlock: Up and running. Sep 9 00:45:25.758326 kernel: SELinux: Initializing. Sep 9 00:45:25.758332 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 9 00:45:25.758338 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 9 00:45:25.758344 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Sep 9 00:45:25.758350 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 9 00:45:25.758357 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 9 00:45:25.758363 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 9 00:45:25.758369 kernel: Performance Events: Skylake events, core PMU driver. Sep 9 00:45:25.758375 kernel: core: CPUID marked event: 'cpu cycles' unavailable Sep 9 00:45:25.758381 kernel: core: CPUID marked event: 'instructions' unavailable Sep 9 00:45:25.758388 kernel: core: CPUID marked event: 'bus cycles' unavailable Sep 9 00:45:25.758394 kernel: core: CPUID marked event: 'cache references' unavailable Sep 9 00:45:25.758400 kernel: core: CPUID marked event: 'cache misses' unavailable Sep 9 00:45:25.758407 kernel: core: CPUID marked event: 'branch instructions' unavailable Sep 9 00:45:25.758412 kernel: core: CPUID marked event: 'branch misses' unavailable Sep 9 00:45:25.758418 kernel: ... version: 1 Sep 9 00:45:25.758424 kernel: ... bit width: 48 Sep 9 00:45:25.758430 kernel: ... generic registers: 4 Sep 9 00:45:25.758436 kernel: ... value mask: 0000ffffffffffff Sep 9 00:45:25.758442 kernel: ... 
max period: 000000007fffffff Sep 9 00:45:25.758448 kernel: ... fixed-purpose events: 0 Sep 9 00:45:25.758453 kernel: ... event mask: 000000000000000f Sep 9 00:45:25.758460 kernel: signal: max sigframe size: 1776 Sep 9 00:45:25.758466 kernel: rcu: Hierarchical SRCU implementation. Sep 9 00:45:25.758473 kernel: rcu: Max phase no-delay instances is 400. Sep 9 00:45:25.758479 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 9 00:45:25.758485 kernel: smp: Bringing up secondary CPUs ... Sep 9 00:45:25.758491 kernel: smpboot: x86: Booting SMP configuration: Sep 9 00:45:25.758496 kernel: .... node #0, CPUs: #1 Sep 9 00:45:25.758502 kernel: Disabled fast string operations Sep 9 00:45:25.758508 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Sep 9 00:45:25.758514 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Sep 9 00:45:25.758521 kernel: smp: Brought up 1 node, 2 CPUs Sep 9 00:45:25.758527 kernel: smpboot: Max logical packages: 128 Sep 9 00:45:25.758533 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Sep 9 00:45:25.758538 kernel: devtmpfs: initialized Sep 9 00:45:25.758544 kernel: x86/mm: Memory block size: 128MB Sep 9 00:45:25.758551 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Sep 9 00:45:25.758557 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 00:45:25.758563 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Sep 9 00:45:25.758569 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 00:45:25.758576 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 00:45:25.758582 kernel: audit: initializing netlink subsys (disabled) Sep 9 00:45:25.758588 kernel: audit: type=2000 audit(1757378724.096:1): state=initialized audit_enabled=0 res=1 Sep 9 00:45:25.758594 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 00:45:25.758600 kernel: thermal_sys: 
Registered thermal governor 'user_space' Sep 9 00:45:25.758605 kernel: cpuidle: using governor menu Sep 9 00:45:25.758611 kernel: Simple Boot Flag at 0x36 set to 0x80 Sep 9 00:45:25.758617 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 00:45:25.758623 kernel: dca service started, version 1.12.1 Sep 9 00:45:25.758630 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Sep 9 00:45:25.758636 kernel: PCI: Using configuration type 1 for base access Sep 9 00:45:25.758642 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 9 00:45:25.758649 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 00:45:25.758655 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 00:45:25.758660 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 00:45:25.758666 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 00:45:25.758672 kernel: ACPI: Added _OSI(Module Device) Sep 9 00:45:25.758678 kernel: ACPI: Added _OSI(Processor Device) Sep 9 00:45:25.758685 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 00:45:25.758691 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 00:45:25.758697 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Sep 9 00:45:25.758703 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 9 00:45:25.758709 kernel: ACPI: Interpreter enabled Sep 9 00:45:25.758715 kernel: ACPI: PM: (supports S0 S1 S5) Sep 9 00:45:25.758721 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 00:45:25.758727 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 00:45:25.758733 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 00:45:25.758740 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Sep 9 00:45:25.758746 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Sep 9 
00:45:25.758842 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 00:45:25.758902 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Sep 9 00:45:25.758979 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Sep 9 00:45:25.758989 kernel: PCI host bridge to bus 0000:00 Sep 9 00:45:25.759047 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 00:45:25.759098 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Sep 9 00:45:25.759144 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 9 00:45:25.759194 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 9 00:45:25.759249 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Sep 9 00:45:25.759298 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Sep 9 00:45:25.759359 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Sep 9 00:45:25.759420 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Sep 9 00:45:25.759478 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Sep 9 00:45:25.759536 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Sep 9 00:45:25.759591 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Sep 9 00:45:25.759644 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Sep 9 00:45:25.759696 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Sep 9 00:45:25.759750 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Sep 9 00:45:25.759804 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Sep 9 00:45:25.759861 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Sep 9 00:45:25.759913 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Sep 9 00:45:25.760127 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Sep 9 00:45:25.760187 
kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Sep 9 00:45:25.760240 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Sep 9 00:45:25.760296 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Sep 9 00:45:25.760355 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Sep 9 00:45:25.760408 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Sep 9 00:45:25.760460 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Sep 9 00:45:25.760512 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Sep 9 00:45:25.760562 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Sep 9 00:45:25.760613 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 00:45:25.760673 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Sep 9 00:45:25.760736 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.760790 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.760846 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.760899 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.760978 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.761281 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.761344 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.761398 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.761455 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.761508 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.761566 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.761623 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.761679 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.761739 kernel: pci 0000:00:15.6: PME# supported 
from D0 D3hot D3cold Sep 9 00:45:25.761797 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.761850 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.761906 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.761996 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762054 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.762107 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762163 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.762217 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762274 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.762330 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762386 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.762440 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762495 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.762548 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762603 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.762659 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762720 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.762773 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762828 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.762881 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.762966 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.763025 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.764070 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.764134 kernel: pci 0000:00:17.2: PME# 
supported from D0 D3hot D3cold Sep 9 00:45:25.764194 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.764248 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.764307 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.764365 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.764423 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.764478 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.764535 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.764588 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.764647 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.764700 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.764759 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.764813 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.764872 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.764927 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.766028 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.766089 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.766153 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.766207 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.766264 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.766317 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.766375 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.766427 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.766487 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.766540 kernel: pci 0000:00:18.6: 
PME# supported from D0 D3hot D3cold Sep 9 00:45:25.766597 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Sep 9 00:45:25.766650 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.766714 kernel: pci_bus 0000:01: extended config space not accessible Sep 9 00:45:25.766771 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 9 00:45:25.766826 kernel: pci_bus 0000:02: extended config space not accessible Sep 9 00:45:25.766837 kernel: acpiphp: Slot [32] registered Sep 9 00:45:25.766844 kernel: acpiphp: Slot [33] registered Sep 9 00:45:25.766850 kernel: acpiphp: Slot [34] registered Sep 9 00:45:25.766856 kernel: acpiphp: Slot [35] registered Sep 9 00:45:25.766862 kernel: acpiphp: Slot [36] registered Sep 9 00:45:25.766868 kernel: acpiphp: Slot [37] registered Sep 9 00:45:25.766873 kernel: acpiphp: Slot [38] registered Sep 9 00:45:25.766879 kernel: acpiphp: Slot [39] registered Sep 9 00:45:25.766887 kernel: acpiphp: Slot [40] registered Sep 9 00:45:25.766893 kernel: acpiphp: Slot [41] registered Sep 9 00:45:25.766898 kernel: acpiphp: Slot [42] registered Sep 9 00:45:25.766904 kernel: acpiphp: Slot [43] registered Sep 9 00:45:25.766910 kernel: acpiphp: Slot [44] registered Sep 9 00:45:25.766916 kernel: acpiphp: Slot [45] registered Sep 9 00:45:25.766922 kernel: acpiphp: Slot [46] registered Sep 9 00:45:25.766928 kernel: acpiphp: Slot [47] registered Sep 9 00:45:25.766941 kernel: acpiphp: Slot [48] registered Sep 9 00:45:25.766948 kernel: acpiphp: Slot [49] registered Sep 9 00:45:25.766955 kernel: acpiphp: Slot [50] registered Sep 9 00:45:25.766961 kernel: acpiphp: Slot [51] registered Sep 9 00:45:25.766967 kernel: acpiphp: Slot [52] registered Sep 9 00:45:25.766973 kernel: acpiphp: Slot [53] registered Sep 9 00:45:25.766979 kernel: acpiphp: Slot [54] registered Sep 9 00:45:25.766984 kernel: acpiphp: Slot [55] registered Sep 9 00:45:25.766990 kernel: acpiphp: Slot [56] registered Sep 9 00:45:25.766996 kernel: acpiphp: Slot [57] 
registered Sep 9 00:45:25.767002 kernel: acpiphp: Slot [58] registered Sep 9 00:45:25.767009 kernel: acpiphp: Slot [59] registered Sep 9 00:45:25.767015 kernel: acpiphp: Slot [60] registered Sep 9 00:45:25.767021 kernel: acpiphp: Slot [61] registered Sep 9 00:45:25.767027 kernel: acpiphp: Slot [62] registered Sep 9 00:45:25.767033 kernel: acpiphp: Slot [63] registered Sep 9 00:45:25.767088 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Sep 9 00:45:25.767140 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 9 00:45:25.767192 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 9 00:45:25.767243 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 9 00:45:25.767298 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Sep 9 00:45:25.767351 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Sep 9 00:45:25.767403 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Sep 9 00:45:25.767454 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Sep 9 00:45:25.767505 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Sep 9 00:45:25.767626 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Sep 9 00:45:25.767685 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Sep 9 00:45:25.767741 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Sep 9 00:45:25.767794 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Sep 9 00:45:25.767866 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 9 00:45:25.767927 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 9 00:45:25.770088 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 9 00:45:25.770152 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 9 00:45:25.770223 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 9 00:45:25.770294 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 9 00:45:25.770349 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 9 00:45:25.770401 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 9 00:45:25.770455 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 9 00:45:25.770511 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 9 00:45:25.770570 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 9 00:45:25.770636 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 9 00:45:25.770692 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 9 00:45:25.770766 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 9 00:45:25.770832 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 9 00:45:25.770888 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 9 00:45:25.771093 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 9 00:45:25.771170 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 9 00:45:25.771227 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 9 00:45:25.771295 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 9 00:45:25.771356 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 9 00:45:25.771415 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 9 00:45:25.771475 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 9 00:45:25.771528 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 9 00:45:25.771581 kernel: pci 0000:00:15.6: bridge window [mem 
0xe6400000-0xe64fffff 64bit pref] Sep 9 00:45:25.771638 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 9 00:45:25.771689 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 9 00:45:25.771741 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 9 00:45:25.771801 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Sep 9 00:45:25.771857 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Sep 9 00:45:25.771910 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Sep 9 00:45:25.773093 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Sep 9 00:45:25.773156 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Sep 9 00:45:25.773216 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Sep 9 00:45:25.773271 kernel: pci 0000:0b:00.0: supports D1 D2 Sep 9 00:45:25.773326 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 9 00:45:25.773379 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 9 00:45:25.773433 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 9 00:45:25.773486 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 9 00:45:25.773538 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 9 00:45:25.773594 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 9 00:45:25.773650 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 9 00:45:25.773709 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 9 00:45:25.773763 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 9 00:45:25.773819 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 9 00:45:25.773871 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 9 00:45:25.773924 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 9 00:45:25.773993 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 9 00:45:25.774053 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 9 00:45:25.774106 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 9 00:45:25.774159 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 9 00:45:25.774215 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 9 00:45:25.774279 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 9 00:45:25.774332 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 9 00:45:25.774387 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 9 00:45:25.774440 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 9 00:45:25.774496 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 9 00:45:25.774552 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 9 00:45:25.774604 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 9 00:45:25.774657 kernel: pci 0000:00:16.6: bridge window [mem 
0xe6300000-0xe63fffff 64bit pref] Sep 9 00:45:25.774712 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 9 00:45:25.774765 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 9 00:45:25.774818 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 9 00:45:25.774872 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 9 00:45:25.774927 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 9 00:45:25.775080 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 9 00:45:25.775135 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 9 00:45:25.775190 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 9 00:45:25.775244 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 9 00:45:25.775303 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 9 00:45:25.775356 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 9 00:45:25.775421 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 9 00:45:25.775495 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 9 00:45:25.775560 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 9 00:45:25.775624 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 9 00:45:25.775697 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 9 00:45:25.775760 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 9 00:45:25.775818 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 9 00:45:25.775882 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 9 00:45:25.779979 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 9 00:45:25.780076 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 9 00:45:25.780140 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 9 00:45:25.780211 kernel: pci 0000:00:17.5: bridge window [mem 
0xfbf00000-0xfbffffff] Sep 9 00:45:25.780275 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 9 00:45:25.780355 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 9 00:45:25.780427 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 9 00:45:25.780511 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 9 00:45:25.780594 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 9 00:45:25.780678 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 9 00:45:25.780752 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 9 00:45:25.780825 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 9 00:45:25.780900 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 9 00:45:25.782982 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 9 00:45:25.783040 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 9 00:45:25.783096 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 9 00:45:25.783153 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 9 00:45:25.783220 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 9 00:45:25.783281 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 9 00:45:25.783336 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 9 00:45:25.783389 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 9 00:45:25.783441 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 9 00:45:25.783495 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 9 00:45:25.783548 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 9 00:45:25.783604 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 9 00:45:25.783659 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 9 00:45:25.783725 kernel: pci 0000:00:18.4: bridge window [mem 
0xfc200000-0xfc2fffff] Sep 9 00:45:25.783779 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 9 00:45:25.783833 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 9 00:45:25.783886 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 9 00:45:25.783948 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 9 00:45:25.784007 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 9 00:45:25.784063 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 9 00:45:25.784115 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 9 00:45:25.784169 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 9 00:45:25.784221 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 9 00:45:25.784274 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 9 00:45:25.784284 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Sep 9 00:45:25.784290 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Sep 9 00:45:25.784296 kernel: ACPI: PCI: Interrupt link LNKB disabled Sep 9 00:45:25.784304 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 00:45:25.784310 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Sep 9 00:45:25.784316 kernel: iommu: Default domain type: Translated Sep 9 00:45:25.784323 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 00:45:25.784329 kernel: PCI: Using ACPI for IRQ routing Sep 9 00:45:25.784335 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 00:45:25.784341 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Sep 9 00:45:25.784347 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Sep 9 00:45:25.784400 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Sep 9 00:45:25.784455 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Sep 9 00:45:25.784506 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: 
decodes=io+mem,owns=io+mem,locks=none Sep 9 00:45:25.784515 kernel: vgaarb: loaded Sep 9 00:45:25.784522 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Sep 9 00:45:25.784528 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Sep 9 00:45:25.784534 kernel: clocksource: Switched to clocksource tsc-early Sep 9 00:45:25.784540 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 00:45:25.784547 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 00:45:25.784553 kernel: pnp: PnP ACPI init Sep 9 00:45:25.784612 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Sep 9 00:45:25.784662 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Sep 9 00:45:25.784710 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Sep 9 00:45:25.784761 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Sep 9 00:45:25.784820 kernel: pnp 00:06: [dma 2] Sep 9 00:45:25.784876 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Sep 9 00:45:25.787947 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Sep 9 00:45:25.788045 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Sep 9 00:45:25.788057 kernel: pnp: PnP ACPI: found 8 devices Sep 9 00:45:25.788069 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 00:45:25.788081 kernel: NET: Registered PF_INET protocol family Sep 9 00:45:25.788092 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 00:45:25.788102 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 9 00:45:25.788109 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 00:45:25.788115 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 9 00:45:25.788125 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 9 00:45:25.788131 kernel: TCP: Hash 
tables configured (established 16384 bind 16384) Sep 9 00:45:25.788140 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 9 00:45:25.788149 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 9 00:45:25.788158 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 00:45:25.788169 kernel: NET: Registered PF_XDP protocol family Sep 9 00:45:25.788263 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Sep 9 00:45:25.788331 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 9 00:45:25.788400 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 9 00:45:25.788479 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 9 00:45:25.788551 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 9 00:45:25.788642 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Sep 9 00:45:25.788735 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Sep 9 00:45:25.788816 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Sep 9 00:45:25.788878 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Sep 9 00:45:25.788983 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Sep 9 00:45:25.789057 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Sep 9 00:45:25.789141 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Sep 9 00:45:25.789221 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Sep 9 00:45:25.789281 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Sep 9 00:45:25.789335 kernel: pci 
0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Sep 9 00:45:25.789416 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Sep 9 00:45:25.789507 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Sep 9 00:45:25.789574 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Sep 9 00:45:25.789630 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Sep 9 00:45:25.789687 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Sep 9 00:45:25.789768 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Sep 9 00:45:25.789829 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Sep 9 00:45:25.789908 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Sep 9 00:45:25.792825 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Sep 9 00:45:25.792899 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Sep 9 00:45:25.793028 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.793105 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.793172 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.793236 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.793298 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.793354 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.793411 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.793478 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.793554 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.793612 kernel: pci 0000:00:15.7: BAR 13: failed to assign 
[io size 0x1000] Sep 9 00:45:25.793673 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.793756 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.793834 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.793898 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.793985 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.794046 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.794110 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.794179 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.794253 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.794326 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.794391 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.794445 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.794499 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.794557 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.794625 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.794707 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.794777 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.794832 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.794889 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.795296 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.795370 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.795443 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.795517 kernel: pci 
0000:00:18.3: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.795602 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.795659 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.795718 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.795784 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.795844 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.795913 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.796013 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.796086 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.796156 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.796221 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.796284 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.796349 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.796415 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.796479 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.796542 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.796615 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.796707 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.796781 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.796839 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.796893 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.797018 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.797097 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Sep 9 
00:45:25.797161 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.797216 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.797308 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.797424 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.797496 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.797570 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.797627 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.797701 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.797781 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.797853 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.797908 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.798033 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.798115 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.798191 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.798269 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.798324 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.798378 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.798432 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.798485 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.798548 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.798629 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.798694 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.798757 kernel: pci 0000:00:15.6: BAR 13: failed to 
assign [io size 0x1000] Sep 9 00:45:25.798816 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.798874 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.800489 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.800555 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.800617 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Sep 9 00:45:25.800675 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Sep 9 00:45:25.800735 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 9 00:45:25.800793 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Sep 9 00:45:25.800859 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 9 00:45:25.800912 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 9 00:45:25.801003 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 9 00:45:25.801075 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Sep 9 00:45:25.801134 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 9 00:45:25.801188 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 9 00:45:25.801242 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 9 00:45:25.801295 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Sep 9 00:45:25.801370 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 9 00:45:25.801425 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 9 00:45:25.801505 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 9 00:45:25.801571 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 9 00:45:25.801633 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 9 00:45:25.801694 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 9 00:45:25.801748 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 9 
00:45:25.801827 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 9 00:45:25.801887 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 9 00:45:25.802002 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 9 00:45:25.802058 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 9 00:45:25.802117 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 9 00:45:25.802171 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 9 00:45:25.802244 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 9 00:45:25.802312 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 9 00:45:25.802400 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 9 00:45:25.802468 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 9 00:45:25.802527 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 9 00:45:25.802580 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 9 00:45:25.802633 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 9 00:45:25.802688 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 9 00:45:25.802775 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 9 00:45:25.802853 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 9 00:45:25.802921 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Sep 9 00:45:25.803005 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 9 00:45:25.803059 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 9 00:45:25.803117 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 9 00:45:25.803170 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Sep 9 00:45:25.803242 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 9 00:45:25.803297 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] 
Sep 9 00:45:25.803363 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Sep 9 00:45:25.803448 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Sep 9 00:45:25.803523 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Sep 9 00:45:25.803579 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Sep 9 00:45:25.803632 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Sep 9 00:45:25.803686 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Sep 9 00:45:25.803762 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Sep 9 00:45:25.803824 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Sep 9 00:45:25.803881 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Sep 9 00:45:25.804002 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Sep 9 00:45:25.804060 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Sep 9 00:45:25.804114 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Sep 9 00:45:25.804174 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Sep 9 00:45:25.804228 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Sep 9 00:45:25.804302 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Sep 9 00:45:25.804372 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Sep 9 00:45:25.804425 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Sep 9 00:45:25.804478 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Sep 9 00:45:25.804534 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Sep 9 00:45:25.804588 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Sep 9 00:45:25.804657 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Sep 9 00:45:25.804734 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Sep 9 00:45:25.804795 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Sep 9 00:45:25.804849 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Sep 9 00:45:25.804906 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Sep 9 00:45:25.805032 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Sep 9 00:45:25.805090 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Sep 9 00:45:25.805143 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Sep 9 00:45:25.805227 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Sep 9 00:45:25.805294 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Sep 9 00:45:25.805348 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Sep 9 00:45:25.805400 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Sep 9 00:45:25.805452 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Sep 9 00:45:25.805546 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Sep 9 00:45:25.805627 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Sep 9 00:45:25.805691 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Sep 9 00:45:25.805748 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Sep 9 00:45:25.805801 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Sep 9 00:45:25.805855 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Sep 9 00:45:25.805910 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Sep 9 00:45:25.806051 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Sep 9 00:45:25.806128 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Sep 9 00:45:25.806191 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Sep 9 00:45:25.806244 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Sep 9 00:45:25.806301 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Sep 9 00:45:25.806355 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Sep 9 00:45:25.806407 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Sep 9 00:45:25.806460 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Sep 9 00:45:25.806538 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Sep 9 00:45:25.806600 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Sep 9 00:45:25.806670 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Sep 9 00:45:25.806743 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Sep 9 00:45:25.806807 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Sep 9 00:45:25.806865 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Sep 9 00:45:25.806917 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Sep 9 00:45:25.807024 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Sep 9 00:45:25.807102 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Sep 9 00:45:25.807175 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Sep 9 00:45:25.807236 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Sep 9 00:45:25.807291 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Sep 9 00:45:25.807345 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Sep 9 00:45:25.807398 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Sep 9 00:45:25.807452 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Sep 9 00:45:25.807524 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Sep 9 00:45:25.807597 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Sep 9 00:45:25.807660 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Sep 9 00:45:25.807731 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Sep 9 00:45:25.807786 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Sep 9 00:45:25.807840 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Sep 9 00:45:25.807894 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Sep 9 00:45:25.808012 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Sep 9 00:45:25.808084 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Sep 9 00:45:25.808153 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Sep 9 00:45:25.808215 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Sep 9 00:45:25.808269 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
Sep 9 00:45:25.808317 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window]
Sep 9 00:45:25.808375 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window]
Sep 9 00:45:25.808447 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window]
Sep 9 00:45:25.808516 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window]
Sep 9 00:45:25.808572 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff]
Sep 9 00:45:25.808630 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
Sep 9 00:45:25.808688 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
Sep 9 00:45:25.808737 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
Sep 9 00:45:25.808801 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window]
Sep 9 00:45:25.808857 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window]
Sep 9 00:45:25.808909 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window]
Sep 9 00:45:25.808974 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window]
Sep 9 00:45:25.809031 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff]
Sep 9 00:45:25.809085 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
Sep 9 00:45:25.809150 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
Sep 9 00:45:25.809205 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff]
Sep 9 00:45:25.809274 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
Sep 9 00:45:25.809330 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
Sep 9 00:45:25.809382 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff]
Sep 9 00:45:25.809434 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
Sep 9 00:45:25.809501 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
Sep 9 00:45:25.809559 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
Sep 9 00:45:25.809617 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
Sep 9 00:45:25.809673 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
Sep 9 00:45:25.809732 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
Sep 9 00:45:25.809787 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
Sep 9 00:45:25.809853 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
Sep 9 00:45:25.809926 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
Sep 9 00:45:25.810019 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
Sep 9 00:45:25.810075 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
Sep 9 00:45:25.810132 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
Sep 9 00:45:25.810189 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]
Sep 9 00:45:25.810241 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
Sep 9 00:45:25.810290 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
Sep 9 00:45:25.810369 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff]
Sep 9 00:45:25.810428 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
Sep 9 00:45:25.810477 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
Sep 9 00:45:25.810534 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff]
Sep 9 00:45:25.810607 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
Sep 9 00:45:25.810684 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
Sep 9 00:45:25.810745 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
Sep 9 00:45:25.810803 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
Sep 9 00:45:25.810859 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
Sep 9 00:45:25.810913 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
Sep 9 00:45:25.811024 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
Sep 9 00:45:25.811083 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
Sep 9 00:45:25.811144 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
Sep 9 00:45:25.811203 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
Sep 9 00:45:25.811275 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
Sep 9 00:45:25.811325 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
Sep 9 00:45:25.811377 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff]
Sep 9 00:45:25.811431 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
Sep 9 00:45:25.811483 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
Sep 9 00:45:25.811557 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff]
Sep 9 00:45:25.811611 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
Sep 9 00:45:25.811659 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
Sep 9 00:45:25.811719 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff]
Sep 9 00:45:25.811778 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
Sep 9 00:45:25.811836 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
Sep 9 00:45:25.811889 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
Sep 9 00:45:25.811971 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
Sep 9 00:45:25.812031 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
Sep 9 00:45:25.812093 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
Sep 9 00:45:25.812174 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
Sep 9 00:45:25.812231 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
Sep 9 00:45:25.812285 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
Sep 9 00:45:25.812338 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
Sep 9 00:45:25.812391 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
Sep 9 00:45:25.812439 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
Sep 9 00:45:25.812500 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
Sep 9 00:45:25.812559 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
Sep 9 00:45:25.812629 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
Sep 9 00:45:25.812684 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff]
Sep 9 00:45:25.812739 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
Sep 9 00:45:25.812787 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
Sep 9 00:45:25.812840 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
Sep 9 00:45:25.812892 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
Sep 9 00:45:25.812980 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
Sep 9 00:45:25.813042 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
Sep 9 00:45:25.813100 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
Sep 9 00:45:25.813150 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
Sep 9 00:45:25.813205 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
Sep 9 00:45:25.813262 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
Sep 9 00:45:25.813315 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
Sep 9 00:45:25.813364 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
Sep 9 00:45:25.813421 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
Sep 9 00:45:25.813491 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
Sep 9 00:45:25.813569 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 9 00:45:25.813583 kernel: PCI: CLS 32 bytes, default 64
Sep 9 00:45:25.813590 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 9 00:45:25.813597 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Sep 9 00:45:25.813603 kernel: clocksource: Switched to clocksource tsc
Sep 9 00:45:25.813610 kernel: Initialise system trusted keyrings
Sep 9 00:45:25.813616 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 9 00:45:25.813622 kernel: Key type asymmetric registered
Sep 9 00:45:25.813629 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:45:25.813635 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 9 00:45:25.813644 kernel: io scheduler mq-deadline registered
Sep 9 00:45:25.813650 kernel: io scheduler kyber registered
Sep 9 00:45:25.813656 kernel: io scheduler bfq registered
Sep 9 00:45:25.813740 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
Sep 9 00:45:25.813813 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.813897 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Sep 9 00:45:25.814000 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.814059 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Sep 9 00:45:25.814118 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.814186 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Sep 9 00:45:25.814253 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.814322 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Sep 9 00:45:25.814385 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.814441 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Sep 9 00:45:25.814499 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.814556 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Sep 9 00:45:25.814627 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.814699 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Sep 9 00:45:25.814776 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.814840 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Sep 9 00:45:25.814895 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815037 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Sep 9 00:45:25.815113 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815178 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Sep 9 00:45:25.815257 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815312 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Sep 9 00:45:25.815370 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815425 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Sep 9 00:45:25.815480 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815533 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Sep 9 00:45:25.815596 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815656 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Sep 9 00:45:25.815747 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815808 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Sep 9 00:45:25.815863 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.815920 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Sep 9 00:45:25.816043 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.816126 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Sep 9 00:45:25.816211 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.816301 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Sep 9 00:45:25.816378 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.816466 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Sep 9 00:45:25.816550 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.816612 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Sep 9 00:45:25.816682 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.816745 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Sep 9 00:45:25.816799 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.816854 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Sep 9 00:45:25.816907 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817024 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Sep 9 00:45:25.817084 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817138 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Sep 9 00:45:25.817192 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817248 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Sep 9 00:45:25.817322 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817439 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Sep 9 00:45:25.817525 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817581 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Sep 9 00:45:25.817636 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817697 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Sep 9 00:45:25.817756 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817836 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Sep 9 00:45:25.817899 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.817970 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Sep 9 00:45:25.818042 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.818103 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Sep 9 00:45:25.818188 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 9 00:45:25.818199 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 00:45:25.818206 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:45:25.818213 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 00:45:25.818219 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Sep 9 00:45:25.818225 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 00:45:25.818232 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 00:45:25.818289 kernel: rtc_cmos 00:01: registered as rtc0
Sep 9 00:45:25.818342 kernel: rtc_cmos 00:01: setting system clock to 2025-09-09T00:45:25 UTC (1757378725)
Sep 9 00:45:25.818392 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Sep 9 00:45:25.818405 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 9 00:45:25.818412 kernel: intel_pstate: CPU model not supported
Sep 9 00:45:25.818418 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:45:25.818425 kernel: Segment Routing with IPv6
Sep 9 00:45:25.818431 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:45:25.818440 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:45:25.818449 kernel: Key type dns_resolver registered
Sep 9 00:45:25.818455 kernel: IPI shorthand broadcast: enabled
Sep 9 00:45:25.818466 kernel: sched_clock: Marking stable (964003644, 238433203)->(1269801772, -67364925)
Sep 9 00:45:25.818477 kernel: registered taskstats version 1
Sep 9 00:45:25.818488 kernel: Loading compiled-in X.509 certificates
Sep 9 00:45:25.818500 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: cc5240ef94b546331b2896cdc739274c03278c51'
Sep 9 00:45:25.818510 kernel: Key type .fscrypt registered
Sep 9 00:45:25.818518 kernel: Key type fscrypt-provisioning registered
Sep 9 00:45:25.818524 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:45:25.818533 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:45:25.818544 kernel: ima: No architecture policies found
Sep 9 00:45:25.818554 kernel: clk: Disabling unused clocks
Sep 9 00:45:25.818565 kernel: Freeing unused kernel image (initmem) memory: 42880K
Sep 9 00:45:25.818577 kernel: Write protecting the kernel read-only data: 36864k
Sep 9 00:45:25.818588 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 9 00:45:25.818594 kernel: Run /init as init process
Sep 9 00:45:25.818601 kernel: with arguments:
Sep 9 00:45:25.818607 kernel: /init
Sep 9 00:45:25.818616 kernel: with environment:
Sep 9 00:45:25.818622 kernel: HOME=/
Sep 9 00:45:25.818628 kernel: TERM=linux
Sep 9 00:45:25.818635 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:45:25.818643 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 9 00:45:25.818651 systemd[1]: Detected virtualization vmware.
Sep 9 00:45:25.818658 systemd[1]: Detected architecture x86-64.
Sep 9 00:45:25.818664 systemd[1]: Running in initrd.
Sep 9 00:45:25.818672 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:45:25.818679 systemd[1]: Hostname set to .
Sep 9 00:45:25.818685 systemd[1]: Initializing machine ID from random generator.
Sep 9 00:45:25.818692 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:45:25.818699 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:45:25.818709 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:45:25.818717 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 00:45:25.818724 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:45:25.818732 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 00:45:25.818738 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 00:45:25.818746 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 00:45:25.818753 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 00:45:25.818760 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:45:25.818766 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:45:25.818774 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:45:25.818783 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:45:25.818790 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:45:25.818797 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:45:25.818809 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:45:25.818821 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:45:25.818832 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 00:45:25.818844 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 9 00:45:25.818856 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:45:25.818865 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:45:25.818871 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:45:25.818878 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:45:25.818884 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 00:45:25.818891 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:45:25.818897 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 00:45:25.818904 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:45:25.818911 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:45:25.818917 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:45:25.818964 systemd-journald[216]: Collecting audit messages is disabled.
Sep 9 00:45:25.818983 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:45:25.818994 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 00:45:25.819003 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:45:25.819013 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:45:25.819020 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 00:45:25.819027 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:45:25.819036 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:45:25.819045 kernel: Bridge firewalling registered
Sep 9 00:45:25.819052 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:45:25.819059 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:45:25.819066 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:45:25.819072 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:45:25.819083 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:45:25.819095 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:45:25.819108 systemd-journald[216]: Journal started
Sep 9 00:45:25.819130 systemd-journald[216]: Runtime Journal (/run/log/journal/953cef3692ca4ad9a8128844f0d447fe) is 4.8M, max 38.6M, 33.8M free.
Sep 9 00:45:25.770223 systemd-modules-load[217]: Inserted module 'overlay'
Sep 9 00:45:25.800479 systemd-modules-load[217]: Inserted module 'br_netfilter'
Sep 9 00:45:25.822947 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:45:25.829290 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:45:25.830022 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:45:25.832032 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 00:45:25.834331 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:45:25.837164 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:45:25.842015 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:45:25.843729 dracut-cmdline[245]: dracut-dracut-053
Sep 9 00:45:25.845360 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29
Sep 9 00:45:25.862644 systemd-resolved[253]: Positive Trust Anchors:
Sep 9 00:45:25.862655 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:45:25.862686 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:45:25.865540 systemd-resolved[253]: Defaulting to hostname 'linux'.
Sep 9 00:45:25.866264 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:45:25.866531 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:45:25.902961 kernel: SCSI subsystem initialized
Sep 9 00:45:25.912948 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:45:25.919977 kernel: iscsi: registered transport (tcp)
Sep 9 00:45:25.935950 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:45:25.935987 kernel: QLogic iSCSI HBA Driver
Sep 9 00:45:25.956333 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:45:25.960155 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 00:45:25.976959 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:45:25.977006 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:45:25.977017 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 9 00:45:26.009956 kernel: raid6: avx2x4 gen() 44556 MB/s
Sep 9 00:45:26.026953 kernel: raid6: avx2x2 gen() 51058 MB/s
Sep 9 00:45:26.044135 kernel: raid6: avx2x1 gen() 42855 MB/s
Sep 9 00:45:26.044173 kernel: raid6: using algorithm avx2x2 gen() 51058 MB/s
Sep 9 00:45:26.062145 kernel: raid6: .... xor() 30921 MB/s, rmw enabled
Sep 9 00:45:26.062185 kernel: raid6: using avx2x2 recovery algorithm
Sep 9 00:45:26.076965 kernel: xor: automatically using best checksumming function avx
Sep 9 00:45:26.181953 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 00:45:26.187866 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:45:26.192073 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:45:26.202194 systemd-udevd[434]: Using default interface naming scheme 'v255'.
Sep 9 00:45:26.205257 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:45:26.213396 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 00:45:26.222975 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation
Sep 9 00:45:26.245143 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:45:26.249053 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:45:26.330819 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:45:26.339051 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 00:45:26.348157 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:45:26.350147 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:45:26.350336 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:45:26.350598 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:45:26.355074 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 00:45:26.362690 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:45:26.405111 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Sep 9 00:45:26.405146 kernel: vmw_pvscsi: using 64bit dma
Sep 9 00:45:26.414917 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Sep 9 00:45:26.414960 kernel: vmw_pvscsi: max_id: 16
Sep 9 00:45:26.414969 kernel: vmw_pvscsi: setting ring_pages to 8
Sep 9 00:45:26.420992 kernel: vmw_pvscsi: enabling reqCallThreshold
Sep 9 00:45:26.421024 kernel: vmw_pvscsi: driver-based request coalescing enabled
Sep 9 00:45:26.421033 kernel: vmw_pvscsi: using MSI-X
Sep 9 00:45:26.424990 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Sep 9 00:45:26.427954 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Sep 9 00:45:26.427991 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 00:45:26.429948 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Sep 9 00:45:26.432215 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:45:26.435349 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Sep 9 00:45:26.435444 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Sep 9 00:45:26.432286 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:45:26.435481 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:45:26.435589 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:45:26.435681 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:45:26.435806 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:45:26.443309 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 9 00:45:26.443338 kernel: AES CTR mode by8 optimization enabled
Sep 9 00:45:26.443943 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Sep 9 00:45:26.446362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:45:26.447945 kernel: libata version 3.00 loaded.
Sep 9 00:45:26.452335 kernel: ata_piix 0000:00:07.1: version 2.13
Sep 9 00:45:26.452443 kernel: scsi host1: ata_piix
Sep 9 00:45:26.452513 kernel: scsi host2: ata_piix
Sep 9 00:45:26.455085 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Sep 9 00:45:26.455127 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Sep 9 00:45:26.464897 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:45:26.469076 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:45:26.481741 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:45:26.624952 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Sep 9 00:45:26.630987 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Sep 9 00:45:26.643483 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Sep 9 00:45:26.643613 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 9 00:45:26.643680 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Sep 9 00:45:26.645166 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Sep 9 00:45:26.645254 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Sep 9 00:45:26.649953 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 00:45:26.650951 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 9 00:45:26.653266 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Sep 9 00:45:26.653378 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 9 00:45:26.666968 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 9 00:45:26.724138 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (482)
Sep 9 00:45:26.725032 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Sep 9 00:45:26.729002 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Sep 9 00:45:26.731946 kernel: BTRFS: device fsid 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (480)
Sep 9 00:45:26.736458 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Sep 9 00:45:26.740890 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Sep 9 00:45:26.741071 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Sep 9 00:45:26.745029 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 00:45:26.799963 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 00:45:26.806145 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 00:45:27.835958 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 00:45:27.836308 disk-uuid[590]: The operation has completed successfully.
Sep 9 00:45:27.876472 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 00:45:27.876563 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 00:45:27.881026 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 00:45:27.885367 sh[609]: Success
Sep 9 00:45:27.900955 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 9 00:45:27.990915 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 00:45:27.998008 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 00:45:27.999969 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 00:45:28.051559 kernel: BTRFS info (device dm-0): first mount of filesystem 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a
Sep 9 00:45:28.051600 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:45:28.051610 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 9 00:45:28.052814 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 00:45:28.053776 kernel: BTRFS info (device dm-0): using free space tree
Sep 9 00:45:28.061958 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 9 00:45:28.063125 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 00:45:28.067112 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Sep 9 00:45:28.068763 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 00:45:28.124727 kernel: BTRFS info (device sda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:45:28.124772 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:45:28.124791 kernel: BTRFS info (device sda6): using free space tree
Sep 9 00:45:28.130955 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 9 00:45:28.141148 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 9 00:45:28.142064 kernel: BTRFS info (device sda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:45:28.145686 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 00:45:28.151126 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 00:45:28.170975 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Sep 9 00:45:28.175078 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 00:45:28.255387 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:45:28.264767 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:45:28.275601 systemd-networkd[799]: lo: Link UP
Sep 9 00:45:28.275607 systemd-networkd[799]: lo: Gained carrier
Sep 9 00:45:28.276354 systemd-networkd[799]: Enumeration completed
Sep 9 00:45:28.276626 systemd-networkd[799]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Sep 9 00:45:28.277408 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:45:28.277596 systemd[1]: Reached target network.target - Network.
Sep 9 00:45:28.280019 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Sep 9 00:45:28.280221 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Sep 9 00:45:28.281351 systemd-networkd[799]: ens192: Link UP
Sep 9 00:45:28.281358 systemd-networkd[799]: ens192: Gained carrier
Sep 9 00:45:28.286735 ignition[668]: Ignition 2.19.0
Sep 9 00:45:28.287013 ignition[668]: Stage: fetch-offline
Sep 9 00:45:28.287148 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:45:28.287283 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 9 00:45:28.287496 ignition[668]: parsed url from cmdline: ""
Sep 9 00:45:28.287533 ignition[668]: no config URL provided
Sep 9 00:45:28.287657 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 00:45:28.287806 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Sep 9 00:45:28.288220 ignition[668]: config successfully fetched
Sep 9 00:45:28.288248 ignition[668]: parsing config with SHA512: 9bca0ea0b62e526de0392b3c04e01bbc40151c7ed0c17bcb21fe2589aeebabe2b595169904593a9ae657b1a1ff9ff4c6f997f782ada1ece71540e4c00be1111a
Sep 9 00:45:28.291155 unknown[668]: fetched base config from "system"
Sep 9 00:45:28.291303 unknown[668]: fetched user config from "vmware"
Sep 9 00:45:28.291715 ignition[668]: fetch-offline: fetch-offline passed
Sep 9 00:45:28.291881 ignition[668]: Ignition finished successfully
Sep 9 00:45:28.292708 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:45:28.293260 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 00:45:28.299097 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 00:45:28.308157 ignition[804]: Ignition 2.19.0
Sep 9 00:45:28.308164 ignition[804]: Stage: kargs
Sep 9 00:45:28.308276 ignition[804]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:45:28.308282 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 9 00:45:28.308945 ignition[804]: kargs: kargs passed
Sep 9 00:45:28.308991 ignition[804]: Ignition finished successfully
Sep 9 00:45:28.310225 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 00:45:28.314073 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 00:45:28.322322 ignition[810]: Ignition 2.19.0
Sep 9 00:45:28.322333 ignition[810]: Stage: disks
Sep 9 00:45:28.322442 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:45:28.322449 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 9 00:45:28.323035 ignition[810]: disks: disks passed
Sep 9 00:45:28.323066 ignition[810]: Ignition finished successfully
Sep 9 00:45:28.324044 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 00:45:28.324345 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 00:45:28.324553 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 00:45:28.324785 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:45:28.324998 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:45:28.325201 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:45:28.330021 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 00:45:28.353314 systemd-fsck[819]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 9 00:45:28.354874 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 00:45:28.359045 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 00:45:28.430953 kernel: EXT4-fs (sda9): mounted filesystem ee55a213-d578-493d-a79b-e10c399cd35c r/w with ordered data mode. Quota mode: none.
Sep 9 00:45:28.430918 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 00:45:28.431280 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:45:28.438039 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:45:28.439532 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 00:45:28.439923 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 00:45:28.440016 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 00:45:28.440033 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:45:28.444100 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 00:45:28.444735 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 00:45:28.450949 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (827)
Sep 9 00:45:28.453007 kernel: BTRFS info (device sda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:45:28.453067 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:45:28.454947 kernel: BTRFS info (device sda6): using free space tree
Sep 9 00:45:28.477104 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 9 00:45:28.478082 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:45:28.497478 initrd-setup-root[851]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 00:45:28.500470 initrd-setup-root[858]: cut: /sysroot/etc/group: No such file or directory
Sep 9 00:45:28.503413 initrd-setup-root[865]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 00:45:28.505962 initrd-setup-root[872]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 00:45:28.835929 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 00:45:28.840035 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 00:45:28.842477 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 00:45:28.845966 kernel: BTRFS info (device sda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:45:28.865920 ignition[939]: INFO : Ignition 2.19.0
Sep 9 00:45:28.865920 ignition[939]: INFO : Stage: mount
Sep 9 00:45:28.866339 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:45:28.866339 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 9 00:45:28.866724 ignition[939]: INFO : mount: mount passed
Sep 9 00:45:28.867271 ignition[939]: INFO : Ignition finished successfully
Sep 9 00:45:28.867549 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 00:45:28.871042 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 00:45:28.937258 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 00:45:29.049428 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 00:45:29.054086 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:45:29.062134 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (951)
Sep 9 00:45:29.064796 kernel: BTRFS info (device sda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:45:29.064829 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:45:29.064841 kernel: BTRFS info (device sda6): using free space tree
Sep 9 00:45:29.089981 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 9 00:45:29.093672 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:45:29.111307 ignition[968]: INFO : Ignition 2.19.0
Sep 9 00:45:29.111307 ignition[968]: INFO : Stage: files
Sep 9 00:45:29.111847 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:45:29.111847 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 9 00:45:29.112283 ignition[968]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 00:45:29.112584 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 00:45:29.112584 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 00:45:29.130548 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 00:45:29.130859 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 00:45:29.131149 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 00:45:29.131079 unknown[968]: wrote ssh authorized keys file for user: core
Sep 9 00:45:29.144812 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 9 00:45:29.144812 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 9 00:45:29.201744 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 00:45:29.372987 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 9 00:45:29.372987 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 00:45:29.372987 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 00:45:29.372987 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 00:45:29.373750 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 9 00:45:29.845456 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 00:45:29.858112 systemd-networkd[799]: ens192: Gained IPv6LL
Sep 9 00:45:30.320194 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 00:45:30.320580 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 9 00:45:30.320580 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 9 00:45:30.320580 ignition[968]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 00:45:30.328533 ignition[968]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:45:30.328705 ignition[968]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:45:30.328705 ignition[968]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 00:45:30.328705 ignition[968]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 00:45:30.328705 ignition[968]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:45:30.329286 ignition[968]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:45:30.329286 ignition[968]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 9 00:45:30.329286 ignition[968]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:45:30.419685 ignition[968]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:45:30.422371 ignition[968]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:45:30.422576 ignition[968]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:45:30.422576 ignition[968]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 00:45:30.422576 ignition[968]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 00:45:30.422576 ignition[968]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:45:30.423271 ignition[968]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:45:30.423271 ignition[968]: INFO : files: files passed
Sep 9 00:45:30.423271 ignition[968]: INFO : Ignition finished successfully
Sep 9 00:45:30.424590 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 00:45:30.428017 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 00:45:30.430012 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 00:45:30.431105 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 00:45:30.431164 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 00:45:30.437591 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:45:30.437591 initrd-setup-root-after-ignition[998]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:45:30.438179 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:45:30.438833 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:45:30.439248 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 00:45:30.441023 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 00:45:30.455159 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 00:45:30.455225 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 00:45:30.455496 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 00:45:30.455624 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 00:45:30.455819 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 00:45:30.456256 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 00:45:30.465915 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:45:30.469071 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 00:45:30.476210 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:45:30.476413 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:45:30.476681 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 00:45:30.476842 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 00:45:30.476916 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:45:30.477347 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 00:45:30.477543 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 00:45:30.477779 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 00:45:30.477962 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:45:30.478154 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 00:45:30.478517 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 00:45:30.478715 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:45:30.478929 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 00:45:30.479146 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 00:45:30.479359 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 00:45:30.479520 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 00:45:30.479583 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:45:30.479850 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:45:30.480105 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:45:30.480335 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 00:45:30.480378 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:45:30.480531 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 00:45:30.480593 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:45:30.480862 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 00:45:30.480926 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:45:30.481166 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 00:45:30.481312 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 00:45:30.485955 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:45:30.486138 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 00:45:30.486357 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 00:45:30.486516 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 00:45:30.486565 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:45:30.486722 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 00:45:30.486769 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:45:30.486955 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 00:45:30.487019 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:45:30.487259 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 00:45:30.487319 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 00:45:30.496071 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 00:45:30.496419 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 00:45:30.496523 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:45:30.499127 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 00:45:30.499242 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 00:45:30.499322 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:45:30.499488 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 00:45:30.499582 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:45:30.502653 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 00:45:30.502711 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 00:45:30.505653 ignition[1022]: INFO : Ignition 2.19.0
Sep 9 00:45:30.505653 ignition[1022]: INFO : Stage: umount
Sep 9 00:45:30.509750 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:45:30.509750 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 9 00:45:30.509750 ignition[1022]: INFO : umount: umount passed
Sep 9 00:45:30.509750 ignition[1022]: INFO : Ignition finished successfully
Sep 9 00:45:30.508102 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 00:45:30.508163 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 00:45:30.508373 systemd[1]: Stopped target network.target - Network.
Sep 9 00:45:30.508455 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 00:45:30.508482 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 00:45:30.508582 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 00:45:30.508604 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 00:45:30.508703 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 00:45:30.508726 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 00:45:30.508822 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 00:45:30.508845 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 00:45:30.509044 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 00:45:30.509180 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 00:45:30.514322 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 00:45:30.514531 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 00:45:30.515598 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 00:45:30.515808 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 00:45:30.516471 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 00:45:30.516498 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:45:30.521069 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 00:45:30.521175 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 00:45:30.521212 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:45:30.521349 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Sep 9 00:45:30.521372 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Sep 9 00:45:30.521485 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 00:45:30.521507 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:45:30.521608 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 00:45:30.521628 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:45:30.521730 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 00:45:30.521750 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:45:30.521911 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:45:30.522610 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 00:45:30.530758 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 00:45:30.530977 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 00:45:30.532327 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 00:45:30.532401 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:45:30.532616 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 00:45:30.532639 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:45:30.532745 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 00:45:30.532762 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:45:30.532854 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 00:45:30.532877 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:45:30.533120 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 00:45:30.533142 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:45:30.533448 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:45:30.533471 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:45:30.542073 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 00:45:30.542436 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 00:45:30.542475 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:45:30.542628 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:45:30.542659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:45:30.546163 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 00:45:30.546242 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 00:45:30.583549 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 00:45:30.583627 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 00:45:30.583897 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 00:45:30.584016 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 00:45:30.584042 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 00:45:30.588043 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 00:45:30.599683 systemd[1]: Switching root.
Sep 9 00:45:30.624209 systemd-journald[216]: Journal stopped
Sep 9 00:45:31.857678 systemd-journald[216]: Received SIGTERM from PID 1 (systemd).
Sep 9 00:45:31.857700 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 00:45:31.857709 kernel: SELinux: policy capability open_perms=1
Sep 9 00:45:31.857715 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 00:45:31.857720 kernel: SELinux: policy capability always_check_network=0
Sep 9 00:45:31.857725 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 00:45:31.857733 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 00:45:31.857739 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 00:45:31.857745 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 00:45:31.857751 kernel: audit: type=1403 audit(1757378731.250:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 00:45:31.857757 systemd[1]: Successfully loaded SELinux policy in 34.537ms.
Sep 9 00:45:31.857764 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.231ms.
Sep 9 00:45:31.857772 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 9 00:45:31.857780 systemd[1]: Detected virtualization vmware.
Sep 9 00:45:31.857787 systemd[1]: Detected architecture x86-64.
Sep 9 00:45:31.857793 systemd[1]: Detected first boot.
Sep 9 00:45:31.857800 systemd[1]: Initializing machine ID from random generator.
Sep 9 00:45:31.857808 zram_generator::config[1067]: No configuration found.
Sep 9 00:45:31.857816 systemd[1]: Populated /etc with preset unit settings.
Sep 9 00:45:31.857823 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 9 00:45:31.857830 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Sep 9 00:45:31.857837 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 00:45:31.857844 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 00:45:31.857850 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:45:31.857859 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 00:45:31.857866 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 00:45:31.857873 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 00:45:31.857880 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 00:45:31.857887 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 00:45:31.857894 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 00:45:31.857901 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 00:45:31.857909 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 00:45:31.857916 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:45:31.857923 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:45:31.857930 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 00:45:31.857943 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 00:45:31.857961 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 00:45:31.857969 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:45:31.857977 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 00:45:31.857987 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:45:31.857995 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 00:45:31.858003 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 00:45:31.858010 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:45:31.858018 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 00:45:31.858025 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:45:31.858032 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:45:31.858039 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:45:31.858047 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:45:31.858054 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 00:45:31.858061 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 00:45:31.858068 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:45:31.858076 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:45:31.858084 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:45:31.858091 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 00:45:31.858098 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 00:45:31.858106 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 00:45:31.858113 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 00:45:31.858120 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:45:31.858128 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 00:45:31.858135 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 00:45:31.858144 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 00:45:31.858152 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 00:45:31.858159 systemd[1]: Reached target machines.target - Containers.
Sep 9 00:45:31.858166 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 00:45:31.858173 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Sep 9 00:45:31.858181 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:45:31.858188 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 00:45:31.858195 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:45:31.858204 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 00:45:31.858211 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:45:31.858218 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 00:45:31.858225 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:45:31.858233 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 00:45:31.858240 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 00:45:31.858247 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 00:45:31.858254 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 00:45:31.858262 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 00:45:31.858270 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:45:31.858278 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:45:31.858285 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 00:45:31.858292 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 00:45:31.858300 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:45:31.858307 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 00:45:31.858314 systemd[1]: Stopped verity-setup.service.
Sep 9 00:45:31.858321 kernel: loop: module loaded
Sep 9 00:45:31.858329 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:45:31.858337 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 00:45:31.858344 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 00:45:31.858353 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 00:45:31.858459 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 00:45:31.858469 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 00:45:31.858476 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 00:45:31.858484 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 00:45:31.858492 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:45:31.858501 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 00:45:31.858509 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 00:45:31.858516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:45:31.858523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:45:31.858530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:45:31.858537 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:45:31.858544 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:45:31.858551 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:45:31.858560 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 00:45:31.858567 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:45:31.858575 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 00:45:31.858582 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 00:45:31.858589 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 00:45:31.858597 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 00:45:31.858604 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:45:31.858611 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 9 00:45:31.858618 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 00:45:31.858627 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 00:45:31.858634 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:45:31.858641 kernel: fuse: init (API version 7.39)
Sep 9 00:45:31.858648 kernel: ACPI: bus type drm_connector registered
Sep 9 00:45:31.858666 systemd-journald[1157]: Collecting audit messages is disabled.
Sep 9 00:45:31.858685 systemd-journald[1157]: Journal started
Sep 9 00:45:31.858702 systemd-journald[1157]: Runtime Journal (/run/log/journal/016a0dd781654f878926eb30dc03ee76) is 4.8M, max 38.6M, 33.8M free.
Sep 9 00:45:31.637604 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 00:45:31.654775 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 9 00:45:31.655052 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 00:45:31.859218 jq[1134]: true
Sep 9 00:45:31.859702 jq[1176]: true
Sep 9 00:45:31.878952 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 00:45:31.880947 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:45:31.887976 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 00:45:31.888010 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 00:45:31.889524 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:45:31.891834 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 00:45:31.906320 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 00:45:31.906361 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:45:31.906779 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:45:31.907965 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 00:45:31.908224 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 00:45:31.908315 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 00:45:31.908490 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 00:45:31.908720 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 00:45:31.937174 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 00:45:31.940873 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 00:45:31.941756 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 00:45:31.946799 kernel: loop0: detected capacity change from 0 to 229808
Sep 9 00:45:31.948881 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 00:45:31.949093 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 00:45:31.956115 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 9 00:45:31.965897 systemd-journald[1157]: Time spent on flushing to /var/log/journal/016a0dd781654f878926eb30dc03ee76 is 54.857ms for 1837 entries.
Sep 9 00:45:31.965897 systemd-journald[1157]: System Journal (/var/log/journal/016a0dd781654f878926eb30dc03ee76) is 8.0M, max 584.8M, 576.8M free.
Sep 9 00:45:32.028095 systemd-journald[1157]: Received client request to flush runtime journal.
Sep 9 00:45:32.028121 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 00:45:31.965903 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:45:31.973468 ignition[1177]: Ignition 2.19.0
Sep 9 00:45:31.992046 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 00:45:31.973825 ignition[1177]: deleting config from guestinfo properties
Sep 9 00:45:31.992467 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 9 00:45:31.992949 ignition[1177]: Successfully deleted config
Sep 9 00:45:31.994425 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Sep 9 00:45:32.002511 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 00:45:32.016145 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:45:32.029436 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 00:45:32.048867 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Sep 9 00:45:32.049218 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Sep 9 00:45:32.053021 kernel: loop1: detected capacity change from 0 to 142488
Sep 9 00:45:32.054996 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:45:32.065089 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:45:32.071093 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 9 00:45:32.077874 udevadm[1232]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 9 00:45:32.088971 kernel: loop2: detected capacity change from 0 to 2976
Sep 9 00:45:32.130148 kernel: loop3: detected capacity change from 0 to 140768
Sep 9 00:45:32.181952 kernel: loop4: detected capacity change from 0 to 229808
Sep 9 00:45:32.204254 kernel: loop5: detected capacity change from 0 to 142488
Sep 9 00:45:32.236953 kernel: loop6: detected capacity change from 0 to 2976
Sep 9 00:45:32.439022 kernel: loop7: detected capacity change from 0 to 140768
Sep 9 00:45:32.502403 (sd-merge)[1236]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Sep 9 00:45:32.502682 (sd-merge)[1236]: Merged extensions into '/usr'.
Sep 9 00:45:32.507750 systemd[1]: Reloading requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 00:45:32.507807 systemd[1]: Reloading...
Sep 9 00:45:32.548951 zram_generator::config[1258]: No configuration found.
Sep 9 00:45:32.659110 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 9 00:45:32.674667 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:45:32.703100 systemd[1]: Reloading finished in 194 ms.
Sep 9 00:45:32.708853 ldconfig[1180]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 00:45:32.719389 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 00:45:32.720875 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 00:45:32.722531 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 00:45:32.728106 systemd[1]: Starting ensure-sysext.service...
Sep 9 00:45:32.729033 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:45:32.730031 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:45:32.741968 systemd[1]: Reloading requested from client PID 1319 ('systemctl') (unit ensure-sysext.service)...
Sep 9 00:45:32.741981 systemd[1]: Reloading...
Sep 9 00:45:32.755641 systemd-udevd[1321]: Using default interface naming scheme 'v255'.
Sep 9 00:45:32.756456 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 00:45:32.756684 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 00:45:32.757219 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 00:45:32.757391 systemd-tmpfiles[1320]: ACLs are not supported, ignoring.
Sep 9 00:45:32.757430 systemd-tmpfiles[1320]: ACLs are not supported, ignoring.
Sep 9 00:45:32.759439 systemd-tmpfiles[1320]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 00:45:32.759447 systemd-tmpfiles[1320]: Skipping /boot
Sep 9 00:45:32.766968 systemd-tmpfiles[1320]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 00:45:32.766973 systemd-tmpfiles[1320]: Skipping /boot
Sep 9 00:45:32.803948 zram_generator::config[1358]: No configuration found.
Sep 9 00:45:32.895949 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 9 00:45:32.901091 kernel: ACPI: button: Power Button [PWRF]
Sep 9 00:45:32.914042 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1344)
Sep 9 00:45:32.919760 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 9 00:45:32.938248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:45:32.976903 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 00:45:32.977075 systemd[1]: Reloading finished in 234 ms.
Sep 9 00:45:32.986150 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:45:32.990217 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:45:33.000945 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Sep 9 00:45:33.004546 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Sep 9 00:45:33.005387 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:45:33.009271 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 9 00:45:33.011689 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 00:45:33.013068 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:45:33.017456 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:45:33.019087 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:45:33.019270 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:45:33.021923 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 00:45:33.028726 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Sep 9 00:45:33.029385 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 00:45:33.031123 kernel: Guest personality initialized and is active
Sep 9 00:45:33.031144 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 9 00:45:33.031154 kernel: Initialized host personality
Sep 9 00:45:33.032523 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:45:33.036023 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:45:33.038064 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 00:45:33.038346 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:45:33.039207 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:45:33.039297 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:45:33.040539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:45:33.040648 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:45:33.041335 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:45:33.041422 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:45:33.045995 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:45:33.052123 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:45:33.054662 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:45:33.057135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:45:33.057280 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:45:33.059993 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 00:45:33.060109 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:45:33.062743 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 00:45:33.064283 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:45:33.064423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:45:33.065292 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:45:33.077668 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Sep 9 00:45:33.072203 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 00:45:33.072405 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:45:33.072503 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:45:33.075643 systemd[1]: Finished ensure-sysext.service.
Sep 9 00:45:33.082501 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 00:45:33.084050 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 00:45:33.084328 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:45:33.084420 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:45:33.085720 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 00:45:33.088182 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:45:33.088710 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:45:33.089153 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:45:33.094359 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:45:33.095135 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 00:45:33.109134 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:45:33.120678 (udev-worker)[1354]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Sep 9 00:45:33.123884 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 00:45:33.126946 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 00:45:33.167357 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 00:45:33.169140 augenrules[1485]: No rules
Sep 9 00:45:33.173065 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 00:45:33.174051 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 9 00:45:33.186737 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 00:45:33.194227 systemd-networkd[1441]: lo: Link UP
Sep 9 00:45:33.194233 systemd-networkd[1441]: lo: Gained carrier
Sep 9 00:45:33.195041 systemd-networkd[1441]: Enumeration completed
Sep 9 00:45:33.195093 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 00:45:33.195254 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:45:33.195276 systemd-networkd[1441]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Sep 9 00:45:33.195381 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 00:45:33.196384 systemd-timesyncd[1461]: No network connectivity, watching for changes.
Sep 9 00:45:33.197633 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 9 00:45:33.197878 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 9 00:45:33.201739 systemd-networkd[1441]: ens192: Link UP Sep 9 00:45:33.201849 systemd-networkd[1441]: ens192: Gained carrier Sep 9 00:45:33.203094 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 00:45:33.205036 systemd-resolved[1442]: Positive Trust Anchors: Sep 9 00:45:33.205046 systemd-resolved[1442]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:45:33.205069 systemd-resolved[1442]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:45:33.205583 systemd-timesyncd[1461]: Network configuration changed, trying to establish connection. Sep 9 00:45:33.220231 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 9 00:45:33.222403 systemd-resolved[1442]: Defaulting to hostname 'linux'. Sep 9 00:45:33.227158 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 9 00:45:33.227323 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:45:33.227617 systemd[1]: Reached target network.target - Network. Sep 9 00:45:33.227701 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:45:33.246081 lvm[1498]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Sep 9 00:47:06.203540 systemd-timesyncd[1461]: Contacted time server 23.155.72.147:123 (1.flatcar.pool.ntp.org). Sep 9 00:47:06.203569 systemd-resolved[1442]: Clock change detected. Flushing caches. Sep 9 00:47:06.203578 systemd-timesyncd[1461]: Initial clock synchronization to Tue 2025-09-09 00:47:06.203476 UTC. Sep 9 00:47:06.230509 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 9 00:47:06.231006 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:47:06.234117 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 9 00:47:06.237226 lvm[1500]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:47:06.269935 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 9 00:47:06.458343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:47:06.466326 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:47:06.466576 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:47:06.466603 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:47:06.466756 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:47:06.466884 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:47:06.467111 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:47:06.467264 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:47:06.467373 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Sep 9 00:47:06.467477 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 00:47:06.467496 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:47:06.467577 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:47:06.468201 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 00:47:06.469322 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 00:47:06.476236 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 00:47:06.476727 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 00:47:06.476874 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:47:06.476968 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:47:06.477091 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 00:47:06.477108 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 00:47:06.477900 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 00:47:06.479134 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 00:47:06.481746 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 00:47:06.484093 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 00:47:06.484511 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 00:47:06.486670 jq[1510]: false
Sep 9 00:47:06.486727 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 00:47:06.489576 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 00:47:06.491354 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 00:47:06.493131 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 00:47:06.496201 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 00:47:06.496514 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 00:47:06.498006 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 00:47:06.498418 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 00:47:06.501099 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 00:47:06.503689 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Sep 9 00:47:06.507255 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 00:47:06.507406 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 00:47:06.520944 extend-filesystems[1511]: Found loop4
Sep 9 00:47:06.523368 extend-filesystems[1511]: Found loop5
Sep 9 00:47:06.523368 extend-filesystems[1511]: Found loop6
Sep 9 00:47:06.523368 extend-filesystems[1511]: Found loop7
Sep 9 00:47:06.523368 extend-filesystems[1511]: Found sda
Sep 9 00:47:06.523368 extend-filesystems[1511]: Found sda1
Sep 9 00:47:06.523368 extend-filesystems[1511]: Found sda2
Sep 9 00:47:06.523368 extend-filesystems[1511]: Found sda3
Sep 9 00:47:06.523368 extend-filesystems[1511]: Found usr
Sep 9 00:47:06.523368 extend-filesystems[1511]: Found sda4
Sep 9 00:47:06.523368 extend-filesystems[1511]: Found sda6
Sep 9 00:47:06.523368 extend-filesystems[1511]: Found sda7
Sep 9 00:47:06.523368 extend-filesystems[1511]: Found sda9
Sep 9 00:47:06.523368 extend-filesystems[1511]: Checking size of /dev/sda9
Sep 9 00:47:06.527584 update_engine[1517]: I20250909 00:47:06.524540 1517 main.cc:92] Flatcar Update Engine starting
Sep 9 00:47:06.531214 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 00:47:06.531350 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 00:47:06.531863 jq[1518]: true
Sep 9 00:47:06.543535 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Sep 9 00:47:06.548019 jq[1534]: true
Sep 9 00:47:06.552072 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Sep 9 00:47:06.552495 (ntainerd)[1540]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 00:47:06.553216 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 00:47:06.553344 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 00:47:06.566860 extend-filesystems[1511]: Old size kept for /dev/sda9
Sep 9 00:47:06.567060 extend-filesystems[1511]: Found sr0
Sep 9 00:47:06.569452 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 00:47:06.569564 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 00:47:06.573136 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Sep 9 00:47:06.576950 systemd-logind[1516]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 9 00:47:06.576964 systemd-logind[1516]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 9 00:47:06.580697 systemd-logind[1516]: New seat seat0.
Sep 9 00:47:06.581143 tar[1522]: linux-amd64/LICENSE
Sep 9 00:47:06.581143 tar[1522]: linux-amd64/helm
Sep 9 00:47:06.581382 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 00:47:06.597056 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1352)
Sep 9 00:47:06.615789 unknown[1539]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Sep 9 00:47:06.628118 unknown[1539]: Core dump limit set to -1
Sep 9 00:47:06.633011 dbus-daemon[1509]: [system] SELinux support is enabled
Sep 9 00:47:06.634557 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 00:47:06.635856 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 00:47:06.635873 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 00:47:06.636563 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 00:47:06.636578 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 00:47:06.639033 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 9 00:47:06.640423 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 00:47:06.641823 update_engine[1517]: I20250909 00:47:06.641796 1517 update_check_scheduler.cc:74] Next update check in 5m14s
Sep 9 00:47:06.645556 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 00:47:06.653393 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 00:47:06.653729 bash[1569]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 00:47:06.655033 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 00:47:06.655859 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 00:47:06.711073 sshd_keygen[1548]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 00:47:06.754121 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 00:47:06.761498 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 00:47:06.767173 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 00:47:06.767370 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 00:47:06.770140 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 00:47:06.776723 locksmithd[1574]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 00:47:06.794535 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 00:47:06.801179 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 00:47:06.802400 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 00:47:06.802859 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 00:47:07.004501 containerd[1540]: time="2025-09-09T00:47:07.004362337Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 9 00:47:07.013474 tar[1522]: linux-amd64/README.md
Sep 9 00:47:07.021143 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 00:47:07.022390 containerd[1540]: time="2025-09-09T00:47:07.021939467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:47:07.022775 containerd[1540]: time="2025-09-09T00:47:07.022752386Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:47:07.022775 containerd[1540]: time="2025-09-09T00:47:07.022771878Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 9 00:47:07.022808 containerd[1540]: time="2025-09-09T00:47:07.022781577Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 9 00:47:07.022875 containerd[1540]: time="2025-09-09T00:47:07.022861300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 9 00:47:07.022900 containerd[1540]: time="2025-09-09T00:47:07.022874946Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 9 00:47:07.022927 containerd[1540]: time="2025-09-09T00:47:07.022913828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:47:07.022927 containerd[1540]: time="2025-09-09T00:47:07.022924982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:47:07.023037 containerd[1540]: time="2025-09-09T00:47:07.023022681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:47:07.023059 containerd[1540]: time="2025-09-09T00:47:07.023035727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 9 00:47:07.023059 containerd[1540]: time="2025-09-09T00:47:07.023046483Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:47:07.023059 containerd[1540]: time="2025-09-09T00:47:07.023052533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 9 00:47:07.023104 containerd[1540]: time="2025-09-09T00:47:07.023095078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:47:07.023238 containerd[1540]: time="2025-09-09T00:47:07.023221974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:47:07.023304 containerd[1540]: time="2025-09-09T00:47:07.023289357Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:47:07.023304 containerd[1540]: time="2025-09-09T00:47:07.023301844Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 9 00:47:07.023366 containerd[1540]: time="2025-09-09T00:47:07.023353697Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 9 00:47:07.023410 containerd[1540]: time="2025-09-09T00:47:07.023396231Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 00:47:07.031069 containerd[1540]: time="2025-09-09T00:47:07.031037839Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 9 00:47:07.031142 containerd[1540]: time="2025-09-09T00:47:07.031083639Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 9 00:47:07.031142 containerd[1540]: time="2025-09-09T00:47:07.031095735Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 9 00:47:07.031142 containerd[1540]: time="2025-09-09T00:47:07.031105203Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 9 00:47:07.031142 containerd[1540]: time="2025-09-09T00:47:07.031113848Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 9 00:47:07.031224 containerd[1540]: time="2025-09-09T00:47:07.031211950Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 9 00:47:07.031379 containerd[1540]: time="2025-09-09T00:47:07.031365429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 9 00:47:07.031444 containerd[1540]: time="2025-09-09T00:47:07.031431441Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 9 00:47:07.031462 containerd[1540]: time="2025-09-09T00:47:07.031445393Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 9 00:47:07.031462 containerd[1540]: time="2025-09-09T00:47:07.031455023Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 9 00:47:07.031488 containerd[1540]: time="2025-09-09T00:47:07.031463418Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 9 00:47:07.031488 containerd[1540]: time="2025-09-09T00:47:07.031476739Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 9 00:47:07.031488 containerd[1540]: time="2025-09-09T00:47:07.031485576Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 9 00:47:07.031529 containerd[1540]: time="2025-09-09T00:47:07.031496376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 9 00:47:07.031529 containerd[1540]: time="2025-09-09T00:47:07.031504973Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 9 00:47:07.031529 containerd[1540]: time="2025-09-09T00:47:07.031512341Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 9 00:47:07.031529 containerd[1540]: time="2025-09-09T00:47:07.031520098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 9 00:47:07.031529 containerd[1540]: time="2025-09-09T00:47:07.031526730Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 9 00:47:07.031592 containerd[1540]: time="2025-09-09T00:47:07.031538957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031592 containerd[1540]: time="2025-09-09T00:47:07.031546528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031592 containerd[1540]: time="2025-09-09T00:47:07.031553429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031592 containerd[1540]: time="2025-09-09T00:47:07.031560731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031592 containerd[1540]: time="2025-09-09T00:47:07.031567709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031592 containerd[1540]: time="2025-09-09T00:47:07.031578624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031592 containerd[1540]: time="2025-09-09T00:47:07.031587318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031681 containerd[1540]: time="2025-09-09T00:47:07.031594913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031681 containerd[1540]: time="2025-09-09T00:47:07.031602309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031681 containerd[1540]: time="2025-09-09T00:47:07.031610176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031681 containerd[1540]: time="2025-09-09T00:47:07.031618358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031681 containerd[1540]: time="2025-09-09T00:47:07.031626721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031681 containerd[1540]: time="2025-09-09T00:47:07.031633822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031681 containerd[1540]: time="2025-09-09T00:47:07.031642227Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 9 00:47:07.031681 containerd[1540]: time="2025-09-09T00:47:07.031654295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031681 containerd[1540]: time="2025-09-09T00:47:07.031661108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031681 containerd[1540]: time="2025-09-09T00:47:07.031667284Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 9 00:47:07.031805 containerd[1540]: time="2025-09-09T00:47:07.031698794Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 9 00:47:07.031805 containerd[1540]: time="2025-09-09T00:47:07.031711342Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 9 00:47:07.031805 containerd[1540]: time="2025-09-09T00:47:07.031718055Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 9 00:47:07.031805 containerd[1540]: time="2025-09-09T00:47:07.031724499Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 9 00:47:07.031805 containerd[1540]: time="2025-09-09T00:47:07.031729823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.031805 containerd[1540]: time="2025-09-09T00:47:07.031736603Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 9 00:47:07.031805 containerd[1540]: time="2025-09-09T00:47:07.031745066Z" level=info msg="NRI interface is disabled by configuration."
Sep 9 00:47:07.031805 containerd[1540]: time="2025-09-09T00:47:07.031791258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 9 00:47:07.033651 containerd[1540]: time="2025-09-09T00:47:07.033069386Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 9 00:47:07.033651 containerd[1540]: time="2025-09-09T00:47:07.033237014Z" level=info msg="Connect containerd service"
Sep 9 00:47:07.033651 containerd[1540]: time="2025-09-09T00:47:07.033280058Z" level=info msg="using legacy CRI server"
Sep 9 00:47:07.033651 containerd[1540]: time="2025-09-09T00:47:07.033288671Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 00:47:07.033651 containerd[1540]: time="2025-09-09T00:47:07.033373561Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 9 00:47:07.033918 containerd[1540]: time="2025-09-09T00:47:07.033906376Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:47:07.034056 containerd[1540]: time="2025-09-09T00:47:07.034013761Z" level=info msg="Start subscribing containerd event"
Sep 9 00:47:07.034056 containerd[1540]: time="2025-09-09T00:47:07.034044442Z" level=info msg="Start recovering state"
Sep 9 00:47:07.034089 containerd[1540]: time="2025-09-09T00:47:07.034080574Z" level=info msg="Start event monitor"
Sep 9 00:47:07.034089 containerd[1540]: time="2025-09-09T00:47:07.034087604Z" level=info msg="Start snapshots syncer"
Sep 9 00:47:07.034122 containerd[1540]: time="2025-09-09T00:47:07.034092551Z" level=info msg="Start cni network conf syncer for default"
Sep 9 00:47:07.034122 containerd[1540]: time="2025-09-09T00:47:07.034097072Z" level=info msg="Start streaming server"
Sep 9 00:47:07.034316 containerd[1540]: time="2025-09-09T00:47:07.034306776Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 00:47:07.034378 containerd[1540]: time="2025-09-09T00:47:07.034369018Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 00:47:07.034486 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 00:47:07.034729 containerd[1540]: time="2025-09-09T00:47:07.034660199Z" level=info msg="containerd successfully booted in 0.030752s"
Sep 9 00:47:08.120145 systemd-networkd[1441]: ens192: Gained IPv6LL
Sep 9 00:47:08.122193 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 00:47:08.123334 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 00:47:08.142255 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
Sep 9 00:47:08.163352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:47:08.166202 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 00:47:08.211646 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 00:47:08.225302 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:47:08.225512 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Sep 9 00:47:08.226238 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:47:09.509262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:47:09.510061 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 00:47:09.512020 systemd[1]: Startup finished in 1.049s (kernel) + 5.618s (initrd) + 5.344s (userspace) = 12.012s. Sep 9 00:47:09.519318 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:47:09.577437 login[1598]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 9 00:47:09.579351 login[1599]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 9 00:47:09.586211 systemd-logind[1516]: New session 2 of user core. Sep 9 00:47:09.586976 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:47:09.593881 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:47:09.597645 systemd-logind[1516]: New session 1 of user core. Sep 9 00:47:09.602290 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:47:09.609240 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:47:09.613090 (systemd)[1697]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:09.682766 systemd[1697]: Queued start job for default target default.target. Sep 9 00:47:09.690045 systemd[1697]: Created slice app.slice - User Application Slice. Sep 9 00:47:09.690147 systemd[1697]: Reached target paths.target - Paths. 
Sep 9 00:47:09.690207 systemd[1697]: Reached target timers.target - Timers.
Sep 9 00:47:09.693099 systemd[1697]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 00:47:09.699218 systemd[1697]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 00:47:09.699696 systemd[1697]: Reached target sockets.target - Sockets.
Sep 9 00:47:09.699781 systemd[1697]: Reached target basic.target - Basic System.
Sep 9 00:47:09.699869 systemd[1697]: Reached target default.target - Main User Target.
Sep 9 00:47:09.699927 systemd[1697]: Startup finished in 82ms.
Sep 9 00:47:09.700229 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 00:47:09.708150 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 00:47:09.709622 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 00:47:10.558529 kubelet[1689]: E0909 00:47:10.558455 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:47:10.560203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:47:10.560309 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:47:20.810533 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:47:20.821164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:47:20.954453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:47:20.957819 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:47:20.997389 kubelet[1741]: E0909 00:47:20.997346 1741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:47:21.000096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:47:21.000197 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:47:31.250490 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 00:47:31.259115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:47:31.597225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:47:31.599739 (kubelet)[1756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:47:31.667036 kubelet[1756]: E0909 00:47:31.666980 1756 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:47:31.668538 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:47:31.668643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:47:36.874985 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
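The kubelet entries above show a crash loop: each exit (missing /var/lib/kubelet/config.yaml) is followed by a "Scheduled restart job" entry roughly ten seconds later, which looks like a fixed systemd restart delay (RestartSec) of about 10s on kubelet.service — an inference from the timestamps, not something the log states directly. A small sketch measuring the failure-to-restart gaps (timestamps copied from the entries above):

```python
from datetime import datetime

FMT = "%H:%M:%S.%f"
# (failure timestamp, following "Scheduled restart job" timestamp),
# copied from the journal entries above.
pairs = [
    ("00:47:10.560309", "00:47:20.810533"),  # -> restart counter 1
    ("00:47:21.000197", "00:47:31.250490"),  # -> restart counter 2
]
gaps = [
    (datetime.strptime(restart, FMT) - datetime.strptime(fail, FMT)).total_seconds()
    for fail, restart in pairs
]
# Both gaps come out at ~10.25s, consistent with a fixed restart delay of
# roughly 10 seconds plus a little scheduling latency.
```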
Sep 9 00:47:36.875924 systemd[1]: Started sshd@0-139.178.70.101:22-139.178.89.65:35102.service - OpenSSH per-connection server daemon (139.178.89.65:35102).
Sep 9 00:47:36.906497 sshd[1764]: Accepted publickey for core from 139.178.89.65 port 35102 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg
Sep 9 00:47:36.907340 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:47:36.909693 systemd-logind[1516]: New session 3 of user core.
Sep 9 00:47:36.919400 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 00:47:36.974045 systemd[1]: Started sshd@1-139.178.70.101:22-139.178.89.65:35110.service - OpenSSH per-connection server daemon (139.178.89.65:35110).
Sep 9 00:47:36.998588 sshd[1769]: Accepted publickey for core from 139.178.89.65 port 35110 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg
Sep 9 00:47:36.998542 sshd[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:47:37.003149 systemd-logind[1516]: New session 4 of user core.
Sep 9 00:47:37.006158 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 00:47:37.059051 sshd[1769]: pam_unix(sshd:session): session closed for user core
Sep 9 00:47:37.068513 systemd[1]: sshd@1-139.178.70.101:22-139.178.89.65:35110.service: Deactivated successfully.
Sep 9 00:47:37.069448 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 00:47:37.069837 systemd-logind[1516]: Session 4 logged out. Waiting for processes to exit.
Sep 9 00:47:37.071351 systemd[1]: Started sshd@2-139.178.70.101:22-139.178.89.65:35114.service - OpenSSH per-connection server daemon (139.178.89.65:35114).
Sep 9 00:47:37.071863 systemd-logind[1516]: Removed session 4.
Sep 9 00:47:37.095647 sshd[1776]: Accepted publickey for core from 139.178.89.65 port 35114 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg
Sep 9 00:47:37.096399 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:47:37.098742 systemd-logind[1516]: New session 5 of user core.
Sep 9 00:47:37.107096 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 00:47:37.153548 sshd[1776]: pam_unix(sshd:session): session closed for user core
Sep 9 00:47:37.162488 systemd[1]: sshd@2-139.178.70.101:22-139.178.89.65:35114.service: Deactivated successfully.
Sep 9 00:47:37.163364 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 00:47:37.164147 systemd-logind[1516]: Session 5 logged out. Waiting for processes to exit.
Sep 9 00:47:37.165178 systemd[1]: Started sshd@3-139.178.70.101:22-139.178.89.65:35120.service - OpenSSH per-connection server daemon (139.178.89.65:35120).
Sep 9 00:47:37.166517 systemd-logind[1516]: Removed session 5.
Sep 9 00:47:37.188964 sshd[1783]: Accepted publickey for core from 139.178.89.65 port 35120 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg
Sep 9 00:47:37.189748 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:47:37.192082 systemd-logind[1516]: New session 6 of user core.
Sep 9 00:47:37.200071 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 00:47:37.248523 sshd[1783]: pam_unix(sshd:session): session closed for user core
Sep 9 00:47:37.253316 systemd[1]: sshd@3-139.178.70.101:22-139.178.89.65:35120.service: Deactivated successfully.
Sep 9 00:47:37.254080 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 00:47:37.254774 systemd-logind[1516]: Session 6 logged out. Waiting for processes to exit.
Sep 9 00:47:37.255445 systemd[1]: Started sshd@4-139.178.70.101:22-139.178.89.65:35124.service - OpenSSH per-connection server daemon (139.178.89.65:35124).
Sep 9 00:47:37.257150 systemd-logind[1516]: Removed session 6.
Sep 9 00:47:37.282900 sshd[1790]: Accepted publickey for core from 139.178.89.65 port 35124 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg
Sep 9 00:47:37.283638 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:47:37.286779 systemd-logind[1516]: New session 7 of user core.
Sep 9 00:47:37.292074 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 00:47:37.346390 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 00:47:37.346552 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:47:37.356400 sudo[1793]: pam_unix(sudo:session): session closed for user root
Sep 9 00:47:37.357302 sshd[1790]: pam_unix(sshd:session): session closed for user core
Sep 9 00:47:37.370610 systemd[1]: sshd@4-139.178.70.101:22-139.178.89.65:35124.service: Deactivated successfully.
Sep 9 00:47:37.371452 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 00:47:37.372209 systemd-logind[1516]: Session 7 logged out. Waiting for processes to exit.
Sep 9 00:47:37.376149 systemd[1]: Started sshd@5-139.178.70.101:22-139.178.89.65:35126.service - OpenSSH per-connection server daemon (139.178.89.65:35126).
Sep 9 00:47:37.377031 systemd-logind[1516]: Removed session 7.
Sep 9 00:47:37.397740 sshd[1798]: Accepted publickey for core from 139.178.89.65 port 35126 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg
Sep 9 00:47:37.398528 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:47:37.402007 systemd-logind[1516]: New session 8 of user core.
Sep 9 00:47:37.405075 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 00:47:37.453514 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 00:47:37.453891 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:47:37.455816 sudo[1802]: pam_unix(sudo:session): session closed for user root
Sep 9 00:47:37.458747 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 9 00:47:37.458901 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:47:37.468148 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 9 00:47:37.469475 auditctl[1805]: No rules
Sep 9 00:47:37.469929 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 00:47:37.470094 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 9 00:47:37.471458 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 9 00:47:37.487492 augenrules[1823]: No rules
Sep 9 00:47:37.487979 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 9 00:47:37.489098 sudo[1801]: pam_unix(sudo:session): session closed for user root
Sep 9 00:47:37.489975 sshd[1798]: pam_unix(sshd:session): session closed for user core
Sep 9 00:47:37.494361 systemd[1]: sshd@5-139.178.70.101:22-139.178.89.65:35126.service: Deactivated successfully.
Sep 9 00:47:37.495180 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 00:47:37.495915 systemd-logind[1516]: Session 8 logged out. Waiting for processes to exit.
Sep 9 00:47:37.497046 systemd[1]: Started sshd@6-139.178.70.101:22-139.178.89.65:35128.service - OpenSSH per-connection server daemon (139.178.89.65:35128).
Sep 9 00:47:37.498278 systemd-logind[1516]: Removed session 8.
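When triaging a burst of SSH sessions like the ones above, it can help to extract the user, source address/port, and key fingerprint from each "Accepted publickey" entry. A hypothetical helper (the sample line is copied from the journal; the regex assumes the stock OpenSSH message format):

```python
import re

# Sample line copied from the sshd journal entries above.
SAMPLE = ("Accepted publickey for core from 139.178.89.65 port 35102 ssh2: "
          "RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg")

ACCEPT_RE = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<addr>\S+) "
    r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fp>\S+)"
)

info = ACCEPT_RE.search(SAMPLE).groupdict()
# e.g. info["user"] is "core" and info["fp"] is the SHA256 key fingerprint;
# grouping by info["fp"] shows all six logins here used the same key.
```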
Sep 9 00:47:37.521301 sshd[1831]: Accepted publickey for core from 139.178.89.65 port 35128 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg
Sep 9 00:47:37.522022 sshd[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:47:37.524324 systemd-logind[1516]: New session 9 of user core.
Sep 9 00:47:37.538084 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 00:47:37.585193 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 00:47:37.585355 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:47:37.968146 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 00:47:37.968241 (dockerd)[1851]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 00:47:38.279769 dockerd[1851]: time="2025-09-09T00:47:38.279729243Z" level=info msg="Starting up"
Sep 9 00:47:38.348070 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1535953277-merged.mount: Deactivated successfully.
Sep 9 00:47:38.374540 systemd[1]: var-lib-docker-metacopy\x2dcheck2618110106-merged.mount: Deactivated successfully.
Sep 9 00:47:38.390459 dockerd[1851]: time="2025-09-09T00:47:38.390427676Z" level=info msg="Loading containers: start."
Sep 9 00:47:38.471003 kernel: Initializing XFRM netlink socket
Sep 9 00:47:38.547411 systemd-networkd[1441]: docker0: Link UP
Sep 9 00:47:38.558976 dockerd[1851]: time="2025-09-09T00:47:38.558687840Z" level=info msg="Loading containers: done."
Sep 9 00:47:38.570356 dockerd[1851]: time="2025-09-09T00:47:38.570134716Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 00:47:38.570356 dockerd[1851]: time="2025-09-09T00:47:38.570191841Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 9 00:47:38.570356 dockerd[1851]: time="2025-09-09T00:47:38.570243696Z" level=info msg="Daemon has completed initialization"
Sep 9 00:47:38.583213 dockerd[1851]: time="2025-09-09T00:47:38.583183994Z" level=info msg="API listen on /run/docker.sock"
Sep 9 00:47:38.583408 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 00:47:39.593983 containerd[1540]: time="2025-09-09T00:47:39.593905170Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 9 00:47:40.297681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3203689390.mount: Deactivated successfully.
Sep 9 00:47:41.202653 containerd[1540]: time="2025-09-09T00:47:41.202609951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:41.203249 containerd[1540]: time="2025-09-09T00:47:41.203224304Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664"
Sep 9 00:47:41.204002 containerd[1540]: time="2025-09-09T00:47:41.203459502Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:41.205382 containerd[1540]: time="2025-09-09T00:47:41.205278751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:41.206683 containerd[1540]: time="2025-09-09T00:47:41.206362547Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 1.612434075s"
Sep 9 00:47:41.206683 containerd[1540]: time="2025-09-09T00:47:41.206388886Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\""
Sep 9 00:47:41.206973 containerd[1540]: time="2025-09-09T00:47:41.206957314Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 9 00:47:41.745589 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 9 00:47:41.752182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:47:41.841341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:47:41.843334 (kubelet)[2052]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:47:41.909461 kubelet[2052]: E0909 00:47:41.909411 2052 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:47:41.910902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:47:41.911058 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:47:43.568846 containerd[1540]: time="2025-09-09T00:47:43.568788087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:43.569621 containerd[1540]: time="2025-09-09T00:47:43.569463752Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066"
Sep 9 00:47:43.569846 containerd[1540]: time="2025-09-09T00:47:43.569778318Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:43.572651 containerd[1540]: time="2025-09-09T00:47:43.572062861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:43.573146 containerd[1540]: time="2025-09-09T00:47:43.573122966Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 2.366048832s"
Sep 9 00:47:43.573211 containerd[1540]: time="2025-09-09T00:47:43.573146590Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\""
Sep 9 00:47:43.573509 containerd[1540]: time="2025-09-09T00:47:43.573483009Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 9 00:47:44.739613 containerd[1540]: time="2025-09-09T00:47:44.739040911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:44.739613 containerd[1540]: time="2025-09-09T00:47:44.739395186Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911"
Sep 9 00:47:44.741419 containerd[1540]: time="2025-09-09T00:47:44.741396338Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:44.744947 containerd[1540]: time="2025-09-09T00:47:44.744929836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:44.745600 containerd[1540]: time="2025-09-09T00:47:44.745522477Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 1.172019839s"
Sep 9 00:47:44.745600 containerd[1540]: time="2025-09-09T00:47:44.745540250Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\""
Sep 9 00:47:44.745857 containerd[1540]: time="2025-09-09T00:47:44.745842421Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 9 00:47:45.946561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1844591843.mount: Deactivated successfully.
Sep 9 00:47:46.561103 containerd[1540]: time="2025-09-09T00:47:46.561055689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:46.565640 containerd[1540]: time="2025-09-09T00:47:46.565585708Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626"
Sep 9 00:47:46.570255 containerd[1540]: time="2025-09-09T00:47:46.570218188Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:46.576934 containerd[1540]: time="2025-09-09T00:47:46.576903980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:46.577798 containerd[1540]: time="2025-09-09T00:47:46.577660579Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 1.831792689s"
Sep 9 00:47:46.577798 containerd[1540]: time="2025-09-09T00:47:46.577697030Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\""
Sep 9 00:47:46.578295 containerd[1540]: time="2025-09-09T00:47:46.578211765Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 9 00:47:47.372102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985974073.mount: Deactivated successfully.
Sep 9 00:47:48.672015 containerd[1540]: time="2025-09-09T00:47:48.671665012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:48.676751 containerd[1540]: time="2025-09-09T00:47:48.676723633Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Sep 9 00:47:48.687151 containerd[1540]: time="2025-09-09T00:47:48.687108112Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:48.689689 containerd[1540]: time="2025-09-09T00:47:48.689293317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:48.690584 containerd[1540]: time="2025-09-09T00:47:48.690560127Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.112326677s"
Sep 9 00:47:48.690627 containerd[1540]: time="2025-09-09T00:47:48.690587162Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 9 00:47:48.691480 containerd[1540]: time="2025-09-09T00:47:48.691272659Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 00:47:49.808318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073070298.mount: Deactivated successfully.
Sep 9 00:47:49.829519 containerd[1540]: time="2025-09-09T00:47:49.829470411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:49.830170 containerd[1540]: time="2025-09-09T00:47:49.830003796Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 9 00:47:49.831567 containerd[1540]: time="2025-09-09T00:47:49.830353047Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:49.833805 containerd[1540]: time="2025-09-09T00:47:49.833782192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:49.834291 containerd[1540]: time="2025-09-09T00:47:49.834272213Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.142979564s"
Sep 9 00:47:49.834354 containerd[1540]: time="2025-09-09T00:47:49.834344435Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 9 00:47:49.834680 containerd[1540]: time="2025-09-09T00:47:49.834665686Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 9 00:47:50.484504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount262386820.mount: Deactivated successfully.
Sep 9 00:47:51.504952 update_engine[1517]: I20250909 00:47:51.504475 1517 update_attempter.cc:509] Updating boot flags...
Sep 9 00:47:51.538477 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2189)
Sep 9 00:47:51.569170 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2188)
Sep 9 00:47:51.995794 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 9 00:47:52.003113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:47:52.717444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:47:52.722275 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:47:52.930447 kubelet[2210]: E0909 00:47:52.930380 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:47:52.931932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:47:52.932029 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:47:53.247013 containerd[1540]: time="2025-09-09T00:47:53.246532083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:53.269857 containerd[1540]: time="2025-09-09T00:47:53.269804801Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871"
Sep 9 00:47:53.285354 containerd[1540]: time="2025-09-09T00:47:53.285298120Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:53.344970 containerd[1540]: time="2025-09-09T00:47:53.344924926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:47:53.345978 containerd[1540]: time="2025-09-09T00:47:53.345795487Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.511112723s"
Sep 9 00:47:53.345978 containerd[1540]: time="2025-09-09T00:47:53.345819280Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 9 00:47:55.361263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:47:55.366198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:47:55.386507 systemd[1]: Reloading requested from client PID 2246 ('systemctl') (unit session-9.scope)...
Sep 9 00:47:55.386606 systemd[1]: Reloading...
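The "Pulled image ... size ... in ...s" entries above carry enough information for a back-of-the-envelope pull-throughput estimate. A sketch using the etcd values copied from the journal (size "58938593" in 3.511112723s, about 16.8 MB/s; the parsing is illustrative and assumes the escaped-quote form in which the fields appear in this log):

```python
import re

# Fragment copied from the containerd "Pulled image" entry above, including
# the backslash-escaped quotes as they appear in the journal text.
ENTRY = r'size \"58938593\" in 3.511112723s'

m = re.search(r'size \\"(\d+)\\" in ([\d.]+)s', ENTRY)
size_bytes, seconds = int(m.group(1)), float(m.group(2))
rate_mb_s = size_bytes / seconds / 1e6  # decimal megabytes per second
# For etcd this works out to roughly 16.8 MB/s; applying the same arithmetic
# to the other images gives a rough per-pull comparison.
```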
Sep 9 00:47:55.470012 zram_generator::config[2293]: No configuration found.
Sep 9 00:47:55.522325 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 9 00:47:55.537720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:47:55.582609 systemd[1]: Reloading finished in 195 ms.
Sep 9 00:47:55.736909 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 00:47:55.737023 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 00:47:55.737247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:47:55.741245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:47:55.970231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:47:55.977288 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 00:47:56.040010 kubelet[2353]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:47:56.040010 kubelet[2353]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 00:47:56.040010 kubelet[2353]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:47:56.062213 kubelet[2353]: I0909 00:47:56.061889 2353 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 00:47:56.371546 kubelet[2353]: I0909 00:47:56.371203 2353 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 9 00:47:56.371546 kubelet[2353]: I0909 00:47:56.371222 2353 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 00:47:56.371546 kubelet[2353]: I0909 00:47:56.371396 2353 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 9 00:47:56.416038 kubelet[2353]: I0909 00:47:56.415393 2353 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:47:56.416038 kubelet[2353]: E0909 00:47:56.415961 2353 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 9 00:47:56.430649 kubelet[2353]: E0909 00:47:56.430624 2353 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 9 00:47:56.430870 kubelet[2353]: I0909 00:47:56.430689 2353 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 9 00:47:56.435915 kubelet[2353]: I0909 00:47:56.435878 2353 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 00:47:56.439429 kubelet[2353]: I0909 00:47:56.439298 2353 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 00:47:56.442006 kubelet[2353]: I0909 00:47:56.439320 2353 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 00:47:56.442767 kubelet[2353]: I0909 00:47:56.442751 2353 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 00:47:56.442767 kubelet[2353]: I0909 00:47:56.442766 2353 container_manager_linux.go:303] "Creating device plugin manager"
Sep 9 00:47:56.443870 kubelet[2353]: I0909 00:47:56.443856 2353 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:47:56.446504 kubelet[2353]: I0909 00:47:56.446227 2353 kubelet.go:480] "Attempting to sync node with API server"
Sep 9 00:47:56.446504 kubelet[2353]: I0909 00:47:56.446248 2353 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 00:47:56.447099 kubelet[2353]: I0909 00:47:56.446733 2353 kubelet.go:386] "Adding apiserver pod source"
Sep 9 00:47:56.447099 kubelet[2353]: I0909 00:47:56.446748 2353 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 00:47:56.454245 kubelet[2353]: E0909 00:47:56.453981 2353 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 9 00:47:56.460399 kubelet[2353]: I0909 00:47:56.460383 2353 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 9 00:47:56.461047 kubelet[2353]: E0909 00:47:56.460926 2353 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 9 00:47:56.461113 kubelet[2353]: I0909 00:47:56.461101 2353 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 9 00:47:56.462416 kubelet[2353]: W0909
00:47:56.461704 2353 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:47:56.465432 kubelet[2353]: I0909 00:47:56.465416 2353 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:47:56.465551 kubelet[2353]: I0909 00:47:56.465545 2353 server.go:1289] "Started kubelet" Sep 9 00:47:56.468082 kubelet[2353]: I0909 00:47:56.468028 2353 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:47:56.469526 kubelet[2353]: I0909 00:47:56.468841 2353 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:47:56.469526 kubelet[2353]: I0909 00:47:56.469126 2353 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:47:56.469996 kubelet[2353]: I0909 00:47:56.469976 2353 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:47:56.472133 kubelet[2353]: I0909 00:47:56.472118 2353 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:47:56.476307 kubelet[2353]: E0909 00:47:56.473099 2353 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.101:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186376c6f380afc3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:47:56.465524675 +0000 UTC m=+0.485038422,LastTimestamp:2025-09-09 00:47:56.465524675 +0000 UTC m=+0.485038422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:47:56.476307 
kubelet[2353]: I0909 00:47:56.475629 2353 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:47:56.482058 kubelet[2353]: I0909 00:47:56.482040 2353 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:47:56.482136 kubelet[2353]: E0909 00:47:56.482109 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:47:56.482157 kubelet[2353]: I0909 00:47:56.482140 2353 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:47:56.482174 kubelet[2353]: I0909 00:47:56.482169 2353 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:47:56.482553 kubelet[2353]: E0909 00:47:56.482536 2353 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:47:56.482594 kubelet[2353]: E0909 00:47:56.482578 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="200ms" Sep 9 00:47:56.482884 kubelet[2353]: I0909 00:47:56.482871 2353 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:47:56.482928 kubelet[2353]: I0909 00:47:56.482917 2353 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:47:56.485195 kubelet[2353]: I0909 00:47:56.485177 2353 factory.go:223] Registration of the containerd container factory 
successfully Sep 9 00:47:56.488941 kubelet[2353]: I0909 00:47:56.488867 2353 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:47:56.490574 kubelet[2353]: I0909 00:47:56.490400 2353 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 00:47:56.490574 kubelet[2353]: I0909 00:47:56.490411 2353 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:47:56.490574 kubelet[2353]: I0909 00:47:56.490424 2353 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:47:56.490574 kubelet[2353]: I0909 00:47:56.490429 2353 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:47:56.490574 kubelet[2353]: E0909 00:47:56.490460 2353 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:47:56.494331 kubelet[2353]: E0909 00:47:56.494316 2353 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:47:56.494624 kubelet[2353]: E0909 00:47:56.494614 2353 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:47:56.504369 kubelet[2353]: I0909 00:47:56.504353 2353 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:47:56.504369 kubelet[2353]: I0909 00:47:56.504365 2353 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:47:56.504470 kubelet[2353]: I0909 00:47:56.504379 2353 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:47:56.505525 kubelet[2353]: I0909 00:47:56.505513 2353 policy_none.go:49] "None policy: Start" Sep 9 00:47:56.505567 kubelet[2353]: I0909 00:47:56.505531 2353 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:47:56.505567 kubelet[2353]: I0909 00:47:56.505541 2353 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:47:56.510656 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 00:47:56.519140 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 00:47:56.521288 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 00:47:56.532738 kubelet[2353]: E0909 00:47:56.532717 2353 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:47:56.532852 kubelet[2353]: I0909 00:47:56.532841 2353 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:47:56.532875 kubelet[2353]: I0909 00:47:56.532852 2353 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:47:56.533286 kubelet[2353]: I0909 00:47:56.533221 2353 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:47:56.533931 kubelet[2353]: E0909 00:47:56.533916 2353 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 00:47:56.533973 kubelet[2353]: E0909 00:47:56.533939 2353 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:47:56.598260 systemd[1]: Created slice kubepods-burstable-podd134483e73b0ed2684fee962c9e47783.slice - libcontainer container kubepods-burstable-podd134483e73b0ed2684fee962c9e47783.slice. Sep 9 00:47:56.613301 kubelet[2353]: E0909 00:47:56.613279 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:47:56.615067 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 9 00:47:56.616405 kubelet[2353]: E0909 00:47:56.616314 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:47:56.617770 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
Sep 9 00:47:56.618841 kubelet[2353]: E0909 00:47:56.618829 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:47:56.635662 kubelet[2353]: I0909 00:47:56.634794 2353 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:47:56.635662 kubelet[2353]: E0909 00:47:56.635025 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Sep 9 00:47:56.684084 kubelet[2353]: E0909 00:47:56.684059 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="400ms" Sep 9 00:47:56.783543 kubelet[2353]: I0909 00:47:56.783463 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d134483e73b0ed2684fee962c9e47783-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d134483e73b0ed2684fee962c9e47783\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:47:56.783543 kubelet[2353]: I0909 00:47:56.783489 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d134483e73b0ed2684fee962c9e47783-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d134483e73b0ed2684fee962c9e47783\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:47:56.783543 kubelet[2353]: I0909 00:47:56.783503 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:47:56.783543 kubelet[2353]: I0909 00:47:56.783519 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:47:56.783543 kubelet[2353]: I0909 00:47:56.783543 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:47:56.783702 kubelet[2353]: I0909 00:47:56.783561 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:47:56.783702 kubelet[2353]: I0909 00:47:56.783579 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d134483e73b0ed2684fee962c9e47783-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d134483e73b0ed2684fee962c9e47783\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:47:56.783702 kubelet[2353]: I0909 00:47:56.783591 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:47:56.783702 kubelet[2353]: I0909 00:47:56.783601 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:47:56.836914 kubelet[2353]: I0909 00:47:56.836695 2353 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:47:56.836914 kubelet[2353]: E0909 00:47:56.836867 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Sep 9 00:47:56.914038 containerd[1540]: time="2025-09-09T00:47:56.913957397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d134483e73b0ed2684fee962c9e47783,Namespace:kube-system,Attempt:0,}" Sep 9 00:47:56.923043 containerd[1540]: time="2025-09-09T00:47:56.922795001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 9 00:47:56.923238 containerd[1540]: time="2025-09-09T00:47:56.923097396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 9 00:47:57.084691 kubelet[2353]: E0909 00:47:57.084654 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" 
interval="800ms" Sep 9 00:47:57.238301 kubelet[2353]: I0909 00:47:57.238230 2353 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:47:57.238758 kubelet[2353]: E0909 00:47:57.238729 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Sep 9 00:47:57.381687 kubelet[2353]: E0909 00:47:57.381651 2353 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:47:57.393165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1204133894.mount: Deactivated successfully. Sep 9 00:47:57.394224 containerd[1540]: time="2025-09-09T00:47:57.394205637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:47:57.395425 containerd[1540]: time="2025-09-09T00:47:57.395358215Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 9 00:47:57.395765 containerd[1540]: time="2025-09-09T00:47:57.395752342Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:47:57.397194 containerd[1540]: time="2025-09-09T00:47:57.397173786Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:47:57.398136 
containerd[1540]: time="2025-09-09T00:47:57.397981270Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 00:47:57.398478 containerd[1540]: time="2025-09-09T00:47:57.398466765Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:47:57.398936 containerd[1540]: time="2025-09-09T00:47:57.398920789Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 00:47:57.400079 containerd[1540]: time="2025-09-09T00:47:57.400028521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:47:57.400520 containerd[1540]: time="2025-09-09T00:47:57.400502432Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 477.646376ms" Sep 9 00:47:57.402187 containerd[1540]: time="2025-09-09T00:47:57.402171835Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 488.130832ms" Sep 9 00:47:57.403794 containerd[1540]: time="2025-09-09T00:47:57.403600110Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.471497ms" Sep 9 00:47:57.519258 containerd[1540]: time="2025-09-09T00:47:57.518981878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:47:57.519830 containerd[1540]: time="2025-09-09T00:47:57.519543248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:47:57.519830 containerd[1540]: time="2025-09-09T00:47:57.519567695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:47:57.519830 containerd[1540]: time="2025-09-09T00:47:57.519583216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:47:57.519830 containerd[1540]: time="2025-09-09T00:47:57.519472472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:47:57.519830 containerd[1540]: time="2025-09-09T00:47:57.519484988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:47:57.519830 containerd[1540]: time="2025-09-09T00:47:57.519552824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:47:57.520194 containerd[1540]: time="2025-09-09T00:47:57.520118442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:47:57.525550 containerd[1540]: time="2025-09-09T00:47:57.525340642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:47:57.525550 containerd[1540]: time="2025-09-09T00:47:57.525382083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:47:57.525550 containerd[1540]: time="2025-09-09T00:47:57.525392180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:47:57.525550 containerd[1540]: time="2025-09-09T00:47:57.525441801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:47:57.535636 systemd[1]: Started cri-containerd-18b76f23940c4219ef73bc8f41c17e6704b54c1353a89c5b9508b84b31764b17.scope - libcontainer container 18b76f23940c4219ef73bc8f41c17e6704b54c1353a89c5b9508b84b31764b17. Sep 9 00:47:57.541362 systemd[1]: Started cri-containerd-661d692e95fb81beb8d98be29742477134d6198a1fa6b59dc8e56c0af9adf945.scope - libcontainer container 661d692e95fb81beb8d98be29742477134d6198a1fa6b59dc8e56c0af9adf945. Sep 9 00:47:57.548040 systemd[1]: Started cri-containerd-fe07227819ae1266af7657fe419ac443aa1a2fdb6059d45018f349ea18720193.scope - libcontainer container fe07227819ae1266af7657fe419ac443aa1a2fdb6059d45018f349ea18720193. 
Sep 9 00:47:57.595143 kubelet[2353]: E0909 00:47:57.595112 2353 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:47:57.607710 containerd[1540]: time="2025-09-09T00:47:57.607285712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe07227819ae1266af7657fe419ac443aa1a2fdb6059d45018f349ea18720193\"" Sep 9 00:47:57.611945 containerd[1540]: time="2025-09-09T00:47:57.611453479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d134483e73b0ed2684fee962c9e47783,Namespace:kube-system,Attempt:0,} returns sandbox id \"661d692e95fb81beb8d98be29742477134d6198a1fa6b59dc8e56c0af9adf945\"" Sep 9 00:47:57.614005 containerd[1540]: time="2025-09-09T00:47:57.613975524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"18b76f23940c4219ef73bc8f41c17e6704b54c1353a89c5b9508b84b31764b17\"" Sep 9 00:47:57.645407 kubelet[2353]: E0909 00:47:57.645379 2353 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:47:57.657557 containerd[1540]: time="2025-09-09T00:47:57.657518608Z" level=info msg="CreateContainer within sandbox \"fe07227819ae1266af7657fe419ac443aa1a2fdb6059d45018f349ea18720193\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:47:57.821197 containerd[1540]: time="2025-09-09T00:47:57.821109285Z" level=info msg="CreateContainer within sandbox \"661d692e95fb81beb8d98be29742477134d6198a1fa6b59dc8e56c0af9adf945\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:47:57.838446 containerd[1540]: time="2025-09-09T00:47:57.838403541Z" level=info msg="CreateContainer within sandbox \"18b76f23940c4219ef73bc8f41c17e6704b54c1353a89c5b9508b84b31764b17\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:47:57.883474 containerd[1540]: time="2025-09-09T00:47:57.883367599Z" level=info msg="CreateContainer within sandbox \"fe07227819ae1266af7657fe419ac443aa1a2fdb6059d45018f349ea18720193\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2293952ab29df5f20d4002ca58b5f4ac4d92807ae0fa5990167a83b28ff9e58b\"" Sep 9 00:47:57.883474 containerd[1540]: time="2025-09-09T00:47:57.883463578Z" level=info msg="CreateContainer within sandbox \"661d692e95fb81beb8d98be29742477134d6198a1fa6b59dc8e56c0af9adf945\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"def3ac3736a7eb2591ca136622780f8a12c08272651236d95c260ea24dc6ed24\"" Sep 9 00:47:57.884671 containerd[1540]: time="2025-09-09T00:47:57.883832588Z" level=info msg="CreateContainer within sandbox \"18b76f23940c4219ef73bc8f41c17e6704b54c1353a89c5b9508b84b31764b17\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c28483f5da6c377a8266fc7b347ba5f325ee8157203cf5d08b9815e2b1c016d9\"" Sep 9 00:47:57.884671 containerd[1540]: time="2025-09-09T00:47:57.884029212Z" level=info msg="StartContainer for \"2293952ab29df5f20d4002ca58b5f4ac4d92807ae0fa5990167a83b28ff9e58b\"" Sep 9 00:47:57.884671 containerd[1540]: time="2025-09-09T00:47:57.884038822Z" level=info msg="StartContainer for \"def3ac3736a7eb2591ca136622780f8a12c08272651236d95c260ea24dc6ed24\"" Sep 9 00:47:57.885007 
kubelet[2353]: E0909 00:47:57.884964 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="1.6s" Sep 9 00:47:57.885097 containerd[1540]: time="2025-09-09T00:47:57.885082378Z" level=info msg="StartContainer for \"c28483f5da6c377a8266fc7b347ba5f325ee8157203cf5d08b9815e2b1c016d9\"" Sep 9 00:47:57.907129 systemd[1]: Started cri-containerd-2293952ab29df5f20d4002ca58b5f4ac4d92807ae0fa5990167a83b28ff9e58b.scope - libcontainer container 2293952ab29df5f20d4002ca58b5f4ac4d92807ae0fa5990167a83b28ff9e58b. Sep 9 00:47:57.910072 systemd[1]: Started cri-containerd-def3ac3736a7eb2591ca136622780f8a12c08272651236d95c260ea24dc6ed24.scope - libcontainer container def3ac3736a7eb2591ca136622780f8a12c08272651236d95c260ea24dc6ed24. Sep 9 00:47:57.913200 systemd[1]: Started cri-containerd-c28483f5da6c377a8266fc7b347ba5f325ee8157203cf5d08b9815e2b1c016d9.scope - libcontainer container c28483f5da6c377a8266fc7b347ba5f325ee8157203cf5d08b9815e2b1c016d9. 
Sep 9 00:47:57.947002 containerd[1540]: time="2025-09-09T00:47:57.946896510Z" level=info msg="StartContainer for \"def3ac3736a7eb2591ca136622780f8a12c08272651236d95c260ea24dc6ed24\" returns successfully" Sep 9 00:47:57.958583 containerd[1540]: time="2025-09-09T00:47:57.958556962Z" level=info msg="StartContainer for \"2293952ab29df5f20d4002ca58b5f4ac4d92807ae0fa5990167a83b28ff9e58b\" returns successfully" Sep 9 00:47:57.976599 containerd[1540]: time="2025-09-09T00:47:57.976574594Z" level=info msg="StartContainer for \"c28483f5da6c377a8266fc7b347ba5f325ee8157203cf5d08b9815e2b1c016d9\" returns successfully" Sep 9 00:47:58.040644 kubelet[2353]: I0909 00:47:58.040429 2353 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:47:58.040644 kubelet[2353]: E0909 00:47:58.040627 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Sep 9 00:47:58.091480 kubelet[2353]: E0909 00:47:58.091408 2353 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:47:58.512004 kubelet[2353]: E0909 00:47:58.511412 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:47:58.517938 kubelet[2353]: E0909 00:47:58.517785 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:47:58.519126 kubelet[2353]: E0909 00:47:58.519013 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:47:58.578705 kubelet[2353]: E0909 00:47:58.578650 2353 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 00:47:59.521373 kubelet[2353]: E0909 00:47:59.521353 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:47:59.521843 kubelet[2353]: E0909 00:47:59.521769 2353 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:47:59.642427 kubelet[2353]: I0909 00:47:59.642265 2353 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:48:00.001076 kubelet[2353]: E0909 00:48:00.001049 2353 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 00:48:00.131124 kubelet[2353]: I0909 00:48:00.130969 2353 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:48:00.131124 kubelet[2353]: E0909 00:48:00.131017 2353 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 00:48:00.172143 kubelet[2353]: E0909 00:48:00.172121 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:48:00.272428 kubelet[2353]: E0909 00:48:00.272235 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:48:00.373832 kubelet[2353]: E0909 
00:48:00.373795 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:48:00.473924 kubelet[2353]: E0909 00:48:00.473857 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:48:00.574572 kubelet[2353]: E0909 00:48:00.574499 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:48:00.675137 kubelet[2353]: E0909 00:48:00.675102 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:48:00.781541 kubelet[2353]: I0909 00:48:00.781403 2353 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:48:00.785635 kubelet[2353]: E0909 00:48:00.785621 2353 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 00:48:00.785853 kubelet[2353]: I0909 00:48:00.785700 2353 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:48:00.786792 kubelet[2353]: E0909 00:48:00.786743 2353 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:48:00.786792 kubelet[2353]: I0909 00:48:00.786753 2353 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:48:00.787545 kubelet[2353]: E0909 00:48:00.787532 2353 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 00:48:01.462411 kubelet[2353]: 
I0909 00:48:01.462381 2353 apiserver.go:52] "Watching apiserver" Sep 9 00:48:01.482885 kubelet[2353]: I0909 00:48:01.482857 2353 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:48:02.360014 kubelet[2353]: I0909 00:48:02.359420 2353 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:48:02.759566 systemd[1]: Reloading requested from client PID 2633 ('systemctl') (unit session-9.scope)... Sep 9 00:48:02.759575 systemd[1]: Reloading... Sep 9 00:48:02.824003 zram_generator::config[2674]: No configuration found. Sep 9 00:48:02.888777 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 00:48:02.904030 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:48:02.958232 systemd[1]: Reloading finished in 198 ms. Sep 9 00:48:02.986969 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:48:03.000715 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:48:03.000855 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:48:03.006137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:48:03.377751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:48:03.381942 (kubelet)[2738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:48:03.467440 kubelet[2738]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:48:03.467440 kubelet[2738]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:48:03.467440 kubelet[2738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:48:03.468543 kubelet[2738]: I0909 00:48:03.468511 2738 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:48:03.479895 kubelet[2738]: I0909 00:48:03.479854 2738 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:48:03.479895 kubelet[2738]: I0909 00:48:03.479879 2738 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:48:03.480064 kubelet[2738]: I0909 00:48:03.480050 2738 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:48:03.480889 kubelet[2738]: I0909 00:48:03.480872 2738 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 00:48:03.490539 kubelet[2738]: I0909 00:48:03.490190 2738 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:48:03.498904 kubelet[2738]: E0909 00:48:03.498884 2738 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:48:03.499040 kubelet[2738]: I0909 00:48:03.499032 2738 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Sep 9 00:48:03.506151 kubelet[2738]: I0909 00:48:03.505829 2738 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 00:48:03.506536 kubelet[2738]: I0909 00:48:03.506434 2738 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:48:03.507726 kubelet[2738]: I0909 00:48:03.506585 2738 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManager
PolicyOptions":null,"CgroupVersion":2} Sep 9 00:48:03.507726 kubelet[2738]: I0909 00:48:03.506708 2738 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:48:03.507726 kubelet[2738]: I0909 00:48:03.506716 2738 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:48:03.507726 kubelet[2738]: I0909 00:48:03.506749 2738 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:48:03.507726 kubelet[2738]: I0909 00:48:03.506883 2738 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:48:03.508001 kubelet[2738]: I0909 00:48:03.506894 2738 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:48:03.508001 kubelet[2738]: I0909 00:48:03.506910 2738 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:48:03.508001 kubelet[2738]: I0909 00:48:03.506923 2738 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:48:03.512861 kubelet[2738]: I0909 00:48:03.512836 2738 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 9 00:48:03.513308 kubelet[2738]: I0909 00:48:03.513264 2738 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:48:03.523938 kubelet[2738]: I0909 00:48:03.523924 2738 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:48:03.525424 kubelet[2738]: I0909 00:48:03.525411 2738 server.go:1289] "Started kubelet" Sep 9 00:48:03.527732 kubelet[2738]: I0909 00:48:03.526771 2738 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:48:03.547329 kubelet[2738]: I0909 00:48:03.547309 2738 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:48:03.561909 kubelet[2738]: I0909 00:48:03.561878 2738 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:48:03.563614 kubelet[2738]: I0909 00:48:03.563603 2738 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:48:03.587945 kubelet[2738]: I0909 00:48:03.587915 2738 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:48:03.588066 kubelet[2738]: I0909 00:48:03.588031 2738 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:48:03.590708 kubelet[2738]: I0909 00:48:03.590481 2738 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:48:03.606711 kubelet[2738]: I0909 00:48:03.606686 2738 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:48:03.615621 kubelet[2738]: I0909 00:48:03.615601 2738 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:48:03.622400 kubelet[2738]: I0909 00:48:03.621654 2738 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:48:03.622400 kubelet[2738]: I0909 00:48:03.621669 2738 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:48:03.622400 kubelet[2738]: I0909 00:48:03.621722 2738 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:48:03.622804 kubelet[2738]: I0909 00:48:03.622789 2738 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:48:03.624487 kubelet[2738]: I0909 00:48:03.624475 2738 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 9 00:48:03.626031 kubelet[2738]: I0909 00:48:03.626020 2738 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:48:03.626096 kubelet[2738]: I0909 00:48:03.626090 2738 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:48:03.626141 kubelet[2738]: I0909 00:48:03.626136 2738 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:48:03.626208 kubelet[2738]: E0909 00:48:03.626197 2738 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:48:03.657226 kubelet[2738]: I0909 00:48:03.657163 2738 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:48:03.657318 kubelet[2738]: I0909 00:48:03.657311 2738 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:48:03.657355 kubelet[2738]: I0909 00:48:03.657351 2738 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:48:03.657475 kubelet[2738]: I0909 00:48:03.657468 2738 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:48:03.657520 kubelet[2738]: I0909 00:48:03.657507 2738 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:48:03.657562 kubelet[2738]: I0909 00:48:03.657558 2738 policy_none.go:49] "None policy: Start" Sep 9 00:48:03.657595 kubelet[2738]: I0909 00:48:03.657591 2738 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:48:03.657634 kubelet[2738]: I0909 00:48:03.657629 2738 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:48:03.657719 kubelet[2738]: I0909 00:48:03.657713 2738 state_mem.go:75] "Updated machine memory state" Sep 9 00:48:03.672836 kubelet[2738]: E0909 00:48:03.672820 2738 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:48:03.673152 kubelet[2738]: I0909 00:48:03.673143 
2738 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:48:03.673217 kubelet[2738]: I0909 00:48:03.673200 2738 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:48:03.673455 kubelet[2738]: I0909 00:48:03.673448 2738 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:48:03.674569 kubelet[2738]: E0909 00:48:03.674559 2738 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:48:03.726887 kubelet[2738]: I0909 00:48:03.726867 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:48:03.727649 kubelet[2738]: I0909 00:48:03.727628 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:48:03.727978 kubelet[2738]: I0909 00:48:03.727897 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:48:03.740002 kubelet[2738]: E0909 00:48:03.739966 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:48:03.777395 kubelet[2738]: I0909 00:48:03.777377 2738 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:48:03.799117 kubelet[2738]: I0909 00:48:03.798951 2738 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 00:48:03.799117 kubelet[2738]: I0909 00:48:03.799034 2738 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:48:03.888811 kubelet[2738]: I0909 00:48:03.888716 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d134483e73b0ed2684fee962c9e47783-ca-certs\") pod \"kube-apiserver-localhost\" 
(UID: \"d134483e73b0ed2684fee962c9e47783\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:48:03.888811 kubelet[2738]: I0909 00:48:03.888744 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:48:03.888811 kubelet[2738]: I0909 00:48:03.888758 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:48:03.888811 kubelet[2738]: I0909 00:48:03.888768 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:48:03.888811 kubelet[2738]: I0909 00:48:03.888778 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:48:03.889052 kubelet[2738]: I0909 00:48:03.888790 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:48:03.889052 kubelet[2738]: I0909 00:48:03.888800 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d134483e73b0ed2684fee962c9e47783-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d134483e73b0ed2684fee962c9e47783\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:48:03.889052 kubelet[2738]: I0909 00:48:03.888808 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d134483e73b0ed2684fee962c9e47783-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d134483e73b0ed2684fee962c9e47783\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:48:03.889052 kubelet[2738]: I0909 00:48:03.888818 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:48:04.519418 kubelet[2738]: I0909 00:48:04.519394 2738 apiserver.go:52] "Watching apiserver" Sep 9 00:48:04.588467 kubelet[2738]: I0909 00:48:04.588436 2738 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:48:04.651881 kubelet[2738]: I0909 00:48:04.651859 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:48:04.652491 kubelet[2738]: I0909 00:48:04.652126 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:48:04.654780 kubelet[2738]: E0909 00:48:04.654758 2738 kubelet.go:3311] "Failed 
creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:48:04.656033 kubelet[2738]: E0909 00:48:04.655877 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:48:04.668182 kubelet[2738]: I0909 00:48:04.668127 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.668104323 podStartE2EDuration="2.668104323s" podCreationTimestamp="2025-09-09 00:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:48:04.667658838 +0000 UTC m=+1.236192119" watchObservedRunningTime="2025-09-09 00:48:04.668104323 +0000 UTC m=+1.236637600" Sep 9 00:48:04.668330 kubelet[2738]: I0909 00:48:04.668238 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6682321409999998 podStartE2EDuration="1.668232141s" podCreationTimestamp="2025-09-09 00:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:48:04.664184565 +0000 UTC m=+1.232717854" watchObservedRunningTime="2025-09-09 00:48:04.668232141 +0000 UTC m=+1.236765418" Sep 9 00:48:04.675828 kubelet[2738]: I0909 00:48:04.675803 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.675795707 podStartE2EDuration="1.675795707s" podCreationTimestamp="2025-09-09 00:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:48:04.672015643 +0000 UTC m=+1.240548932" watchObservedRunningTime="2025-09-09 00:48:04.675795707 
+0000 UTC m=+1.244328996" Sep 9 00:48:08.599726 kubelet[2738]: I0909 00:48:08.599703 2738 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:48:08.600348 containerd[1540]: time="2025-09-09T00:48:08.600232702Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:48:08.600538 kubelet[2738]: I0909 00:48:08.600323 2738 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:48:09.427786 systemd[1]: Created slice kubepods-besteffort-poda325944c_2ad2_4e00_b7bf_f93c80ac75bb.slice - libcontainer container kubepods-besteffort-poda325944c_2ad2_4e00_b7bf_f93c80ac75bb.slice. Sep 9 00:48:09.504717 systemd[1]: Created slice kubepods-besteffort-pod3760697a_ee51_4477_afb3_2b682adbabfc.slice - libcontainer container kubepods-besteffort-pod3760697a_ee51_4477_afb3_2b682adbabfc.slice. Sep 9 00:48:09.527959 kubelet[2738]: I0909 00:48:09.527757 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a325944c-2ad2-4e00-b7bf-f93c80ac75bb-lib-modules\") pod \"kube-proxy-5l75z\" (UID: \"a325944c-2ad2-4e00-b7bf-f93c80ac75bb\") " pod="kube-system/kube-proxy-5l75z" Sep 9 00:48:09.527959 kubelet[2738]: I0909 00:48:09.527798 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a325944c-2ad2-4e00-b7bf-f93c80ac75bb-kube-proxy\") pod \"kube-proxy-5l75z\" (UID: \"a325944c-2ad2-4e00-b7bf-f93c80ac75bb\") " pod="kube-system/kube-proxy-5l75z" Sep 9 00:48:09.527959 kubelet[2738]: I0909 00:48:09.527811 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a325944c-2ad2-4e00-b7bf-f93c80ac75bb-xtables-lock\") pod \"kube-proxy-5l75z\" (UID: 
\"a325944c-2ad2-4e00-b7bf-f93c80ac75bb\") " pod="kube-system/kube-proxy-5l75z" Sep 9 00:48:09.527959 kubelet[2738]: I0909 00:48:09.527821 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3760697a-ee51-4477-afb3-2b682adbabfc-var-lib-calico\") pod \"tigera-operator-755d956888-c296g\" (UID: \"3760697a-ee51-4477-afb3-2b682adbabfc\") " pod="tigera-operator/tigera-operator-755d956888-c296g" Sep 9 00:48:09.527959 kubelet[2738]: I0909 00:48:09.527833 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqqtp\" (UniqueName: \"kubernetes.io/projected/3760697a-ee51-4477-afb3-2b682adbabfc-kube-api-access-jqqtp\") pod \"tigera-operator-755d956888-c296g\" (UID: \"3760697a-ee51-4477-afb3-2b682adbabfc\") " pod="tigera-operator/tigera-operator-755d956888-c296g" Sep 9 00:48:09.528145 kubelet[2738]: I0909 00:48:09.527843 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j4cx\" (UniqueName: \"kubernetes.io/projected/a325944c-2ad2-4e00-b7bf-f93c80ac75bb-kube-api-access-6j4cx\") pod \"kube-proxy-5l75z\" (UID: \"a325944c-2ad2-4e00-b7bf-f93c80ac75bb\") " pod="kube-system/kube-proxy-5l75z" Sep 9 00:48:09.736056 containerd[1540]: time="2025-09-09T00:48:09.735950431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5l75z,Uid:a325944c-2ad2-4e00-b7bf-f93c80ac75bb,Namespace:kube-system,Attempt:0,}" Sep 9 00:48:09.763101 containerd[1540]: time="2025-09-09T00:48:09.762777937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:48:09.763101 containerd[1540]: time="2025-09-09T00:48:09.762828281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:48:09.763101 containerd[1540]: time="2025-09-09T00:48:09.762839511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:09.763101 containerd[1540]: time="2025-09-09T00:48:09.762915184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:09.789205 systemd[1]: Started cri-containerd-ccb72573ddfe260dff28df177617ca3c3496b5448cd6eb02f74db95c3ab42404.scope - libcontainer container ccb72573ddfe260dff28df177617ca3c3496b5448cd6eb02f74db95c3ab42404. Sep 9 00:48:09.806859 containerd[1540]: time="2025-09-09T00:48:09.806828824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5l75z,Uid:a325944c-2ad2-4e00-b7bf-f93c80ac75bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccb72573ddfe260dff28df177617ca3c3496b5448cd6eb02f74db95c3ab42404\"" Sep 9 00:48:09.808189 containerd[1540]: time="2025-09-09T00:48:09.808133681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-c296g,Uid:3760697a-ee51-4477-afb3-2b682adbabfc,Namespace:tigera-operator,Attempt:0,}" Sep 9 00:48:09.821395 containerd[1540]: time="2025-09-09T00:48:09.821362569Z" level=info msg="CreateContainer within sandbox \"ccb72573ddfe260dff28df177617ca3c3496b5448cd6eb02f74db95c3ab42404\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:48:09.840181 containerd[1540]: time="2025-09-09T00:48:09.840073251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:48:09.840267 containerd[1540]: time="2025-09-09T00:48:09.840204850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:48:09.840267 containerd[1540]: time="2025-09-09T00:48:09.840237183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:09.840467 containerd[1540]: time="2025-09-09T00:48:09.840346393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:09.853120 systemd[1]: Started cri-containerd-299d65744e68f3811740ed465f8517a09178ceae77349b076d8ea92657be490a.scope - libcontainer container 299d65744e68f3811740ed465f8517a09178ceae77349b076d8ea92657be490a. Sep 9 00:48:09.864307 containerd[1540]: time="2025-09-09T00:48:09.864220204Z" level=info msg="CreateContainer within sandbox \"ccb72573ddfe260dff28df177617ca3c3496b5448cd6eb02f74db95c3ab42404\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aa98092c4eda789ef2bdab44624fcf35274b4e46fe81e9860d630f6daf72a019\"" Sep 9 00:48:09.865796 containerd[1540]: time="2025-09-09T00:48:09.865769356Z" level=info msg="StartContainer for \"aa98092c4eda789ef2bdab44624fcf35274b4e46fe81e9860d630f6daf72a019\"" Sep 9 00:48:09.889157 systemd[1]: Started cri-containerd-aa98092c4eda789ef2bdab44624fcf35274b4e46fe81e9860d630f6daf72a019.scope - libcontainer container aa98092c4eda789ef2bdab44624fcf35274b4e46fe81e9860d630f6daf72a019. 
Sep 9 00:48:09.899768 containerd[1540]: time="2025-09-09T00:48:09.899736486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-c296g,Uid:3760697a-ee51-4477-afb3-2b682adbabfc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"299d65744e68f3811740ed465f8517a09178ceae77349b076d8ea92657be490a\"" Sep 9 00:48:09.902196 containerd[1540]: time="2025-09-09T00:48:09.902158937Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 00:48:09.918615 containerd[1540]: time="2025-09-09T00:48:09.918583316Z" level=info msg="StartContainer for \"aa98092c4eda789ef2bdab44624fcf35274b4e46fe81e9860d630f6daf72a019\" returns successfully" Sep 9 00:48:10.639430 systemd[1]: run-containerd-runc-k8s.io-ccb72573ddfe260dff28df177617ca3c3496b5448cd6eb02f74db95c3ab42404-runc.1xZPGd.mount: Deactivated successfully. Sep 9 00:48:12.198869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount310739505.mount: Deactivated successfully. Sep 9 00:48:12.431511 kubelet[2738]: I0909 00:48:12.431454 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5l75z" podStartSLOduration=3.43111115 podStartE2EDuration="3.43111115s" podCreationTimestamp="2025-09-09 00:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:48:10.684825851 +0000 UTC m=+7.253359141" watchObservedRunningTime="2025-09-09 00:48:12.43111115 +0000 UTC m=+8.999644435" Sep 9 00:48:13.292498 containerd[1540]: time="2025-09-09T00:48:13.292462715Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:13.299598 containerd[1540]: time="2025-09-09T00:48:13.299552176Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 9 00:48:13.306235 containerd[1540]: 
time="2025-09-09T00:48:13.306121961Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:13.315326 containerd[1540]: time="2025-09-09T00:48:13.315245109Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:13.316012 containerd[1540]: time="2025-09-09T00:48:13.315906877Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.413629246s" Sep 9 00:48:13.316012 containerd[1540]: time="2025-09-09T00:48:13.315932664Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 9 00:48:13.327365 containerd[1540]: time="2025-09-09T00:48:13.327337680Z" level=info msg="CreateContainer within sandbox \"299d65744e68f3811740ed465f8517a09178ceae77349b076d8ea92657be490a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 00:48:13.338735 containerd[1540]: time="2025-09-09T00:48:13.338696259Z" level=info msg="CreateContainer within sandbox \"299d65744e68f3811740ed465f8517a09178ceae77349b076d8ea92657be490a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f11554ca9215fba814c78b265945fa03e0b82bf8c2cab4fd595514b0ec56215b\"" Sep 9 00:48:13.340038 containerd[1540]: time="2025-09-09T00:48:13.339310995Z" level=info msg="StartContainer for \"f11554ca9215fba814c78b265945fa03e0b82bf8c2cab4fd595514b0ec56215b\"" Sep 9 00:48:13.366260 systemd[1]: Started 
cri-containerd-f11554ca9215fba814c78b265945fa03e0b82bf8c2cab4fd595514b0ec56215b.scope - libcontainer container f11554ca9215fba814c78b265945fa03e0b82bf8c2cab4fd595514b0ec56215b. Sep 9 00:48:13.386139 containerd[1540]: time="2025-09-09T00:48:13.386104931Z" level=info msg="StartContainer for \"f11554ca9215fba814c78b265945fa03e0b82bf8c2cab4fd595514b0ec56215b\" returns successfully" Sep 9 00:48:13.680882 kubelet[2738]: I0909 00:48:13.679159 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-c296g" podStartSLOduration=1.263686054 podStartE2EDuration="4.679147238s" podCreationTimestamp="2025-09-09 00:48:09 +0000 UTC" firstStartedPulling="2025-09-09 00:48:09.901058967 +0000 UTC m=+6.469592248" lastFinishedPulling="2025-09-09 00:48:13.316520149 +0000 UTC m=+9.885053432" observedRunningTime="2025-09-09 00:48:13.679067368 +0000 UTC m=+10.247600657" watchObservedRunningTime="2025-09-09 00:48:13.679147238 +0000 UTC m=+10.247680521" Sep 9 00:48:20.042514 sudo[1834]: pam_unix(sudo:session): session closed for user root Sep 9 00:48:20.062112 sshd[1831]: pam_unix(sshd:session): session closed for user core Sep 9 00:48:20.066502 systemd[1]: sshd@6-139.178.70.101:22-139.178.89.65:35128.service: Deactivated successfully. Sep 9 00:48:20.069202 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:48:20.069354 systemd[1]: session-9.scope: Consumed 3.108s CPU time, 147.2M memory peak, 0B memory swap peak. Sep 9 00:48:20.069936 systemd-logind[1516]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:48:20.070930 systemd-logind[1516]: Removed session 9. 
Sep 9 00:48:22.760698 kubelet[2738]: I0909 00:48:22.760097 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/897677a8-19ee-43ef-a34a-11b4ea9e764b-typha-certs\") pod \"calico-typha-77b8df694b-zp8gk\" (UID: \"897677a8-19ee-43ef-a34a-11b4ea9e764b\") " pod="calico-system/calico-typha-77b8df694b-zp8gk" Sep 9 00:48:22.760698 kubelet[2738]: I0909 00:48:22.760140 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f26408e1-97b6-4db9-9588-833640941f33-var-run-calico\") pod \"calico-node-mqh6g\" (UID: \"f26408e1-97b6-4db9-9588-833640941f33\") " pod="calico-system/calico-node-mqh6g" Sep 9 00:48:22.760698 kubelet[2738]: I0909 00:48:22.760236 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f26408e1-97b6-4db9-9588-833640941f33-cni-bin-dir\") pod \"calico-node-mqh6g\" (UID: \"f26408e1-97b6-4db9-9588-833640941f33\") " pod="calico-system/calico-node-mqh6g" Sep 9 00:48:22.760698 kubelet[2738]: I0909 00:48:22.760255 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f26408e1-97b6-4db9-9588-833640941f33-var-lib-calico\") pod \"calico-node-mqh6g\" (UID: \"f26408e1-97b6-4db9-9588-833640941f33\") " pod="calico-system/calico-node-mqh6g" Sep 9 00:48:22.760698 kubelet[2738]: I0909 00:48:22.760270 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bwx6\" (UniqueName: \"kubernetes.io/projected/897677a8-19ee-43ef-a34a-11b4ea9e764b-kube-api-access-4bwx6\") pod \"calico-typha-77b8df694b-zp8gk\" (UID: \"897677a8-19ee-43ef-a34a-11b4ea9e764b\") " pod="calico-system/calico-typha-77b8df694b-zp8gk" Sep 9 00:48:22.761050 
kubelet[2738]: I0909 00:48:22.760355 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f26408e1-97b6-4db9-9588-833640941f33-cni-log-dir\") pod \"calico-node-mqh6g\" (UID: \"f26408e1-97b6-4db9-9588-833640941f33\") " pod="calico-system/calico-node-mqh6g" Sep 9 00:48:22.761050 kubelet[2738]: I0909 00:48:22.760370 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/897677a8-19ee-43ef-a34a-11b4ea9e764b-tigera-ca-bundle\") pod \"calico-typha-77b8df694b-zp8gk\" (UID: \"897677a8-19ee-43ef-a34a-11b4ea9e764b\") " pod="calico-system/calico-typha-77b8df694b-zp8gk" Sep 9 00:48:22.761050 kubelet[2738]: I0909 00:48:22.760439 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f26408e1-97b6-4db9-9588-833640941f33-flexvol-driver-host\") pod \"calico-node-mqh6g\" (UID: \"f26408e1-97b6-4db9-9588-833640941f33\") " pod="calico-system/calico-node-mqh6g" Sep 9 00:48:22.761050 kubelet[2738]: I0909 00:48:22.760450 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f26408e1-97b6-4db9-9588-833640941f33-lib-modules\") pod \"calico-node-mqh6g\" (UID: \"f26408e1-97b6-4db9-9588-833640941f33\") " pod="calico-system/calico-node-mqh6g" Sep 9 00:48:22.761050 kubelet[2738]: I0909 00:48:22.760520 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f26408e1-97b6-4db9-9588-833640941f33-tigera-ca-bundle\") pod \"calico-node-mqh6g\" (UID: \"f26408e1-97b6-4db9-9588-833640941f33\") " pod="calico-system/calico-node-mqh6g" Sep 9 00:48:22.770292 kubelet[2738]: I0909 00:48:22.760533 
2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f26408e1-97b6-4db9-9588-833640941f33-cni-net-dir\") pod \"calico-node-mqh6g\" (UID: \"f26408e1-97b6-4db9-9588-833640941f33\") " pod="calico-system/calico-node-mqh6g" Sep 9 00:48:22.770292 kubelet[2738]: I0909 00:48:22.760577 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f26408e1-97b6-4db9-9588-833640941f33-xtables-lock\") pod \"calico-node-mqh6g\" (UID: \"f26408e1-97b6-4db9-9588-833640941f33\") " pod="calico-system/calico-node-mqh6g" Sep 9 00:48:22.770292 kubelet[2738]: I0909 00:48:22.760590 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svjgh\" (UniqueName: \"kubernetes.io/projected/f26408e1-97b6-4db9-9588-833640941f33-kube-api-access-svjgh\") pod \"calico-node-mqh6g\" (UID: \"f26408e1-97b6-4db9-9588-833640941f33\") " pod="calico-system/calico-node-mqh6g" Sep 9 00:48:22.770292 kubelet[2738]: I0909 00:48:22.760664 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f26408e1-97b6-4db9-9588-833640941f33-policysync\") pod \"calico-node-mqh6g\" (UID: \"f26408e1-97b6-4db9-9588-833640941f33\") " pod="calico-system/calico-node-mqh6g" Sep 9 00:48:22.770292 kubelet[2738]: I0909 00:48:22.760679 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f26408e1-97b6-4db9-9588-833640941f33-node-certs\") pod \"calico-node-mqh6g\" (UID: \"f26408e1-97b6-4db9-9588-833640941f33\") " pod="calico-system/calico-node-mqh6g" Sep 9 00:48:22.801095 systemd[1]: Created slice kubepods-besteffort-pod897677a8_19ee_43ef_a34a_11b4ea9e764b.slice - libcontainer container 
kubepods-besteffort-pod897677a8_19ee_43ef_a34a_11b4ea9e764b.slice. Sep 9 00:48:22.815674 systemd[1]: Created slice kubepods-besteffort-podf26408e1_97b6_4db9_9588_833640941f33.slice - libcontainer container kubepods-besteffort-podf26408e1_97b6_4db9_9588_833640941f33.slice. Sep 9 00:48:22.826810 kubelet[2738]: E0909 00:48:22.826653 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6kb7s" podUID="09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5" Sep 9 00:48:22.861498 kubelet[2738]: I0909 00:48:22.861470 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5-kubelet-dir\") pod \"csi-node-driver-6kb7s\" (UID: \"09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5\") " pod="calico-system/csi-node-driver-6kb7s" Sep 9 00:48:22.862149 kubelet[2738]: I0909 00:48:22.861923 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5-registration-dir\") pod \"csi-node-driver-6kb7s\" (UID: \"09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5\") " pod="calico-system/csi-node-driver-6kb7s" Sep 9 00:48:22.862149 kubelet[2738]: I0909 00:48:22.861941 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5-socket-dir\") pod \"csi-node-driver-6kb7s\" (UID: \"09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5\") " pod="calico-system/csi-node-driver-6kb7s" Sep 9 00:48:22.862149 kubelet[2738]: I0909 00:48:22.861953 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" 
(UniqueName: \"kubernetes.io/host-path/09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5-varrun\") pod \"csi-node-driver-6kb7s\" (UID: \"09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5\") " pod="calico-system/csi-node-driver-6kb7s" Sep 9 00:48:22.862149 kubelet[2738]: I0909 00:48:22.861972 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf826\" (UniqueName: \"kubernetes.io/projected/09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5-kube-api-access-sf826\") pod \"csi-node-driver-6kb7s\" (UID: \"09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5\") " pod="calico-system/csi-node-driver-6kb7s" Sep 9 00:48:22.935134 kubelet[2738]: E0909 00:48:22.934900 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:48:22.935134 kubelet[2738]: W0909 00:48:22.934923 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:48:22.935134 kubelet[2738]: E0909 00:48:22.934943 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:48:22.935582 kubelet[2738]: E0909 00:48:22.935575 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:48:22.935645 kubelet[2738]: W0909 00:48:22.935637 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:48:22.935707 kubelet[2738]: E0909 00:48:22.935691 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:48:22.937146 kubelet[2738]: E0909 00:48:22.937054 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:48:22.937146 kubelet[2738]: W0909 00:48:22.937114 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:48:22.937146 kubelet[2738]: E0909 00:48:22.937132 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:48:22.942758 kubelet[2738]: E0909 00:48:22.942690 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:48:22.942758 kubelet[2738]: W0909 00:48:22.942707 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:48:22.942758 kubelet[2738]: E0909 00:48:22.942724 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:48:23.127152 containerd[1540]: time="2025-09-09T00:48:23.126789385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77b8df694b-zp8gk,Uid:897677a8-19ee-43ef-a34a-11b4ea9e764b,Namespace:calico-system,Attempt:0,}" Sep 9 00:48:23.128061 containerd[1540]: time="2025-09-09T00:48:23.127216026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mqh6g,Uid:f26408e1-97b6-4db9-9588-833640941f33,Namespace:calico-system,Attempt:0,}" Sep 9 00:48:23.213633 containerd[1540]: time="2025-09-09T00:48:23.213369176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:48:23.214165 containerd[1540]: time="2025-09-09T00:48:23.213606185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:48:23.214449 containerd[1540]: time="2025-09-09T00:48:23.214228532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:48:23.214449 containerd[1540]: time="2025-09-09T00:48:23.214257068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:48:23.214449 containerd[1540]: time="2025-09-09T00:48:23.214264382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:23.214449 containerd[1540]: time="2025-09-09T00:48:23.214328953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:23.214700 containerd[1540]: time="2025-09-09T00:48:23.214138346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:23.214700 containerd[1540]: time="2025-09-09T00:48:23.214400534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:23.262137 systemd[1]: Started cri-containerd-5d3af1d5d6756d358d66388c938144b7a82ecf01bf3d54dfece4671cb9271804.scope - libcontainer container 5d3af1d5d6756d358d66388c938144b7a82ecf01bf3d54dfece4671cb9271804. Sep 9 00:48:23.267069 systemd[1]: Started cri-containerd-923df43181c406cf88a2ec5a632caa6e67c86c8fd69c2d784c532aee806d6c68.scope - libcontainer container 923df43181c406cf88a2ec5a632caa6e67c86c8fd69c2d784c532aee806d6c68. Sep 9 00:48:23.304083 containerd[1540]: time="2025-09-09T00:48:23.303685188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mqh6g,Uid:f26408e1-97b6-4db9-9588-833640941f33,Namespace:calico-system,Attempt:0,} returns sandbox id \"923df43181c406cf88a2ec5a632caa6e67c86c8fd69c2d784c532aee806d6c68\"" Sep 9 00:48:23.308961 containerd[1540]: time="2025-09-09T00:48:23.307252185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 00:48:23.342274 containerd[1540]: time="2025-09-09T00:48:23.342250235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77b8df694b-zp8gk,Uid:897677a8-19ee-43ef-a34a-11b4ea9e764b,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d3af1d5d6756d358d66388c938144b7a82ecf01bf3d54dfece4671cb9271804\"" Sep 9 00:48:24.626416 kubelet[2738]: E0909 00:48:24.626376 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6kb7s" podUID="09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5" Sep 9 00:48:24.806059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3419803436.mount: 
Deactivated successfully. Sep 9 00:48:24.950183 containerd[1540]: time="2025-09-09T00:48:24.950091381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:24.980468 containerd[1540]: time="2025-09-09T00:48:24.980248541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Sep 9 00:48:24.989881 containerd[1540]: time="2025-09-09T00:48:24.989096584Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:24.997912 containerd[1540]: time="2025-09-09T00:48:24.997468878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:24.997912 containerd[1540]: time="2025-09-09T00:48:24.997833611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.69055217s" Sep 9 00:48:24.997912 containerd[1540]: time="2025-09-09T00:48:24.997853737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 00:48:24.998837 containerd[1540]: time="2025-09-09T00:48:24.998601106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 00:48:25.013118 containerd[1540]: time="2025-09-09T00:48:25.013029270Z" level=info 
msg="CreateContainer within sandbox \"923df43181c406cf88a2ec5a632caa6e67c86c8fd69c2d784c532aee806d6c68\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 00:48:25.085743 containerd[1540]: time="2025-09-09T00:48:25.085716347Z" level=info msg="CreateContainer within sandbox \"923df43181c406cf88a2ec5a632caa6e67c86c8fd69c2d784c532aee806d6c68\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"47636a8300fad077e33c84ecf7a2b91b14626eb54b1840326a39c0a8b4109939\"" Sep 9 00:48:25.086431 containerd[1540]: time="2025-09-09T00:48:25.086420729Z" level=info msg="StartContainer for \"47636a8300fad077e33c84ecf7a2b91b14626eb54b1840326a39c0a8b4109939\"" Sep 9 00:48:25.111123 systemd[1]: Started cri-containerd-47636a8300fad077e33c84ecf7a2b91b14626eb54b1840326a39c0a8b4109939.scope - libcontainer container 47636a8300fad077e33c84ecf7a2b91b14626eb54b1840326a39c0a8b4109939. Sep 9 00:48:25.130607 containerd[1540]: time="2025-09-09T00:48:25.130580636Z" level=info msg="StartContainer for \"47636a8300fad077e33c84ecf7a2b91b14626eb54b1840326a39c0a8b4109939\" returns successfully" Sep 9 00:48:25.140052 systemd[1]: cri-containerd-47636a8300fad077e33c84ecf7a2b91b14626eb54b1840326a39c0a8b4109939.scope: Deactivated successfully. Sep 9 00:48:25.159379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47636a8300fad077e33c84ecf7a2b91b14626eb54b1840326a39c0a8b4109939-rootfs.mount: Deactivated successfully. 
Sep 9 00:48:25.641357 containerd[1540]: time="2025-09-09T00:48:25.621200442Z" level=info msg="shim disconnected" id=47636a8300fad077e33c84ecf7a2b91b14626eb54b1840326a39c0a8b4109939 namespace=k8s.io Sep 9 00:48:25.641520 containerd[1540]: time="2025-09-09T00:48:25.641359219Z" level=warning msg="cleaning up after shim disconnected" id=47636a8300fad077e33c84ecf7a2b91b14626eb54b1840326a39c0a8b4109939 namespace=k8s.io Sep 9 00:48:25.641520 containerd[1540]: time="2025-09-09T00:48:25.641371451Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:48:26.628232 kubelet[2738]: E0909 00:48:26.627292 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6kb7s" podUID="09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5" Sep 9 00:48:27.755614 containerd[1540]: time="2025-09-09T00:48:27.755027194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:27.762244 containerd[1540]: time="2025-09-09T00:48:27.759394861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548" Sep 9 00:48:27.764150 containerd[1540]: time="2025-09-09T00:48:27.763392378Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:27.768804 containerd[1540]: time="2025-09-09T00:48:27.768768950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:27.771921 containerd[1540]: time="2025-09-09T00:48:27.771877532Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.773071705s" Sep 9 00:48:27.771921 containerd[1540]: time="2025-09-09T00:48:27.771918153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 00:48:27.776888 containerd[1540]: time="2025-09-09T00:48:27.776859220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 00:48:27.795765 containerd[1540]: time="2025-09-09T00:48:27.795516728Z" level=info msg="CreateContainer within sandbox \"5d3af1d5d6756d358d66388c938144b7a82ecf01bf3d54dfece4671cb9271804\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 00:48:27.806896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1527800676.mount: Deactivated successfully. Sep 9 00:48:27.809409 containerd[1540]: time="2025-09-09T00:48:27.809374817Z" level=info msg="CreateContainer within sandbox \"5d3af1d5d6756d358d66388c938144b7a82ecf01bf3d54dfece4671cb9271804\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0a1312f1bdb133289bdd2ea1786d68e6b836235f865734bf6e2416a63c4680a0\"" Sep 9 00:48:27.811268 containerd[1540]: time="2025-09-09T00:48:27.809739380Z" level=info msg="StartContainer for \"0a1312f1bdb133289bdd2ea1786d68e6b836235f865734bf6e2416a63c4680a0\"" Sep 9 00:48:27.858205 systemd[1]: Started cri-containerd-0a1312f1bdb133289bdd2ea1786d68e6b836235f865734bf6e2416a63c4680a0.scope - libcontainer container 0a1312f1bdb133289bdd2ea1786d68e6b836235f865734bf6e2416a63c4680a0. 
Sep 9 00:48:27.913455 containerd[1540]: time="2025-09-09T00:48:27.913394501Z" level=info msg="StartContainer for \"0a1312f1bdb133289bdd2ea1786d68e6b836235f865734bf6e2416a63c4680a0\" returns successfully" Sep 9 00:48:28.626842 kubelet[2738]: E0909 00:48:28.626757 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6kb7s" podUID="09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5" Sep 9 00:48:29.799802 kubelet[2738]: I0909 00:48:29.799571 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:48:30.627914 kubelet[2738]: E0909 00:48:30.627279 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6kb7s" podUID="09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5" Sep 9 00:48:31.302547 containerd[1540]: time="2025-09-09T00:48:31.302513035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:31.310250 containerd[1540]: time="2025-09-09T00:48:31.310003582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 00:48:31.317823 containerd[1540]: time="2025-09-09T00:48:31.315594803Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:31.324096 containerd[1540]: time="2025-09-09T00:48:31.323357705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:31.324096 containerd[1540]: time="2025-09-09T00:48:31.323780124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.546606769s" Sep 9 00:48:31.324096 containerd[1540]: time="2025-09-09T00:48:31.323794787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 00:48:31.331375 containerd[1540]: time="2025-09-09T00:48:31.331346398Z" level=info msg="CreateContainer within sandbox \"923df43181c406cf88a2ec5a632caa6e67c86c8fd69c2d784c532aee806d6c68\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 00:48:31.399260 containerd[1540]: time="2025-09-09T00:48:31.399218906Z" level=info msg="CreateContainer within sandbox \"923df43181c406cf88a2ec5a632caa6e67c86c8fd69c2d784c532aee806d6c68\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0c3c9391b589ac25faa55971a921cf6b61352a0b9188597dbf0a45a03d396058\"" Sep 9 00:48:31.399642 containerd[1540]: time="2025-09-09T00:48:31.399628639Z" level=info msg="StartContainer for \"0c3c9391b589ac25faa55971a921cf6b61352a0b9188597dbf0a45a03d396058\"" Sep 9 00:48:31.420109 systemd[1]: run-containerd-runc-k8s.io-0c3c9391b589ac25faa55971a921cf6b61352a0b9188597dbf0a45a03d396058-runc.Rjayvn.mount: Deactivated successfully. Sep 9 00:48:31.433189 systemd[1]: Started cri-containerd-0c3c9391b589ac25faa55971a921cf6b61352a0b9188597dbf0a45a03d396058.scope - libcontainer container 0c3c9391b589ac25faa55971a921cf6b61352a0b9188597dbf0a45a03d396058. 
Sep 9 00:48:31.457120 containerd[1540]: time="2025-09-09T00:48:31.457079285Z" level=info msg="StartContainer for \"0c3c9391b589ac25faa55971a921cf6b61352a0b9188597dbf0a45a03d396058\" returns successfully" Sep 9 00:48:31.897049 kubelet[2738]: I0909 00:48:31.888249 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77b8df694b-zp8gk" podStartSLOduration=5.403216275 podStartE2EDuration="9.832711962s" podCreationTimestamp="2025-09-09 00:48:22 +0000 UTC" firstStartedPulling="2025-09-09 00:48:23.343065969 +0000 UTC m=+19.911599246" lastFinishedPulling="2025-09-09 00:48:27.772561649 +0000 UTC m=+24.341094933" observedRunningTime="2025-09-09 00:48:28.907617836 +0000 UTC m=+25.476151125" watchObservedRunningTime="2025-09-09 00:48:31.832711962 +0000 UTC m=+28.401245251" Sep 9 00:48:32.626969 kubelet[2738]: E0909 00:48:32.626933 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6kb7s" podUID="09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5" Sep 9 00:48:33.256469 systemd[1]: cri-containerd-0c3c9391b589ac25faa55971a921cf6b61352a0b9188597dbf0a45a03d396058.scope: Deactivated successfully. Sep 9 00:48:33.322864 kubelet[2738]: I0909 00:48:33.322681 2738 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:48:33.327388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c3c9391b589ac25faa55971a921cf6b61352a0b9188597dbf0a45a03d396058-rootfs.mount: Deactivated successfully. 
Sep 9 00:48:33.396108 containerd[1540]: time="2025-09-09T00:48:33.395013829Z" level=info msg="shim disconnected" id=0c3c9391b589ac25faa55971a921cf6b61352a0b9188597dbf0a45a03d396058 namespace=k8s.io Sep 9 00:48:33.396108 containerd[1540]: time="2025-09-09T00:48:33.395994328Z" level=warning msg="cleaning up after shim disconnected" id=0c3c9391b589ac25faa55971a921cf6b61352a0b9188597dbf0a45a03d396058 namespace=k8s.io Sep 9 00:48:33.396108 containerd[1540]: time="2025-09-09T00:48:33.396002460Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:48:33.405217 containerd[1540]: time="2025-09-09T00:48:33.404672738Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 9 00:48:33.824877 kubelet[2738]: I0909 00:48:33.824313 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw2cb\" (UniqueName: \"kubernetes.io/projected/d663ab63-4876-4bed-b10d-08dca1619cbf-kube-api-access-pw2cb\") pod \"calico-apiserver-774984f779-zprwx\" (UID: \"d663ab63-4876-4bed-b10d-08dca1619cbf\") " pod="calico-apiserver/calico-apiserver-774984f779-zprwx" Sep 9 00:48:33.824877 kubelet[2738]: I0909 00:48:33.824363 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l225k\" (UniqueName: \"kubernetes.io/projected/cfbbe3a7-b775-4416-b316-762d53f81c5d-kube-api-access-l225k\") pod \"calico-kube-controllers-7cc69d998f-tp8b8\" (UID: \"cfbbe3a7-b775-4416-b316-762d53f81c5d\") " pod="calico-system/calico-kube-controllers-7cc69d998f-tp8b8" Sep 9 00:48:33.824877 kubelet[2738]: I0909 00:48:33.824384 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cfbbe3a7-b775-4416-b316-762d53f81c5d-tigera-ca-bundle\") pod \"calico-kube-controllers-7cc69d998f-tp8b8\" (UID: \"cfbbe3a7-b775-4416-b316-762d53f81c5d\") " pod="calico-system/calico-kube-controllers-7cc69d998f-tp8b8" Sep 9 00:48:33.824877 kubelet[2738]: I0909 00:48:33.824411 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b27k9\" (UniqueName: \"kubernetes.io/projected/fe7ec415-6d72-4459-b5ae-85006f84662b-kube-api-access-b27k9\") pod \"coredns-674b8bbfcf-7lw2w\" (UID: \"fe7ec415-6d72-4459-b5ae-85006f84662b\") " pod="kube-system/coredns-674b8bbfcf-7lw2w" Sep 9 00:48:33.824877 kubelet[2738]: I0909 00:48:33.824426 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0dc5384-6a49-429a-b83f-4f8249484a53-config-volume\") pod \"coredns-674b8bbfcf-zmwvb\" (UID: \"c0dc5384-6a49-429a-b83f-4f8249484a53\") " pod="kube-system/coredns-674b8bbfcf-zmwvb" Sep 9 00:48:33.825128 kubelet[2738]: I0909 00:48:33.824440 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcql6\" (UniqueName: \"kubernetes.io/projected/c0dc5384-6a49-429a-b83f-4f8249484a53-kube-api-access-xcql6\") pod \"coredns-674b8bbfcf-zmwvb\" (UID: \"c0dc5384-6a49-429a-b83f-4f8249484a53\") " pod="kube-system/coredns-674b8bbfcf-zmwvb" Sep 9 00:48:33.825128 kubelet[2738]: I0909 00:48:33.824453 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d663ab63-4876-4bed-b10d-08dca1619cbf-calico-apiserver-certs\") pod \"calico-apiserver-774984f779-zprwx\" (UID: \"d663ab63-4876-4bed-b10d-08dca1619cbf\") " pod="calico-apiserver/calico-apiserver-774984f779-zprwx" Sep 9 00:48:33.825128 kubelet[2738]: I0909 00:48:33.824470 2738 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe7ec415-6d72-4459-b5ae-85006f84662b-config-volume\") pod \"coredns-674b8bbfcf-7lw2w\" (UID: \"fe7ec415-6d72-4459-b5ae-85006f84662b\") " pod="kube-system/coredns-674b8bbfcf-7lw2w" Sep 9 00:48:33.869269 systemd[1]: Created slice kubepods-besteffort-pod9bdcee72_ae1b_4328_a99d_e56db99df127.slice - libcontainer container kubepods-besteffort-pod9bdcee72_ae1b_4328_a99d_e56db99df127.slice. Sep 9 00:48:33.885324 containerd[1540]: time="2025-09-09T00:48:33.885292363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 00:48:33.890581 systemd[1]: Created slice kubepods-burstable-podc0dc5384_6a49_429a_b83f_4f8249484a53.slice - libcontainer container kubepods-burstable-podc0dc5384_6a49_429a_b83f_4f8249484a53.slice. Sep 9 00:48:33.907373 systemd[1]: Created slice kubepods-besteffort-podcfbbe3a7_b775_4416_b316_762d53f81c5d.slice - libcontainer container kubepods-besteffort-podcfbbe3a7_b775_4416_b316_762d53f81c5d.slice. Sep 9 00:48:33.910648 kubelet[2738]: E0909 00:48:33.903185 2738 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bdcee72_ae1b_4328_a99d_e56db99df127.slice\": RecentStats: unable to find data in memory cache]" Sep 9 00:48:33.917407 systemd[1]: Created slice kubepods-burstable-podfe7ec415_6d72_4459_b5ae_85006f84662b.slice - libcontainer container kubepods-burstable-podfe7ec415_6d72_4459_b5ae_85006f84662b.slice. Sep 9 00:48:33.923165 systemd[1]: Created slice kubepods-besteffort-pod23688cde_a65d_4c6c_9d95_5a6f71851bf6.slice - libcontainer container kubepods-besteffort-pod23688cde_a65d_4c6c_9d95_5a6f71851bf6.slice. 
Sep 9 00:48:33.926415 kubelet[2738]: I0909 00:48:33.924746 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9bdcee72-ae1b-4328-a99d-e56db99df127-whisker-backend-key-pair\") pod \"whisker-6d688cd477-dl2q8\" (UID: \"9bdcee72-ae1b-4328-a99d-e56db99df127\") " pod="calico-system/whisker-6d688cd477-dl2q8" Sep 9 00:48:33.926415 kubelet[2738]: I0909 00:48:33.924778 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzcgm\" (UniqueName: \"kubernetes.io/projected/23688cde-a65d-4c6c-9d95-5a6f71851bf6-kube-api-access-tzcgm\") pod \"calico-apiserver-774984f779-qfzwz\" (UID: \"23688cde-a65d-4c6c-9d95-5a6f71851bf6\") " pod="calico-apiserver/calico-apiserver-774984f779-qfzwz" Sep 9 00:48:33.926415 kubelet[2738]: I0909 00:48:33.924799 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8skqg\" (UniqueName: \"kubernetes.io/projected/456193ac-db3d-4857-b7e3-7054cee2a893-kube-api-access-8skqg\") pod \"goldmane-54d579b49d-bg69g\" (UID: \"456193ac-db3d-4857-b7e3-7054cee2a893\") " pod="calico-system/goldmane-54d579b49d-bg69g" Sep 9 00:48:33.926415 kubelet[2738]: I0909 00:48:33.924828 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/456193ac-db3d-4857-b7e3-7054cee2a893-goldmane-key-pair\") pod \"goldmane-54d579b49d-bg69g\" (UID: \"456193ac-db3d-4857-b7e3-7054cee2a893\") " pod="calico-system/goldmane-54d579b49d-bg69g" Sep 9 00:48:33.926415 kubelet[2738]: I0909 00:48:33.924845 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2jr4\" (UniqueName: \"kubernetes.io/projected/9bdcee72-ae1b-4328-a99d-e56db99df127-kube-api-access-p2jr4\") pod \"whisker-6d688cd477-dl2q8\" (UID: 
\"9bdcee72-ae1b-4328-a99d-e56db99df127\") " pod="calico-system/whisker-6d688cd477-dl2q8" Sep 9 00:48:33.926674 kubelet[2738]: I0909 00:48:33.924903 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/456193ac-db3d-4857-b7e3-7054cee2a893-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-bg69g\" (UID: \"456193ac-db3d-4857-b7e3-7054cee2a893\") " pod="calico-system/goldmane-54d579b49d-bg69g" Sep 9 00:48:33.926674 kubelet[2738]: I0909 00:48:33.924919 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bdcee72-ae1b-4328-a99d-e56db99df127-whisker-ca-bundle\") pod \"whisker-6d688cd477-dl2q8\" (UID: \"9bdcee72-ae1b-4328-a99d-e56db99df127\") " pod="calico-system/whisker-6d688cd477-dl2q8" Sep 9 00:48:33.926674 kubelet[2738]: I0909 00:48:33.924936 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/23688cde-a65d-4c6c-9d95-5a6f71851bf6-calico-apiserver-certs\") pod \"calico-apiserver-774984f779-qfzwz\" (UID: \"23688cde-a65d-4c6c-9d95-5a6f71851bf6\") " pod="calico-apiserver/calico-apiserver-774984f779-qfzwz" Sep 9 00:48:33.926674 kubelet[2738]: I0909 00:48:33.924966 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/456193ac-db3d-4857-b7e3-7054cee2a893-config\") pod \"goldmane-54d579b49d-bg69g\" (UID: \"456193ac-db3d-4857-b7e3-7054cee2a893\") " pod="calico-system/goldmane-54d579b49d-bg69g" Sep 9 00:48:33.933964 systemd[1]: Created slice kubepods-besteffort-podd663ab63_4876_4bed_b10d_08dca1619cbf.slice - libcontainer container kubepods-besteffort-podd663ab63_4876_4bed_b10d_08dca1619cbf.slice. 
Sep 9 00:48:34.001296 systemd[1]: Created slice kubepods-besteffort-pod456193ac_db3d_4857_b7e3_7054cee2a893.slice - libcontainer container kubepods-besteffort-pod456193ac_db3d_4857_b7e3_7054cee2a893.slice. Sep 9 00:48:34.196015 containerd[1540]: time="2025-09-09T00:48:34.195393028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d688cd477-dl2q8,Uid:9bdcee72-ae1b-4328-a99d-e56db99df127,Namespace:calico-system,Attempt:0,}" Sep 9 00:48:34.203621 containerd[1540]: time="2025-09-09T00:48:34.202779318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zmwvb,Uid:c0dc5384-6a49-429a-b83f-4f8249484a53,Namespace:kube-system,Attempt:0,}" Sep 9 00:48:34.215474 containerd[1540]: time="2025-09-09T00:48:34.215446765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cc69d998f-tp8b8,Uid:cfbbe3a7-b775-4416-b316-762d53f81c5d,Namespace:calico-system,Attempt:0,}" Sep 9 00:48:34.223678 containerd[1540]: time="2025-09-09T00:48:34.223651807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7lw2w,Uid:fe7ec415-6d72-4459-b5ae-85006f84662b,Namespace:kube-system,Attempt:0,}" Sep 9 00:48:34.247947 containerd[1540]: time="2025-09-09T00:48:34.247792878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774984f779-qfzwz,Uid:23688cde-a65d-4c6c-9d95-5a6f71851bf6,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:48:34.276591 containerd[1540]: time="2025-09-09T00:48:34.276566181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774984f779-zprwx,Uid:d663ab63-4876-4bed-b10d-08dca1619cbf,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:48:34.315120 containerd[1540]: time="2025-09-09T00:48:34.315095031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-bg69g,Uid:456193ac-db3d-4857-b7e3-7054cee2a893,Namespace:calico-system,Attempt:0,}" Sep 9 00:48:34.633886 systemd[1]: Created slice 
kubepods-besteffort-pod09b5c400_88c5_4a1e_9b8c_a3d25d8f76d5.slice - libcontainer container kubepods-besteffort-pod09b5c400_88c5_4a1e_9b8c_a3d25d8f76d5.slice. Sep 9 00:48:34.637397 containerd[1540]: time="2025-09-09T00:48:34.637166267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6kb7s,Uid:09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5,Namespace:calico-system,Attempt:0,}" Sep 9 00:48:34.661158 containerd[1540]: time="2025-09-09T00:48:34.661064421Z" level=error msg="Failed to destroy network for sandbox \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.666615 containerd[1540]: time="2025-09-09T00:48:34.666561412Z" level=error msg="Failed to destroy network for sandbox \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.668323 containerd[1540]: time="2025-09-09T00:48:34.668251026Z" level=error msg="encountered an error cleaning up failed sandbox \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.668323 containerd[1540]: time="2025-09-09T00:48:34.668315268Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d688cd477-dl2q8,Uid:9bdcee72-ae1b-4328-a99d-e56db99df127,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.668576 containerd[1540]: time="2025-09-09T00:48:34.668551396Z" level=error msg="encountered an error cleaning up failed sandbox \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.668612 containerd[1540]: time="2025-09-09T00:48:34.668590051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7lw2w,Uid:fe7ec415-6d72-4459-b5ae-85006f84662b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.675078 containerd[1540]: time="2025-09-09T00:48:34.674292247Z" level=error msg="Failed to destroy network for sandbox \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.675078 containerd[1540]: time="2025-09-09T00:48:34.674397944Z" level=error msg="Failed to destroy network for sandbox \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.675078 containerd[1540]: 
time="2025-09-09T00:48:34.674720675Z" level=error msg="encountered an error cleaning up failed sandbox \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.675078 containerd[1540]: time="2025-09-09T00:48:34.674750645Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774984f779-qfzwz,Uid:23688cde-a65d-4c6c-9d95-5a6f71851bf6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.675078 containerd[1540]: time="2025-09-09T00:48:34.674826657Z" level=error msg="Failed to destroy network for sandbox \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.675308 containerd[1540]: time="2025-09-09T00:48:34.675114691Z" level=error msg="encountered an error cleaning up failed sandbox \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.675308 containerd[1540]: time="2025-09-09T00:48:34.675139264Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-54d579b49d-bg69g,Uid:456193ac-db3d-4857-b7e3-7054cee2a893,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.675308 containerd[1540]: time="2025-09-09T00:48:34.675255892Z" level=error msg="Failed to destroy network for sandbox \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.675748 containerd[1540]: time="2025-09-09T00:48:34.675472417Z" level=error msg="encountered an error cleaning up failed sandbox \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.675748 containerd[1540]: time="2025-09-09T00:48:34.675507661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cc69d998f-tp8b8,Uid:cfbbe3a7-b775-4416-b316-762d53f81c5d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.675748 containerd[1540]: time="2025-09-09T00:48:34.675580436Z" level=error msg="Failed to destroy network for sandbox 
\"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.675865 containerd[1540]: time="2025-09-09T00:48:34.675769195Z" level=error msg="encountered an error cleaning up failed sandbox \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.675865 containerd[1540]: time="2025-09-09T00:48:34.675815674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774984f779-zprwx,Uid:d663ab63-4876-4bed-b10d-08dca1619cbf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.676522 containerd[1540]: time="2025-09-09T00:48:34.676329422Z" level=error msg="encountered an error cleaning up failed sandbox \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.676634 containerd[1540]: time="2025-09-09T00:48:34.676614985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zmwvb,Uid:c0dc5384-6a49-429a-b83f-4f8249484a53,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.682979 kubelet[2738]: E0909 00:48:34.682939 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.684346 kubelet[2738]: E0909 00:48:34.684218 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zmwvb" Sep 9 00:48:34.685092 kubelet[2738]: E0909 00:48:34.685054 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.685846 kubelet[2738]: E0909 00:48:34.685820 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7lw2w" Sep 9 00:48:34.686626 kubelet[2738]: E0909 00:48:34.686612 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7lw2w" Sep 9 00:48:34.686766 kubelet[2738]: E0909 00:48:34.686740 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7lw2w_kube-system(fe7ec415-6d72-4459-b5ae-85006f84662b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7lw2w_kube-system(fe7ec415-6d72-4459-b5ae-85006f84662b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7lw2w" podUID="fe7ec415-6d72-4459-b5ae-85006f84662b" Sep 9 00:48:34.686955 kubelet[2738]: E0909 00:48:34.686582 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zmwvb" Sep 9 00:48:34.687134 kubelet[2738]: E0909 00:48:34.687120 2738 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zmwvb_kube-system(c0dc5384-6a49-429a-b83f-4f8249484a53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zmwvb_kube-system(c0dc5384-6a49-429a-b83f-4f8249484a53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zmwvb" podUID="c0dc5384-6a49-429a-b83f-4f8249484a53" Sep 9 00:48:34.687295 kubelet[2738]: E0909 00:48:34.686900 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.687295 kubelet[2738]: E0909 00:48:34.687202 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-bg69g" Sep 9 00:48:34.687295 kubelet[2738]: E0909 00:48:34.687213 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-bg69g" Sep 9 00:48:34.687646 kubelet[2738]: E0909 00:48:34.687234 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-bg69g_calico-system(456193ac-db3d-4857-b7e3-7054cee2a893)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-bg69g_calico-system(456193ac-db3d-4857-b7e3-7054cee2a893)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-bg69g" podUID="456193ac-db3d-4857-b7e3-7054cee2a893" Sep 9 00:48:34.687646 kubelet[2738]: E0909 00:48:34.687024 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.687646 kubelet[2738]: E0909 00:48:34.687582 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-774984f779-zprwx" Sep 9 00:48:34.689525 kubelet[2738]: E0909 00:48:34.687598 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-774984f779-zprwx" Sep 9 00:48:34.689525 kubelet[2738]: E0909 00:48:34.687623 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-774984f779-zprwx_calico-apiserver(d663ab63-4876-4bed-b10d-08dca1619cbf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-774984f779-zprwx_calico-apiserver(d663ab63-4876-4bed-b10d-08dca1619cbf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-774984f779-zprwx" podUID="d663ab63-4876-4bed-b10d-08dca1619cbf" Sep 9 00:48:34.690017 kubelet[2738]: E0909 00:48:34.686887 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.690017 kubelet[2738]: E0909 00:48:34.689818 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-774984f779-qfzwz" Sep 9 00:48:34.690017 kubelet[2738]: E0909 00:48:34.689850 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-774984f779-qfzwz" Sep 9 00:48:34.690564 kubelet[2738]: E0909 00:48:34.689907 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-774984f779-qfzwz_calico-apiserver(23688cde-a65d-4c6c-9d95-5a6f71851bf6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-774984f779-qfzwz_calico-apiserver(23688cde-a65d-4c6c-9d95-5a6f71851bf6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-774984f779-qfzwz" podUID="23688cde-a65d-4c6c-9d95-5a6f71851bf6" Sep 9 00:48:34.690564 kubelet[2738]: E0909 00:48:34.686914 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.690564 kubelet[2738]: E0909 
00:48:34.689951 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cc69d998f-tp8b8" Sep 9 00:48:34.691007 kubelet[2738]: E0909 00:48:34.690496 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cc69d998f-tp8b8" Sep 9 00:48:34.691007 kubelet[2738]: E0909 00:48:34.690771 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cc69d998f-tp8b8_calico-system(cfbbe3a7-b775-4416-b316-762d53f81c5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cc69d998f-tp8b8_calico-system(cfbbe3a7-b775-4416-b316-762d53f81c5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cc69d998f-tp8b8" podUID="cfbbe3a7-b775-4416-b316-762d53f81c5d" Sep 9 00:48:34.691825 kubelet[2738]: E0909 00:48:34.691554 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.691825 kubelet[2738]: E0909 00:48:34.691747 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d688cd477-dl2q8" Sep 9 00:48:34.691825 kubelet[2738]: E0909 00:48:34.691770 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d688cd477-dl2q8" Sep 9 00:48:34.692248 kubelet[2738]: E0909 00:48:34.691805 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d688cd477-dl2q8_calico-system(9bdcee72-ae1b-4328-a99d-e56db99df127)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d688cd477-dl2q8_calico-system(9bdcee72-ae1b-4328-a99d-e56db99df127)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-6d688cd477-dl2q8" podUID="9bdcee72-ae1b-4328-a99d-e56db99df127" Sep 9 00:48:34.730352 containerd[1540]: time="2025-09-09T00:48:34.730317164Z" level=error msg="Failed to destroy network for sandbox \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.730628 containerd[1540]: time="2025-09-09T00:48:34.730608488Z" level=error msg="encountered an error cleaning up failed sandbox \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.730673 containerd[1540]: time="2025-09-09T00:48:34.730645565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6kb7s,Uid:09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.731016 kubelet[2738]: E0909 00:48:34.730801 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:34.731016 kubelet[2738]: E0909 00:48:34.730868 2738 kuberuntime_sandbox.go:70] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6kb7s" Sep 9 00:48:34.731016 kubelet[2738]: E0909 00:48:34.730889 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6kb7s" Sep 9 00:48:34.731230 kubelet[2738]: E0909 00:48:34.730936 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6kb7s_calico-system(09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6kb7s_calico-system(09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6kb7s" podUID="09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5" Sep 9 00:48:34.916914 kubelet[2738]: I0909 00:48:34.916818 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Sep 9 00:48:34.919011 kubelet[2738]: I0909 00:48:34.918947 2738 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Sep 9 00:48:34.951387 kubelet[2738]: I0909 00:48:34.950661 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Sep 9 00:48:34.952196 kubelet[2738]: I0909 00:48:34.951471 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Sep 9 00:48:34.952355 kubelet[2738]: I0909 00:48:34.952333 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Sep 9 00:48:34.953125 kubelet[2738]: I0909 00:48:34.953100 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Sep 9 00:48:34.954333 kubelet[2738]: I0909 00:48:34.954316 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Sep 9 00:48:34.955131 kubelet[2738]: I0909 00:48:34.955119 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Sep 9 00:48:34.990614 containerd[1540]: time="2025-09-09T00:48:34.990028052Z" level=info msg="StopPodSandbox for \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\"" Sep 9 00:48:34.991903 containerd[1540]: time="2025-09-09T00:48:34.991628787Z" level=info msg="Ensure that sandbox 17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5 in task-service has been cleanup successfully" Sep 9 00:48:34.992803 containerd[1540]: time="2025-09-09T00:48:34.992783310Z" level=info msg="StopPodSandbox for \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\"" Sep 9 
00:48:34.993061 containerd[1540]: time="2025-09-09T00:48:34.993040415Z" level=info msg="Ensure that sandbox 63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc in task-service has been cleanup successfully" Sep 9 00:48:34.993274 containerd[1540]: time="2025-09-09T00:48:34.993148077Z" level=info msg="StopPodSandbox for \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\"" Sep 9 00:48:34.993508 containerd[1540]: time="2025-09-09T00:48:34.993454587Z" level=info msg="Ensure that sandbox 1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459 in task-service has been cleanup successfully" Sep 9 00:48:34.994163 containerd[1540]: time="2025-09-09T00:48:34.993061097Z" level=info msg="StopPodSandbox for \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\"" Sep 9 00:48:34.994163 containerd[1540]: time="2025-09-09T00:48:34.994143937Z" level=info msg="Ensure that sandbox 7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d in task-service has been cleanup successfully" Sep 9 00:48:34.995816 containerd[1540]: time="2025-09-09T00:48:34.995798350Z" level=info msg="StopPodSandbox for \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\"" Sep 9 00:48:34.996133 containerd[1540]: time="2025-09-09T00:48:34.995981232Z" level=info msg="Ensure that sandbox 3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e in task-service has been cleanup successfully" Sep 9 00:48:34.996427 containerd[1540]: time="2025-09-09T00:48:34.996406345Z" level=info msg="StopPodSandbox for \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\"" Sep 9 00:48:34.996532 containerd[1540]: time="2025-09-09T00:48:34.996513200Z" level=info msg="Ensure that sandbox 2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e in task-service has been cleanup successfully" Sep 9 00:48:34.996745 containerd[1540]: time="2025-09-09T00:48:34.993100709Z" level=info msg="StopPodSandbox for 
\"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\"" Sep 9 00:48:34.996830 containerd[1540]: time="2025-09-09T00:48:34.996811504Z" level=info msg="Ensure that sandbox cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d in task-service has been cleanup successfully" Sep 9 00:48:34.997965 containerd[1540]: time="2025-09-09T00:48:34.993118329Z" level=info msg="StopPodSandbox for \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\"" Sep 9 00:48:34.998117 containerd[1540]: time="2025-09-09T00:48:34.998064888Z" level=info msg="Ensure that sandbox 23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c in task-service has been cleanup successfully" Sep 9 00:48:35.074703 containerd[1540]: time="2025-09-09T00:48:35.074646150Z" level=error msg="StopPodSandbox for \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\" failed" error="failed to destroy network for sandbox \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:35.076611 kubelet[2738]: E0909 00:48:35.075011 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Sep 9 00:48:35.076611 kubelet[2738]: E0909 00:48:35.076202 2738 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc"} Sep 9 00:48:35.076611 kubelet[2738]: E0909 
00:48:35.076247 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe7ec415-6d72-4459-b5ae-85006f84662b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:48:35.076611 kubelet[2738]: E0909 00:48:35.076274 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe7ec415-6d72-4459-b5ae-85006f84662b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7lw2w" podUID="fe7ec415-6d72-4459-b5ae-85006f84662b" Sep 9 00:48:35.077200 containerd[1540]: time="2025-09-09T00:48:35.077101717Z" level=error msg="StopPodSandbox for \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\" failed" error="failed to destroy network for sandbox \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:35.077406 kubelet[2738]: E0909 00:48:35.077247 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Sep 9 00:48:35.077406 kubelet[2738]: E0909 00:48:35.077282 2738 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d"} Sep 9 00:48:35.077406 kubelet[2738]: E0909 00:48:35.077306 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cfbbe3a7-b775-4416-b316-762d53f81c5d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:48:35.077406 kubelet[2738]: E0909 00:48:35.077334 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cfbbe3a7-b775-4416-b316-762d53f81c5d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cc69d998f-tp8b8" podUID="cfbbe3a7-b775-4416-b316-762d53f81c5d" Sep 9 00:48:35.079030 containerd[1540]: time="2025-09-09T00:48:35.078796708Z" level=error msg="StopPodSandbox for \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\" failed" error="failed to destroy network for sandbox \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:35.079147 kubelet[2738]: E0909 00:48:35.078959 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Sep 9 00:48:35.079147 kubelet[2738]: E0909 00:48:35.079002 2738 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5"} Sep 9 00:48:35.079147 kubelet[2738]: E0909 00:48:35.079027 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c0dc5384-6a49-429a-b83f-4f8249484a53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:48:35.079147 kubelet[2738]: E0909 00:48:35.079045 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c0dc5384-6a49-429a-b83f-4f8249484a53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-674b8bbfcf-zmwvb" podUID="c0dc5384-6a49-429a-b83f-4f8249484a53" Sep 9 00:48:35.080747 containerd[1540]: time="2025-09-09T00:48:35.080397860Z" level=error msg="StopPodSandbox for \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\" failed" error="failed to destroy network for sandbox \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:35.080831 kubelet[2738]: E0909 00:48:35.080534 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Sep 9 00:48:35.080831 kubelet[2738]: E0909 00:48:35.080572 2738 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e"} Sep 9 00:48:35.080831 kubelet[2738]: E0909 00:48:35.080602 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23688cde-a65d-4c6c-9d95-5a6f71851bf6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:48:35.080831 kubelet[2738]: E0909 00:48:35.080623 2738 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"23688cde-a65d-4c6c-9d95-5a6f71851bf6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-774984f779-qfzwz" podUID="23688cde-a65d-4c6c-9d95-5a6f71851bf6" Sep 9 00:48:35.081121 containerd[1540]: time="2025-09-09T00:48:35.081101706Z" level=error msg="StopPodSandbox for \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\" failed" error="failed to destroy network for sandbox \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:35.081785 kubelet[2738]: E0909 00:48:35.081250 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Sep 9 00:48:35.081785 kubelet[2738]: E0909 00:48:35.081280 2738 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e"} Sep 9 00:48:35.081785 kubelet[2738]: E0909 00:48:35.081303 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:48:35.081785 kubelet[2738]: E0909 00:48:35.081317 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6kb7s" podUID="09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5" Sep 9 00:48:35.082671 containerd[1540]: time="2025-09-09T00:48:35.082646738Z" level=error msg="StopPodSandbox for \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\" failed" error="failed to destroy network for sandbox \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:35.082880 kubelet[2738]: E0909 00:48:35.082766 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Sep 9 00:48:35.082880 kubelet[2738]: E0909 00:48:35.082792 2738 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459"} Sep 9 00:48:35.082880 kubelet[2738]: E0909 00:48:35.082814 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9bdcee72-ae1b-4328-a99d-e56db99df127\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:48:35.082880 kubelet[2738]: E0909 00:48:35.082830 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9bdcee72-ae1b-4328-a99d-e56db99df127\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d688cd477-dl2q8" podUID="9bdcee72-ae1b-4328-a99d-e56db99df127" Sep 9 00:48:35.084702 containerd[1540]: time="2025-09-09T00:48:35.084676102Z" level=error msg="StopPodSandbox for \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\" failed" error="failed to destroy network for sandbox \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Sep 9 00:48:35.084964 kubelet[2738]: E0909 00:48:35.084871 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Sep 9 00:48:35.084964 kubelet[2738]: E0909 00:48:35.084902 2738 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c"} Sep 9 00:48:35.084964 kubelet[2738]: E0909 00:48:35.084920 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"456193ac-db3d-4857-b7e3-7054cee2a893\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:48:35.084964 kubelet[2738]: E0909 00:48:35.084938 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"456193ac-db3d-4857-b7e3-7054cee2a893\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-bg69g" podUID="456193ac-db3d-4857-b7e3-7054cee2a893" Sep 9 00:48:35.085702 containerd[1540]: 
time="2025-09-09T00:48:35.085670945Z" level=error msg="StopPodSandbox for \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\" failed" error="failed to destroy network for sandbox \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:48:35.085876 kubelet[2738]: E0909 00:48:35.085786 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Sep 9 00:48:35.085876 kubelet[2738]: E0909 00:48:35.085816 2738 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d"} Sep 9 00:48:35.085876 kubelet[2738]: E0909 00:48:35.085843 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d663ab63-4876-4bed-b10d-08dca1619cbf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:48:35.085876 kubelet[2738]: E0909 00:48:35.085858 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d663ab63-4876-4bed-b10d-08dca1619cbf\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-774984f779-zprwx" podUID="d663ab63-4876-4bed-b10d-08dca1619cbf" Sep 9 00:48:35.330484 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c-shm.mount: Deactivated successfully. Sep 9 00:48:35.330590 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d-shm.mount: Deactivated successfully. Sep 9 00:48:35.330642 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e-shm.mount: Deactivated successfully. Sep 9 00:48:35.330681 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc-shm.mount: Deactivated successfully. Sep 9 00:48:35.330729 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d-shm.mount: Deactivated successfully. Sep 9 00:48:35.330775 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5-shm.mount: Deactivated successfully. Sep 9 00:48:35.330812 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459-shm.mount: Deactivated successfully. Sep 9 00:48:40.590255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2880430675.mount: Deactivated successfully. 
Sep 9 00:48:40.820393 containerd[1540]: time="2025-09-09T00:48:40.820329073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:40.823268 containerd[1540]: time="2025-09-09T00:48:40.823229303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 00:48:40.870801 containerd[1540]: time="2025-09-09T00:48:40.870595382Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:40.871191 containerd[1540]: time="2025-09-09T00:48:40.871094038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.985614505s" Sep 9 00:48:40.871191 containerd[1540]: time="2025-09-09T00:48:40.871115807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 00:48:40.872097 containerd[1540]: time="2025-09-09T00:48:40.872068269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:40.939299 containerd[1540]: time="2025-09-09T00:48:40.939262949Z" level=info msg="CreateContainer within sandbox \"923df43181c406cf88a2ec5a632caa6e67c86c8fd69c2d784c532aee806d6c68\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 00:48:41.116363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4214136353.mount: Deactivated 
successfully. Sep 9 00:48:41.146088 containerd[1540]: time="2025-09-09T00:48:41.145955518Z" level=info msg="CreateContainer within sandbox \"923df43181c406cf88a2ec5a632caa6e67c86c8fd69c2d784c532aee806d6c68\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b1c50660b2d94b2f328f6c145dba3ca47902c7b9b60d94220a1a7ed888948a77\"" Sep 9 00:48:41.146494 containerd[1540]: time="2025-09-09T00:48:41.146422467Z" level=info msg="StartContainer for \"b1c50660b2d94b2f328f6c145dba3ca47902c7b9b60d94220a1a7ed888948a77\"" Sep 9 00:48:41.406115 systemd[1]: Started cri-containerd-b1c50660b2d94b2f328f6c145dba3ca47902c7b9b60d94220a1a7ed888948a77.scope - libcontainer container b1c50660b2d94b2f328f6c145dba3ca47902c7b9b60d94220a1a7ed888948a77. Sep 9 00:48:41.446189 containerd[1540]: time="2025-09-09T00:48:41.446155546Z" level=info msg="StartContainer for \"b1c50660b2d94b2f328f6c145dba3ca47902c7b9b60d94220a1a7ed888948a77\" returns successfully" Sep 9 00:48:41.739086 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 00:48:41.777242 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 9 00:48:42.134266 kubelet[2738]: I0909 00:48:42.134087 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mqh6g" podStartSLOduration=2.56333427 podStartE2EDuration="20.134073332s" podCreationTimestamp="2025-09-09 00:48:22 +0000 UTC" firstStartedPulling="2025-09-09 00:48:23.30684902 +0000 UTC m=+19.875382300" lastFinishedPulling="2025-09-09 00:48:40.877588085 +0000 UTC m=+37.446121362" observedRunningTime="2025-09-09 00:48:42.099677636 +0000 UTC m=+38.668210918" watchObservedRunningTime="2025-09-09 00:48:42.134073332 +0000 UTC m=+38.702606616" Sep 9 00:48:42.758508 containerd[1540]: time="2025-09-09T00:48:42.758485258Z" level=info msg="StopPodSandbox for \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\"" Sep 9 00:48:43.508551 containerd[1540]: 2025-09-09 00:48:42.863 [INFO][3896] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Sep 9 00:48:43.508551 containerd[1540]: 2025-09-09 00:48:42.864 [INFO][3896] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" iface="eth0" netns="/var/run/netns/cni-cf3e0b7d-e685-ff43-d66f-8fb5288d2b04" Sep 9 00:48:43.508551 containerd[1540]: 2025-09-09 00:48:42.865 [INFO][3896] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" iface="eth0" netns="/var/run/netns/cni-cf3e0b7d-e685-ff43-d66f-8fb5288d2b04" Sep 9 00:48:43.508551 containerd[1540]: 2025-09-09 00:48:42.865 [INFO][3896] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" iface="eth0" netns="/var/run/netns/cni-cf3e0b7d-e685-ff43-d66f-8fb5288d2b04" Sep 9 00:48:43.508551 containerd[1540]: 2025-09-09 00:48:42.865 [INFO][3896] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Sep 9 00:48:43.508551 containerd[1540]: 2025-09-09 00:48:42.865 [INFO][3896] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Sep 9 00:48:43.508551 containerd[1540]: 2025-09-09 00:48:43.462 [INFO][3913] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" HandleID="k8s-pod-network.1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Workload="localhost-k8s-whisker--6d688cd477--dl2q8-eth0" Sep 9 00:48:43.508551 containerd[1540]: 2025-09-09 00:48:43.490 [INFO][3913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:48:43.508551 containerd[1540]: 2025-09-09 00:48:43.491 [INFO][3913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:48:43.508551 containerd[1540]: 2025-09-09 00:48:43.504 [WARNING][3913] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" HandleID="k8s-pod-network.1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Workload="localhost-k8s-whisker--6d688cd477--dl2q8-eth0" Sep 9 00:48:43.508551 containerd[1540]: 2025-09-09 00:48:43.504 [INFO][3913] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" HandleID="k8s-pod-network.1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Workload="localhost-k8s-whisker--6d688cd477--dl2q8-eth0" Sep 9 00:48:43.508551 containerd[1540]: 2025-09-09 00:48:43.505 [INFO][3913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:48:43.508551 containerd[1540]: 2025-09-09 00:48:43.506 [INFO][3896] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Sep 9 00:48:43.510145 systemd[1]: run-netns-cni\x2dcf3e0b7d\x2de685\x2dff43\x2dd66f\x2d8fb5288d2b04.mount: Deactivated successfully. 
Sep 9 00:48:43.519468 containerd[1540]: time="2025-09-09T00:48:43.518847518Z" level=info msg="TearDown network for sandbox \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\" successfully" Sep 9 00:48:43.519468 containerd[1540]: time="2025-09-09T00:48:43.518892538Z" level=info msg="StopPodSandbox for \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\" returns successfully" Sep 9 00:48:43.630298 kubelet[2738]: I0909 00:48:43.630252 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bdcee72-ae1b-4328-a99d-e56db99df127-whisker-ca-bundle\") pod \"9bdcee72-ae1b-4328-a99d-e56db99df127\" (UID: \"9bdcee72-ae1b-4328-a99d-e56db99df127\") " Sep 9 00:48:43.631574 kubelet[2738]: I0909 00:48:43.630327 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2jr4\" (UniqueName: \"kubernetes.io/projected/9bdcee72-ae1b-4328-a99d-e56db99df127-kube-api-access-p2jr4\") pod \"9bdcee72-ae1b-4328-a99d-e56db99df127\" (UID: \"9bdcee72-ae1b-4328-a99d-e56db99df127\") " Sep 9 00:48:43.631574 kubelet[2738]: I0909 00:48:43.630352 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9bdcee72-ae1b-4328-a99d-e56db99df127-whisker-backend-key-pair\") pod \"9bdcee72-ae1b-4328-a99d-e56db99df127\" (UID: \"9bdcee72-ae1b-4328-a99d-e56db99df127\") " Sep 9 00:48:43.655247 kubelet[2738]: I0909 00:48:43.651463 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bdcee72-ae1b-4328-a99d-e56db99df127-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9bdcee72-ae1b-4328-a99d-e56db99df127" (UID: "9bdcee72-ae1b-4328-a99d-e56db99df127"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:48:43.662345 systemd[1]: var-lib-kubelet-pods-9bdcee72\x2dae1b\x2d4328\x2da99d\x2de56db99df127-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp2jr4.mount: Deactivated successfully. Sep 9 00:48:43.662427 systemd[1]: var-lib-kubelet-pods-9bdcee72\x2dae1b\x2d4328\x2da99d\x2de56db99df127-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 00:48:43.664634 kubelet[2738]: I0909 00:48:43.663226 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bdcee72-ae1b-4328-a99d-e56db99df127-kube-api-access-p2jr4" (OuterVolumeSpecName: "kube-api-access-p2jr4") pod "9bdcee72-ae1b-4328-a99d-e56db99df127" (UID: "9bdcee72-ae1b-4328-a99d-e56db99df127"). InnerVolumeSpecName "kube-api-access-p2jr4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:48:43.664634 kubelet[2738]: I0909 00:48:43.663295 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bdcee72-ae1b-4328-a99d-e56db99df127-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9bdcee72-ae1b-4328-a99d-e56db99df127" (UID: "9bdcee72-ae1b-4328-a99d-e56db99df127"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:48:43.703028 systemd[1]: Removed slice kubepods-besteffort-pod9bdcee72_ae1b_4328_a99d_e56db99df127.slice - libcontainer container kubepods-besteffort-pod9bdcee72_ae1b_4328_a99d_e56db99df127.slice. 
Sep 9 00:48:43.731456 kubelet[2738]: I0909 00:48:43.731422 2738 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9bdcee72-ae1b-4328-a99d-e56db99df127-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:43.731456 kubelet[2738]: I0909 00:48:43.731448 2738 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bdcee72-ae1b-4328-a99d-e56db99df127-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:43.731456 kubelet[2738]: I0909 00:48:43.731455 2738 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p2jr4\" (UniqueName: \"kubernetes.io/projected/9bdcee72-ae1b-4328-a99d-e56db99df127-kube-api-access-p2jr4\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:44.109583 systemd[1]: Created slice kubepods-besteffort-pod0fb0d9bd_9f48_4ad3_a96a_77b79f2fd2b1.slice - libcontainer container kubepods-besteffort-pod0fb0d9bd_9f48_4ad3_a96a_77b79f2fd2b1.slice. 
Sep 9 00:48:44.244409 kubelet[2738]: I0909 00:48:44.244372 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fb0d9bd-9f48-4ad3-a96a-77b79f2fd2b1-whisker-ca-bundle\") pod \"whisker-6784fb975b-r4r6w\" (UID: \"0fb0d9bd-9f48-4ad3-a96a-77b79f2fd2b1\") " pod="calico-system/whisker-6784fb975b-r4r6w" Sep 9 00:48:44.244582 kubelet[2738]: I0909 00:48:44.244444 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0fb0d9bd-9f48-4ad3-a96a-77b79f2fd2b1-whisker-backend-key-pair\") pod \"whisker-6784fb975b-r4r6w\" (UID: \"0fb0d9bd-9f48-4ad3-a96a-77b79f2fd2b1\") " pod="calico-system/whisker-6784fb975b-r4r6w" Sep 9 00:48:44.244582 kubelet[2738]: I0909 00:48:44.244468 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd9x2\" (UniqueName: \"kubernetes.io/projected/0fb0d9bd-9f48-4ad3-a96a-77b79f2fd2b1-kube-api-access-hd9x2\") pod \"whisker-6784fb975b-r4r6w\" (UID: \"0fb0d9bd-9f48-4ad3-a96a-77b79f2fd2b1\") " pod="calico-system/whisker-6784fb975b-r4r6w" Sep 9 00:48:44.412561 containerd[1540]: time="2025-09-09T00:48:44.412194937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6784fb975b-r4r6w,Uid:0fb0d9bd-9f48-4ad3-a96a-77b79f2fd2b1,Namespace:calico-system,Attempt:0,}" Sep 9 00:48:44.533812 systemd-networkd[1441]: calid280b28ff3b: Link UP Sep 9 00:48:44.544691 systemd-networkd[1441]: calid280b28ff3b: Gained carrier Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.442 [INFO][4038] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.454 [INFO][4038] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6784fb975b--r4r6w-eth0 
whisker-6784fb975b- calico-system 0fb0d9bd-9f48-4ad3-a96a-77b79f2fd2b1 894 0 2025-09-09 00:48:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6784fb975b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6784fb975b-r4r6w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid280b28ff3b [] [] }} ContainerID="630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" Namespace="calico-system" Pod="whisker-6784fb975b-r4r6w" WorkloadEndpoint="localhost-k8s-whisker--6784fb975b--r4r6w-" Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.454 [INFO][4038] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" Namespace="calico-system" Pod="whisker-6784fb975b-r4r6w" WorkloadEndpoint="localhost-k8s-whisker--6784fb975b--r4r6w-eth0" Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.475 [INFO][4050] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" HandleID="k8s-pod-network.630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" Workload="localhost-k8s-whisker--6784fb975b--r4r6w-eth0" Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.475 [INFO][4050] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" HandleID="k8s-pod-network.630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" Workload="localhost-k8s-whisker--6784fb975b--r4r6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6784fb975b-r4r6w", "timestamp":"2025-09-09 00:48:44.475802328 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.475 [INFO][4050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.475 [INFO][4050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.476 [INFO][4050] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.482 [INFO][4050] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" host="localhost" Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.495 [INFO][4050] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.499 [INFO][4050] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.500 [INFO][4050] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.501 [INFO][4050] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.501 [INFO][4050] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" host="localhost" Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.502 [INFO][4050] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef Sep 9 00:48:44.570049 containerd[1540]: 
2025-09-09 00:48:44.505 [INFO][4050] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" host="localhost" Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.509 [INFO][4050] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" host="localhost" Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.511 [INFO][4050] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" host="localhost" Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.511 [INFO][4050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:48:44.570049 containerd[1540]: 2025-09-09 00:48:44.511 [INFO][4050] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" HandleID="k8s-pod-network.630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" Workload="localhost-k8s-whisker--6784fb975b--r4r6w-eth0" Sep 9 00:48:44.571454 containerd[1540]: 2025-09-09 00:48:44.515 [INFO][4038] cni-plugin/k8s.go 418: Populated endpoint ContainerID="630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" Namespace="calico-system" Pod="whisker-6784fb975b-r4r6w" WorkloadEndpoint="localhost-k8s-whisker--6784fb975b--r4r6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6784fb975b--r4r6w-eth0", GenerateName:"whisker-6784fb975b-", Namespace:"calico-system", SelfLink:"", UID:"0fb0d9bd-9f48-4ad3-a96a-77b79f2fd2b1", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.September, 
9, 0, 48, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6784fb975b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6784fb975b-r4r6w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid280b28ff3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:44.571454 containerd[1540]: 2025-09-09 00:48:44.515 [INFO][4038] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" Namespace="calico-system" Pod="whisker-6784fb975b-r4r6w" WorkloadEndpoint="localhost-k8s-whisker--6784fb975b--r4r6w-eth0" Sep 9 00:48:44.571454 containerd[1540]: 2025-09-09 00:48:44.515 [INFO][4038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid280b28ff3b ContainerID="630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" Namespace="calico-system" Pod="whisker-6784fb975b-r4r6w" WorkloadEndpoint="localhost-k8s-whisker--6784fb975b--r4r6w-eth0" Sep 9 00:48:44.571454 containerd[1540]: 2025-09-09 00:48:44.537 [INFO][4038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" Namespace="calico-system" Pod="whisker-6784fb975b-r4r6w" 
WorkloadEndpoint="localhost-k8s-whisker--6784fb975b--r4r6w-eth0" Sep 9 00:48:44.571454 containerd[1540]: 2025-09-09 00:48:44.537 [INFO][4038] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" Namespace="calico-system" Pod="whisker-6784fb975b-r4r6w" WorkloadEndpoint="localhost-k8s-whisker--6784fb975b--r4r6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6784fb975b--r4r6w-eth0", GenerateName:"whisker-6784fb975b-", Namespace:"calico-system", SelfLink:"", UID:"0fb0d9bd-9f48-4ad3-a96a-77b79f2fd2b1", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6784fb975b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef", Pod:"whisker-6784fb975b-r4r6w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid280b28ff3b", MAC:"d2:f4:f6:e8:07:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:44.571454 containerd[1540]: 2025-09-09 00:48:44.565 [INFO][4038] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef" Namespace="calico-system" Pod="whisker-6784fb975b-r4r6w" WorkloadEndpoint="localhost-k8s-whisker--6784fb975b--r4r6w-eth0" Sep 9 00:48:44.607813 containerd[1540]: time="2025-09-09T00:48:44.604388805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:48:44.607813 containerd[1540]: time="2025-09-09T00:48:44.604438653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:48:44.607813 containerd[1540]: time="2025-09-09T00:48:44.604447183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:44.607813 containerd[1540]: time="2025-09-09T00:48:44.604504099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:44.630249 systemd[1]: Started cri-containerd-630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef.scope - libcontainer container 630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef. 
Sep 9 00:48:44.650186 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:48:44.697473 containerd[1540]: time="2025-09-09T00:48:44.696132954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6784fb975b-r4r6w,Uid:0fb0d9bd-9f48-4ad3-a96a-77b79f2fd2b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef\"" Sep 9 00:48:44.705266 containerd[1540]: time="2025-09-09T00:48:44.705064296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 00:48:45.629152 kubelet[2738]: I0909 00:48:45.629041 2738 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bdcee72-ae1b-4328-a99d-e56db99df127" path="/var/lib/kubelet/pods/9bdcee72-ae1b-4328-a99d-e56db99df127/volumes" Sep 9 00:48:45.784129 systemd-networkd[1441]: calid280b28ff3b: Gained IPv6LL Sep 9 00:48:46.523755 containerd[1540]: time="2025-09-09T00:48:46.523217566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:46.523755 containerd[1540]: time="2025-09-09T00:48:46.523627506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 00:48:46.523755 containerd[1540]: time="2025-09-09T00:48:46.523719340Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:46.525020 containerd[1540]: time="2025-09-09T00:48:46.524983866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:46.525703 containerd[1540]: time="2025-09-09T00:48:46.525429058Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.820335786s" Sep 9 00:48:46.525703 containerd[1540]: time="2025-09-09T00:48:46.525448067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 00:48:46.528233 containerd[1540]: time="2025-09-09T00:48:46.528211123Z" level=info msg="CreateContainer within sandbox \"630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 00:48:46.532995 containerd[1540]: time="2025-09-09T00:48:46.532905351Z" level=info msg="CreateContainer within sandbox \"630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f67d460186bb9ac1ae13708c1c9422d05dc19e04291b89c1490d398e58447be0\"" Sep 9 00:48:46.535183 containerd[1540]: time="2025-09-09T00:48:46.535163692Z" level=info msg="StartContainer for \"f67d460186bb9ac1ae13708c1c9422d05dc19e04291b89c1490d398e58447be0\"" Sep 9 00:48:46.562177 systemd[1]: Started cri-containerd-f67d460186bb9ac1ae13708c1c9422d05dc19e04291b89c1490d398e58447be0.scope - libcontainer container f67d460186bb9ac1ae13708c1c9422d05dc19e04291b89c1490d398e58447be0. 
Sep 9 00:48:46.599443 containerd[1540]: time="2025-09-09T00:48:46.599377509Z" level=info msg="StartContainer for \"f67d460186bb9ac1ae13708c1c9422d05dc19e04291b89c1490d398e58447be0\" returns successfully" Sep 9 00:48:46.601356 containerd[1540]: time="2025-09-09T00:48:46.601253794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 00:48:46.627454 containerd[1540]: time="2025-09-09T00:48:46.627295403Z" level=info msg="StopPodSandbox for \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\"" Sep 9 00:48:46.627865 containerd[1540]: time="2025-09-09T00:48:46.627670519Z" level=info msg="StopPodSandbox for \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\"" Sep 9 00:48:46.789161 containerd[1540]: 2025-09-09 00:48:46.664 [INFO][4199] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Sep 9 00:48:46.789161 containerd[1540]: 2025-09-09 00:48:46.664 [INFO][4199] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" iface="eth0" netns="/var/run/netns/cni-f61c333a-20b1-c5b2-e640-8e16ce8f9edc" Sep 9 00:48:46.789161 containerd[1540]: 2025-09-09 00:48:46.664 [INFO][4199] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" iface="eth0" netns="/var/run/netns/cni-f61c333a-20b1-c5b2-e640-8e16ce8f9edc" Sep 9 00:48:46.789161 containerd[1540]: 2025-09-09 00:48:46.664 [INFO][4199] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" iface="eth0" netns="/var/run/netns/cni-f61c333a-20b1-c5b2-e640-8e16ce8f9edc" Sep 9 00:48:46.789161 containerd[1540]: 2025-09-09 00:48:46.664 [INFO][4199] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Sep 9 00:48:46.789161 containerd[1540]: 2025-09-09 00:48:46.664 [INFO][4199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Sep 9 00:48:46.789161 containerd[1540]: 2025-09-09 00:48:46.779 [INFO][4214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" HandleID="k8s-pod-network.23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Workload="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:48:46.789161 containerd[1540]: 2025-09-09 00:48:46.779 [INFO][4214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:48:46.789161 containerd[1540]: 2025-09-09 00:48:46.779 [INFO][4214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:48:46.789161 containerd[1540]: 2025-09-09 00:48:46.783 [WARNING][4214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" HandleID="k8s-pod-network.23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Workload="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:48:46.789161 containerd[1540]: 2025-09-09 00:48:46.783 [INFO][4214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" HandleID="k8s-pod-network.23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Workload="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:48:46.789161 containerd[1540]: 2025-09-09 00:48:46.784 [INFO][4214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:48:46.789161 containerd[1540]: 2025-09-09 00:48:46.785 [INFO][4199] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Sep 9 00:48:46.790747 containerd[1540]: time="2025-09-09T00:48:46.790550168Z" level=info msg="TearDown network for sandbox \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\" successfully" Sep 9 00:48:46.790747 containerd[1540]: time="2025-09-09T00:48:46.790572325Z" level=info msg="StopPodSandbox for \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\" returns successfully" Sep 9 00:48:46.791341 systemd[1]: run-netns-cni\x2df61c333a\x2d20b1\x2dc5b2\x2de640\x2d8e16ce8f9edc.mount: Deactivated successfully. 
Sep 9 00:48:46.792473 containerd[1540]: time="2025-09-09T00:48:46.792457224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-bg69g,Uid:456193ac-db3d-4857-b7e3-7054cee2a893,Namespace:calico-system,Attempt:1,}" Sep 9 00:48:46.796863 containerd[1540]: 2025-09-09 00:48:46.681 [INFO][4200] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Sep 9 00:48:46.796863 containerd[1540]: 2025-09-09 00:48:46.681 [INFO][4200] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" iface="eth0" netns="/var/run/netns/cni-30aa6cfa-4d97-e61e-20ca-cfd6f0a6d3a3" Sep 9 00:48:46.796863 containerd[1540]: 2025-09-09 00:48:46.682 [INFO][4200] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" iface="eth0" netns="/var/run/netns/cni-30aa6cfa-4d97-e61e-20ca-cfd6f0a6d3a3" Sep 9 00:48:46.796863 containerd[1540]: 2025-09-09 00:48:46.682 [INFO][4200] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" iface="eth0" netns="/var/run/netns/cni-30aa6cfa-4d97-e61e-20ca-cfd6f0a6d3a3" Sep 9 00:48:46.796863 containerd[1540]: 2025-09-09 00:48:46.682 [INFO][4200] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Sep 9 00:48:46.796863 containerd[1540]: 2025-09-09 00:48:46.683 [INFO][4200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Sep 9 00:48:46.796863 containerd[1540]: 2025-09-09 00:48:46.785 [INFO][4222] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" HandleID="k8s-pod-network.7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Workload="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:48:46.796863 containerd[1540]: 2025-09-09 00:48:46.785 [INFO][4222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:48:46.796863 containerd[1540]: 2025-09-09 00:48:46.785 [INFO][4222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:48:46.796863 containerd[1540]: 2025-09-09 00:48:46.792 [WARNING][4222] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" HandleID="k8s-pod-network.7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Workload="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:48:46.796863 containerd[1540]: 2025-09-09 00:48:46.792 [INFO][4222] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" HandleID="k8s-pod-network.7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Workload="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:48:46.796863 containerd[1540]: 2025-09-09 00:48:46.794 [INFO][4222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:48:46.796863 containerd[1540]: 2025-09-09 00:48:46.795 [INFO][4200] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Sep 9 00:48:46.800373 containerd[1540]: time="2025-09-09T00:48:46.800079665Z" level=info msg="TearDown network for sandbox \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\" successfully" Sep 9 00:48:46.800373 containerd[1540]: time="2025-09-09T00:48:46.800098943Z" level=info msg="StopPodSandbox for \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\" returns successfully" Sep 9 00:48:46.802054 containerd[1540]: time="2025-09-09T00:48:46.802033298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774984f779-zprwx,Uid:d663ab63-4876-4bed-b10d-08dca1619cbf,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:48:46.881051 systemd-networkd[1441]: caliaea86c7f9b1: Link UP Sep 9 00:48:46.887191 systemd-networkd[1441]: caliaea86c7f9b1: Gained carrier Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.829 [INFO][4241] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.837 [INFO][4241] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--774984f779--zprwx-eth0 calico-apiserver-774984f779- calico-apiserver d663ab63-4876-4bed-b10d-08dca1619cbf 911 0 2025-09-09 00:48:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:774984f779 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-774984f779-zprwx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaea86c7f9b1 [] [] }} ContainerID="5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-zprwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--zprwx-" Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.837 [INFO][4241] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-zprwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.854 [INFO][4256] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" HandleID="k8s-pod-network.5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" Workload="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.854 [INFO][4256] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" HandleID="k8s-pod-network.5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" 
Workload="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-774984f779-zprwx", "timestamp":"2025-09-09 00:48:46.854834735 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.855 [INFO][4256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.855 [INFO][4256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.855 [INFO][4256] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.862 [INFO][4256] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" host="localhost" Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.864 [INFO][4256] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.867 [INFO][4256] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.868 [INFO][4256] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.869 [INFO][4256] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.869 [INFO][4256] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" host="localhost" Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.870 [INFO][4256] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7 Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.873 [INFO][4256] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" host="localhost" Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.876 [INFO][4256] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" host="localhost" Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.877 [INFO][4256] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" host="localhost" Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.877 [INFO][4256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:48:46.920228 containerd[1540]: 2025-09-09 00:48:46.877 [INFO][4256] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" HandleID="k8s-pod-network.5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" Workload="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:48:46.920669 containerd[1540]: 2025-09-09 00:48:46.878 [INFO][4241] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-zprwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--774984f779--zprwx-eth0", GenerateName:"calico-apiserver-774984f779-", Namespace:"calico-apiserver", SelfLink:"", UID:"d663ab63-4876-4bed-b10d-08dca1619cbf", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774984f779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-774984f779-zprwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaea86c7f9b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:46.920669 containerd[1540]: 2025-09-09 00:48:46.878 [INFO][4241] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-zprwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:48:46.920669 containerd[1540]: 2025-09-09 00:48:46.878 [INFO][4241] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaea86c7f9b1 ContainerID="5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-zprwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:48:46.920669 containerd[1540]: 2025-09-09 00:48:46.887 [INFO][4241] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-zprwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:48:46.920669 containerd[1540]: 2025-09-09 00:48:46.888 [INFO][4241] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-zprwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--774984f779--zprwx-eth0", 
GenerateName:"calico-apiserver-774984f779-", Namespace:"calico-apiserver", SelfLink:"", UID:"d663ab63-4876-4bed-b10d-08dca1619cbf", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774984f779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7", Pod:"calico-apiserver-774984f779-zprwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaea86c7f9b1", MAC:"ce:23:c5:12:b7:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:46.920669 containerd[1540]: 2025-09-09 00:48:46.918 [INFO][4241] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-zprwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:48:46.938383 containerd[1540]: time="2025-09-09T00:48:46.938148362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:48:46.938383 containerd[1540]: time="2025-09-09T00:48:46.938201088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:48:46.938383 containerd[1540]: time="2025-09-09T00:48:46.938215242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:46.938383 containerd[1540]: time="2025-09-09T00:48:46.938293719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:46.956127 systemd[1]: Started cri-containerd-5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7.scope - libcontainer container 5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7. Sep 9 00:48:46.967187 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:48:46.989718 systemd-networkd[1441]: calie39c3faf822: Link UP Sep 9 00:48:46.990338 systemd-networkd[1441]: calie39c3faf822: Gained carrier Sep 9 00:48:47.004244 containerd[1540]: time="2025-09-09T00:48:47.004114012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774984f779-zprwx,Uid:d663ab63-4876-4bed-b10d-08dca1619cbf,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7\"" Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.822 [INFO][4232] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.832 [INFO][4232] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--bg69g-eth0 goldmane-54d579b49d- calico-system 456193ac-db3d-4857-b7e3-7054cee2a893 910 0 2025-09-09 00:48:22 
+0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-bg69g eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie39c3faf822 [] [] }} ContainerID="cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" Namespace="calico-system" Pod="goldmane-54d579b49d-bg69g" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bg69g-" Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.832 [INFO][4232] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" Namespace="calico-system" Pod="goldmane-54d579b49d-bg69g" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.860 [INFO][4258] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" HandleID="k8s-pod-network.cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" Workload="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.861 [INFO][4258] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" HandleID="k8s-pod-network.cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" Workload="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-bg69g", "timestamp":"2025-09-09 00:48:46.860757207 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.861 [INFO][4258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.877 [INFO][4258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.878 [INFO][4258] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.962 [INFO][4258] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" host="localhost" Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.967 [INFO][4258] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.971 [INFO][4258] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.972 [INFO][4258] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.974 [INFO][4258] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.974 [INFO][4258] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" host="localhost" Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.976 [INFO][4258] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.981 [INFO][4258] ipam/ipam.go 
1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" host="localhost" Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.984 [INFO][4258] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" host="localhost" Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.984 [INFO][4258] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" host="localhost" Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.984 [INFO][4258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:48:47.009643 containerd[1540]: 2025-09-09 00:48:46.984 [INFO][4258] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" HandleID="k8s-pod-network.cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" Workload="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:48:47.010821 containerd[1540]: 2025-09-09 00:48:46.986 [INFO][4232] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" Namespace="calico-system" Pod="goldmane-54d579b49d-bg69g" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--bg69g-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"456193ac-db3d-4857-b7e3-7054cee2a893", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 22, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-bg69g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie39c3faf822", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:47.010821 containerd[1540]: 2025-09-09 00:48:46.986 [INFO][4232] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" Namespace="calico-system" Pod="goldmane-54d579b49d-bg69g" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:48:47.010821 containerd[1540]: 2025-09-09 00:48:46.986 [INFO][4232] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie39c3faf822 ContainerID="cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" Namespace="calico-system" Pod="goldmane-54d579b49d-bg69g" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:48:47.010821 containerd[1540]: 2025-09-09 00:48:46.991 [INFO][4232] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" Namespace="calico-system" Pod="goldmane-54d579b49d-bg69g" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:48:47.010821 containerd[1540]: 2025-09-09 00:48:46.991 [INFO][4232] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" Namespace="calico-system" Pod="goldmane-54d579b49d-bg69g" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--bg69g-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"456193ac-db3d-4857-b7e3-7054cee2a893", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f", Pod:"goldmane-54d579b49d-bg69g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie39c3faf822", MAC:"ca:07:a8:95:a7:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:47.010821 containerd[1540]: 2025-09-09 00:48:47.005 [INFO][4232] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f" Namespace="calico-system" Pod="goldmane-54d579b49d-bg69g" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:48:47.029337 containerd[1540]: time="2025-09-09T00:48:47.029231020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:48:47.029337 containerd[1540]: time="2025-09-09T00:48:47.029271838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:48:47.029633 containerd[1540]: time="2025-09-09T00:48:47.029294013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:47.029633 containerd[1540]: time="2025-09-09T00:48:47.029418178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:47.042093 systemd[1]: Started cri-containerd-cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f.scope - libcontainer container cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f. Sep 9 00:48:47.050687 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:48:47.071175 containerd[1540]: time="2025-09-09T00:48:47.071106093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-bg69g,Uid:456193ac-db3d-4857-b7e3-7054cee2a893,Namespace:calico-system,Attempt:1,} returns sandbox id \"cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f\"" Sep 9 00:48:47.535187 systemd[1]: run-netns-cni\x2d30aa6cfa\x2d4d97\x2de61e\x2d20ca\x2dcfd6f0a6d3a3.mount: Deactivated successfully. 
Sep 9 00:48:48.024131 systemd-networkd[1441]: caliaea86c7f9b1: Gained IPv6LL Sep 9 00:48:48.628794 containerd[1540]: time="2025-09-09T00:48:48.628518283Z" level=info msg="StopPodSandbox for \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\"" Sep 9 00:48:48.744140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2079833068.mount: Deactivated successfully. Sep 9 00:48:48.756034 containerd[1540]: 2025-09-09 00:48:48.722 [INFO][4420] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Sep 9 00:48:48.756034 containerd[1540]: 2025-09-09 00:48:48.722 [INFO][4420] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" iface="eth0" netns="/var/run/netns/cni-0b53c7dc-f7ab-aa85-be9a-61841849d903" Sep 9 00:48:48.756034 containerd[1540]: 2025-09-09 00:48:48.723 [INFO][4420] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" iface="eth0" netns="/var/run/netns/cni-0b53c7dc-f7ab-aa85-be9a-61841849d903" Sep 9 00:48:48.756034 containerd[1540]: 2025-09-09 00:48:48.723 [INFO][4420] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" iface="eth0" netns="/var/run/netns/cni-0b53c7dc-f7ab-aa85-be9a-61841849d903" Sep 9 00:48:48.756034 containerd[1540]: 2025-09-09 00:48:48.723 [INFO][4420] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Sep 9 00:48:48.756034 containerd[1540]: 2025-09-09 00:48:48.723 [INFO][4420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Sep 9 00:48:48.756034 containerd[1540]: 2025-09-09 00:48:48.744 [INFO][4427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" HandleID="k8s-pod-network.17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Workload="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:48:48.756034 containerd[1540]: 2025-09-09 00:48:48.746 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:48:48.756034 containerd[1540]: 2025-09-09 00:48:48.746 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:48:48.756034 containerd[1540]: 2025-09-09 00:48:48.751 [WARNING][4427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" HandleID="k8s-pod-network.17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Workload="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:48:48.756034 containerd[1540]: 2025-09-09 00:48:48.751 [INFO][4427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" HandleID="k8s-pod-network.17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Workload="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:48:48.756034 containerd[1540]: 2025-09-09 00:48:48.752 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:48:48.756034 containerd[1540]: 2025-09-09 00:48:48.753 [INFO][4420] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Sep 9 00:48:48.757966 containerd[1540]: time="2025-09-09T00:48:48.756383321Z" level=info msg="TearDown network for sandbox \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\" successfully" Sep 9 00:48:48.757966 containerd[1540]: time="2025-09-09T00:48:48.756402882Z" level=info msg="StopPodSandbox for \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\" returns successfully" Sep 9 00:48:48.757966 containerd[1540]: time="2025-09-09T00:48:48.757116297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zmwvb,Uid:c0dc5384-6a49-429a-b83f-4f8249484a53,Namespace:kube-system,Attempt:1,}" Sep 9 00:48:48.757604 systemd[1]: run-netns-cni\x2d0b53c7dc\x2df7ab\x2daa85\x2dbe9a\x2d61841849d903.mount: Deactivated successfully. 
Sep 9 00:48:48.762038 containerd[1540]: time="2025-09-09T00:48:48.761866666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:48.762772 containerd[1540]: time="2025-09-09T00:48:48.762743421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 00:48:48.764247 containerd[1540]: time="2025-09-09T00:48:48.764217047Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:48.769241 containerd[1540]: time="2025-09-09T00:48:48.767791698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:48.769241 containerd[1540]: time="2025-09-09T00:48:48.769128272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.167852042s" Sep 9 00:48:48.769241 containerd[1540]: time="2025-09-09T00:48:48.769161770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 00:48:48.774160 containerd[1540]: time="2025-09-09T00:48:48.774133182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:48:48.788138 containerd[1540]: time="2025-09-09T00:48:48.788105767Z" level=info msg="CreateContainer within sandbox 
\"630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 00:48:48.797220 containerd[1540]: time="2025-09-09T00:48:48.797196898Z" level=info msg="CreateContainer within sandbox \"630c07f30351c0f2e27f23c2041ff12934c8af1865b9318febd63f7d6e09d6ef\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e1cfcfb094d6c190016a7d8b2b55247bca130ed0476e8815a51bd8d960e460c2\"" Sep 9 00:48:48.797950 containerd[1540]: time="2025-09-09T00:48:48.797932602Z" level=info msg="StartContainer for \"e1cfcfb094d6c190016a7d8b2b55247bca130ed0476e8815a51bd8d960e460c2\"" Sep 9 00:48:48.825309 systemd[1]: Started cri-containerd-e1cfcfb094d6c190016a7d8b2b55247bca130ed0476e8815a51bd8d960e460c2.scope - libcontainer container e1cfcfb094d6c190016a7d8b2b55247bca130ed0476e8815a51bd8d960e460c2. Sep 9 00:48:48.888285 containerd[1540]: time="2025-09-09T00:48:48.888214297Z" level=info msg="StartContainer for \"e1cfcfb094d6c190016a7d8b2b55247bca130ed0476e8815a51bd8d960e460c2\" returns successfully" Sep 9 00:48:48.899289 systemd-networkd[1441]: cali1cc44314067: Link UP Sep 9 00:48:48.899631 systemd-networkd[1441]: cali1cc44314067: Gained carrier Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.792 [INFO][4437] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.805 [INFO][4437] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0 coredns-674b8bbfcf- kube-system c0dc5384-6a49-429a-b83f-4f8249484a53 924 0 2025-09-09 00:48:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-zmwvb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] 
cali1cc44314067 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zmwvb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmwvb-" Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.805 [INFO][4437] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zmwvb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.840 [INFO][4465] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" HandleID="k8s-pod-network.e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" Workload="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.840 [INFO][4465] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" HandleID="k8s-pod-network.e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" Workload="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a8580), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-zmwvb", "timestamp":"2025-09-09 00:48:48.840780509 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.840 [INFO][4465] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.840 [INFO][4465] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.840 [INFO][4465] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.846 [INFO][4465] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" host="localhost" Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.858 [INFO][4465] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.868 [INFO][4465] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.870 [INFO][4465] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.871 [INFO][4465] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.871 [INFO][4465] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" host="localhost" Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.873 [INFO][4465] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.882 [INFO][4465] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" host="localhost" Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.894 [INFO][4465] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" host="localhost" Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.894 [INFO][4465] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" host="localhost" Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.894 [INFO][4465] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:48:48.918177 containerd[1540]: 2025-09-09 00:48:48.894 [INFO][4465] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" HandleID="k8s-pod-network.e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" Workload="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:48:48.939814 containerd[1540]: 2025-09-09 00:48:48.896 [INFO][4437] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zmwvb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c0dc5384-6a49-429a-b83f-4f8249484a53", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-zmwvb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cc44314067", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:48.939814 containerd[1540]: 2025-09-09 00:48:48.896 [INFO][4437] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zmwvb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:48:48.939814 containerd[1540]: 2025-09-09 00:48:48.896 [INFO][4437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1cc44314067 ContainerID="e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zmwvb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:48:48.939814 containerd[1540]: 2025-09-09 00:48:48.900 [INFO][4437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-zmwvb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:48:48.939814 containerd[1540]: 2025-09-09 00:48:48.900 [INFO][4437] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zmwvb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c0dc5384-6a49-429a-b83f-4f8249484a53", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b", Pod:"coredns-674b8bbfcf-zmwvb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cc44314067", MAC:"62:3d:28:19:a5:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:48.939814 containerd[1540]: 2025-09-09 00:48:48.915 [INFO][4437] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zmwvb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:48:48.921137 systemd-networkd[1441]: calie39c3faf822: Gained IPv6LL Sep 9 00:48:48.953347 containerd[1540]: time="2025-09-09T00:48:48.951526385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:48:48.953347 containerd[1540]: time="2025-09-09T00:48:48.951583764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:48:48.953347 containerd[1540]: time="2025-09-09T00:48:48.951594476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:48.953347 containerd[1540]: time="2025-09-09T00:48:48.951662855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:48.966085 systemd[1]: Started cri-containerd-e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b.scope - libcontainer container e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b. 
Sep 9 00:48:48.973401 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:48:48.997863 containerd[1540]: time="2025-09-09T00:48:48.997838078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zmwvb,Uid:c0dc5384-6a49-429a-b83f-4f8249484a53,Namespace:kube-system,Attempt:1,} returns sandbox id \"e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b\"" Sep 9 00:48:49.007369 kubelet[2738]: I0909 00:48:49.007164 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6784fb975b-r4r6w" podStartSLOduration=0.933040031 podStartE2EDuration="5.007152947s" podCreationTimestamp="2025-09-09 00:48:44 +0000 UTC" firstStartedPulling="2025-09-09 00:48:44.699058457 +0000 UTC m=+41.267591735" lastFinishedPulling="2025-09-09 00:48:48.773171373 +0000 UTC m=+45.341704651" observedRunningTime="2025-09-09 00:48:49.006904204 +0000 UTC m=+45.575437486" watchObservedRunningTime="2025-09-09 00:48:49.007152947 +0000 UTC m=+45.575686229" Sep 9 00:48:49.056154 containerd[1540]: time="2025-09-09T00:48:49.056122751Z" level=info msg="CreateContainer within sandbox \"e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:48:49.229139 containerd[1540]: time="2025-09-09T00:48:49.229033708Z" level=info msg="CreateContainer within sandbox \"e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b88815a71b57fad148f2a2f1aaa69c802732ad0e0d15da3d2f8e4db4bc1066b\"" Sep 9 00:48:49.239204 containerd[1540]: time="2025-09-09T00:48:49.239171131Z" level=info msg="StartContainer for \"9b88815a71b57fad148f2a2f1aaa69c802732ad0e0d15da3d2f8e4db4bc1066b\"" Sep 9 00:48:49.258104 systemd[1]: Started cri-containerd-9b88815a71b57fad148f2a2f1aaa69c802732ad0e0d15da3d2f8e4db4bc1066b.scope - libcontainer 
container 9b88815a71b57fad148f2a2f1aaa69c802732ad0e0d15da3d2f8e4db4bc1066b. Sep 9 00:48:49.282500 containerd[1540]: time="2025-09-09T00:48:49.282475619Z" level=info msg="StartContainer for \"9b88815a71b57fad148f2a2f1aaa69c802732ad0e0d15da3d2f8e4db4bc1066b\" returns successfully" Sep 9 00:48:49.629801 containerd[1540]: time="2025-09-09T00:48:49.628744112Z" level=info msg="StopPodSandbox for \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\"" Sep 9 00:48:49.630464 containerd[1540]: time="2025-09-09T00:48:49.630056456Z" level=info msg="StopPodSandbox for \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\"" Sep 9 00:48:49.631376 containerd[1540]: time="2025-09-09T00:48:49.631357827Z" level=info msg="StopPodSandbox for \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\"" Sep 9 00:48:49.633959 containerd[1540]: time="2025-09-09T00:48:49.633349442Z" level=info msg="StopPodSandbox for \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\"" Sep 9 00:48:49.798429 containerd[1540]: 2025-09-09 00:48:49.708 [INFO][4635] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Sep 9 00:48:49.798429 containerd[1540]: 2025-09-09 00:48:49.710 [INFO][4635] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" iface="eth0" netns="/var/run/netns/cni-26392a01-230b-4492-5b08-d83782639504" Sep 9 00:48:49.798429 containerd[1540]: 2025-09-09 00:48:49.710 [INFO][4635] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" iface="eth0" netns="/var/run/netns/cni-26392a01-230b-4492-5b08-d83782639504" Sep 9 00:48:49.798429 containerd[1540]: 2025-09-09 00:48:49.711 [INFO][4635] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" iface="eth0" netns="/var/run/netns/cni-26392a01-230b-4492-5b08-d83782639504" Sep 9 00:48:49.798429 containerd[1540]: 2025-09-09 00:48:49.711 [INFO][4635] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Sep 9 00:48:49.798429 containerd[1540]: 2025-09-09 00:48:49.711 [INFO][4635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Sep 9 00:48:49.798429 containerd[1540]: 2025-09-09 00:48:49.775 [INFO][4657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" HandleID="k8s-pod-network.cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Workload="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:48:49.798429 containerd[1540]: 2025-09-09 00:48:49.779 [INFO][4657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:48:49.798429 containerd[1540]: 2025-09-09 00:48:49.779 [INFO][4657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:48:49.798429 containerd[1540]: 2025-09-09 00:48:49.791 [WARNING][4657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" HandleID="k8s-pod-network.cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Workload="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:48:49.798429 containerd[1540]: 2025-09-09 00:48:49.791 [INFO][4657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" HandleID="k8s-pod-network.cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Workload="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:48:49.798429 containerd[1540]: 2025-09-09 00:48:49.793 [INFO][4657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:48:49.798429 containerd[1540]: 2025-09-09 00:48:49.795 [INFO][4635] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Sep 9 00:48:49.815181 containerd[1540]: time="2025-09-09T00:48:49.800152501Z" level=info msg="TearDown network for sandbox \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\" successfully" Sep 9 00:48:49.815181 containerd[1540]: time="2025-09-09T00:48:49.800175869Z" level=info msg="StopPodSandbox for \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\" returns successfully" Sep 9 00:48:49.815181 containerd[1540]: time="2025-09-09T00:48:49.803863815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cc69d998f-tp8b8,Uid:cfbbe3a7-b775-4416-b316-762d53f81c5d,Namespace:calico-system,Attempt:1,}" Sep 9 00:48:49.815181 containerd[1540]: 2025-09-09 00:48:49.722 [INFO][4623] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Sep 9 00:48:49.815181 containerd[1540]: 2025-09-09 00:48:49.722 [INFO][4623] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" iface="eth0" netns="/var/run/netns/cni-dd233191-18fe-eb38-b655-572cb1e11bba" Sep 9 00:48:49.815181 containerd[1540]: 2025-09-09 00:48:49.723 [INFO][4623] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" iface="eth0" netns="/var/run/netns/cni-dd233191-18fe-eb38-b655-572cb1e11bba" Sep 9 00:48:49.815181 containerd[1540]: 2025-09-09 00:48:49.724 [INFO][4623] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" iface="eth0" netns="/var/run/netns/cni-dd233191-18fe-eb38-b655-572cb1e11bba" Sep 9 00:48:49.815181 containerd[1540]: 2025-09-09 00:48:49.724 [INFO][4623] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Sep 9 00:48:49.815181 containerd[1540]: 2025-09-09 00:48:49.724 [INFO][4623] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Sep 9 00:48:49.815181 containerd[1540]: 2025-09-09 00:48:49.783 [INFO][4659] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" HandleID="k8s-pod-network.2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Workload="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:48:49.815181 containerd[1540]: 2025-09-09 00:48:49.783 [INFO][4659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:48:49.815181 containerd[1540]: 2025-09-09 00:48:49.795 [INFO][4659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:48:49.815181 containerd[1540]: 2025-09-09 00:48:49.803 [WARNING][4659] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" HandleID="k8s-pod-network.2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Workload="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:48:49.815181 containerd[1540]: 2025-09-09 00:48:49.803 [INFO][4659] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" HandleID="k8s-pod-network.2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Workload="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:48:49.815181 containerd[1540]: 2025-09-09 00:48:49.805 [INFO][4659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:48:49.815181 containerd[1540]: 2025-09-09 00:48:49.807 [INFO][4623] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Sep 9 00:48:49.815181 containerd[1540]: time="2025-09-09T00:48:49.811066970Z" level=info msg="TearDown network for sandbox \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\" successfully" Sep 9 00:48:49.815181 containerd[1540]: time="2025-09-09T00:48:49.811099363Z" level=info msg="StopPodSandbox for \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\" returns successfully" Sep 9 00:48:49.815181 containerd[1540]: time="2025-09-09T00:48:49.812361734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774984f779-qfzwz,Uid:23688cde-a65d-4c6c-9d95-5a6f71851bf6,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:48:49.801482 systemd[1]: run-netns-cni\x2d26392a01\x2d230b\x2d4492\x2d5b08\x2dd83782639504.mount: Deactivated successfully. 
Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.735 [INFO][4636] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.736 [INFO][4636] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" iface="eth0" netns="/var/run/netns/cni-fa38ec8b-3d65-466b-bb1a-306886276f39" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.737 [INFO][4636] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" iface="eth0" netns="/var/run/netns/cni-fa38ec8b-3d65-466b-bb1a-306886276f39" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.738 [INFO][4636] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" iface="eth0" netns="/var/run/netns/cni-fa38ec8b-3d65-466b-bb1a-306886276f39" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.738 [INFO][4636] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.738 [INFO][4636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.779 [INFO][4668] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" HandleID="k8s-pod-network.63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Workload="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.785 [INFO][4668] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.805 [INFO][4668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.811 [WARNING][4668] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" HandleID="k8s-pod-network.63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Workload="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.811 [INFO][4668] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" HandleID="k8s-pod-network.63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Workload="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.815 [INFO][4668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.816 [INFO][4636] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Sep 9 00:48:49.842685 containerd[1540]: time="2025-09-09T00:48:49.823329957Z" level=info msg="TearDown network for sandbox \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\" successfully" Sep 9 00:48:49.842685 containerd[1540]: time="2025-09-09T00:48:49.823349506Z" level=info msg="StopPodSandbox for \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\" returns successfully" Sep 9 00:48:49.842685 containerd[1540]: time="2025-09-09T00:48:49.823852203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7lw2w,Uid:fe7ec415-6d72-4459-b5ae-85006f84662b,Namespace:kube-system,Attempt:1,}" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.750 [INFO][4627] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.750 [INFO][4627] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" iface="eth0" netns="/var/run/netns/cni-f659470a-b00c-3ddf-d83a-3db49cb75024" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.751 [INFO][4627] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" iface="eth0" netns="/var/run/netns/cni-f659470a-b00c-3ddf-d83a-3db49cb75024" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.755 [INFO][4627] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" iface="eth0" netns="/var/run/netns/cni-f659470a-b00c-3ddf-d83a-3db49cb75024" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.755 [INFO][4627] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.755 [INFO][4627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.800 [INFO][4675] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" HandleID="k8s-pod-network.3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Workload="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.802 [INFO][4675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.815 [INFO][4675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.824 [WARNING][4675] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" HandleID="k8s-pod-network.3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Workload="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.824 [INFO][4675] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" HandleID="k8s-pod-network.3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Workload="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.825 [INFO][4675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:48:49.842685 containerd[1540]: 2025-09-09 00:48:49.826 [INFO][4627] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Sep 9 00:48:49.812735 systemd[1]: run-netns-cni\x2ddd233191\x2d18fe\x2deb38\x2db655\x2d572cb1e11bba.mount: Deactivated successfully. Sep 9 00:48:49.898171 containerd[1540]: time="2025-09-09T00:48:49.833348278Z" level=info msg="TearDown network for sandbox \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\" successfully" Sep 9 00:48:49.898171 containerd[1540]: time="2025-09-09T00:48:49.833368418Z" level=info msg="StopPodSandbox for \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\" returns successfully" Sep 9 00:48:49.898171 containerd[1540]: time="2025-09-09T00:48:49.835411934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6kb7s,Uid:09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5,Namespace:calico-system,Attempt:1,}" Sep 9 00:48:49.821634 systemd[1]: run-netns-cni\x2dfa38ec8b\x2d3d65\x2d466b\x2dbb1a\x2d306886276f39.mount: Deactivated successfully. Sep 9 00:48:49.834853 systemd[1]: run-netns-cni\x2df659470a\x2db00c\x2d3ddf\x2dd83a\x2d3db49cb75024.mount: Deactivated successfully. 
Sep 9 00:48:50.131352 systemd-networkd[1441]: cali4fd57c644ad: Link UP Sep 9 00:48:50.132334 systemd-networkd[1441]: cali4fd57c644ad: Gained carrier Sep 9 00:48:50.165384 kubelet[2738]: I0909 00:48:50.165260 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zmwvb" podStartSLOduration=41.165248484 podStartE2EDuration="41.165248484s" podCreationTimestamp="2025-09-09 00:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:48:50.013642692 +0000 UTC m=+46.582175976" watchObservedRunningTime="2025-09-09 00:48:50.165248484 +0000 UTC m=+46.733781773" Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.052 [INFO][4690] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.058 [INFO][4690] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0 calico-kube-controllers-7cc69d998f- calico-system cfbbe3a7-b775-4416-b316-762d53f81c5d 945 0 2025-09-09 00:48:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cc69d998f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7cc69d998f-tp8b8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4fd57c644ad [] [] }} ContainerID="74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" Namespace="calico-system" Pod="calico-kube-controllers-7cc69d998f-tp8b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-" Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.058 [INFO][4690] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" Namespace="calico-system" Pod="calico-kube-controllers-7cc69d998f-tp8b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.075 [INFO][4702] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" HandleID="k8s-pod-network.74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" Workload="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.075 [INFO][4702] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" HandleID="k8s-pod-network.74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" Workload="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7cc69d998f-tp8b8", "timestamp":"2025-09-09 00:48:50.075176011 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.075 [INFO][4702] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.075 [INFO][4702] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.075 [INFO][4702] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.099 [INFO][4702] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" host="localhost" Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.102 [INFO][4702] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.106 [INFO][4702] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.107 [INFO][4702] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.109 [INFO][4702] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.109 [INFO][4702] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" host="localhost" Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.110 [INFO][4702] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998 Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.115 [INFO][4702] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" host="localhost" Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.126 [INFO][4702] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" host="localhost" Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.126 [INFO][4702] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" host="localhost" Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.126 [INFO][4702] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:48:50.172237 containerd[1540]: 2025-09-09 00:48:50.126 [INFO][4702] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" HandleID="k8s-pod-network.74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" Workload="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:48:50.192200 containerd[1540]: 2025-09-09 00:48:50.129 [INFO][4690] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" Namespace="calico-system" Pod="calico-kube-controllers-7cc69d998f-tp8b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0", GenerateName:"calico-kube-controllers-7cc69d998f-", Namespace:"calico-system", SelfLink:"", UID:"cfbbe3a7-b775-4416-b316-762d53f81c5d", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cc69d998f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7cc69d998f-tp8b8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4fd57c644ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:50.192200 containerd[1540]: 2025-09-09 00:48:50.129 [INFO][4690] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" Namespace="calico-system" Pod="calico-kube-controllers-7cc69d998f-tp8b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:48:50.192200 containerd[1540]: 2025-09-09 00:48:50.130 [INFO][4690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fd57c644ad ContainerID="74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" Namespace="calico-system" Pod="calico-kube-controllers-7cc69d998f-tp8b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:48:50.192200 containerd[1540]: 2025-09-09 00:48:50.131 [INFO][4690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" Namespace="calico-system" Pod="calico-kube-controllers-7cc69d998f-tp8b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:48:50.192200 containerd[1540]: 2025-09-09 
00:48:50.132 [INFO][4690] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" Namespace="calico-system" Pod="calico-kube-controllers-7cc69d998f-tp8b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0", GenerateName:"calico-kube-controllers-7cc69d998f-", Namespace:"calico-system", SelfLink:"", UID:"cfbbe3a7-b775-4416-b316-762d53f81c5d", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cc69d998f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998", Pod:"calico-kube-controllers-7cc69d998f-tp8b8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4fd57c644ad", MAC:"f6:4c:cc:d8:05:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:50.192200 containerd[1540]: 2025-09-09 
00:48:50.166 [INFO][4690] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998" Namespace="calico-system" Pod="calico-kube-controllers-7cc69d998f-tp8b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:48:50.200316 systemd-networkd[1441]: cali1cc44314067: Gained IPv6LL Sep 9 00:48:50.251349 containerd[1540]: time="2025-09-09T00:48:50.250941791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:48:50.251477 containerd[1540]: time="2025-09-09T00:48:50.251321889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:48:50.251527 containerd[1540]: time="2025-09-09T00:48:50.251483870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:50.252616 containerd[1540]: time="2025-09-09T00:48:50.252510408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:50.253626 systemd-networkd[1441]: cali7b9a42bcfa5: Link UP Sep 9 00:48:50.254273 systemd-networkd[1441]: cali7b9a42bcfa5: Gained carrier Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.118 [INFO][4710] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.128 [INFO][4710] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0 calico-apiserver-774984f779- calico-apiserver 23688cde-a65d-4c6c-9d95-5a6f71851bf6 946 0 2025-09-09 00:48:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:774984f779 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-774984f779-qfzwz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7b9a42bcfa5 [] [] }} ContainerID="466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-qfzwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--qfzwz-" Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.128 [INFO][4710] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-qfzwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.164 [INFO][4732] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" 
HandleID="k8s-pod-network.466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" Workload="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.168 [INFO][4732] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" HandleID="k8s-pod-network.466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" Workload="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-774984f779-qfzwz", "timestamp":"2025-09-09 00:48:50.164447017 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.168 [INFO][4732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.168 [INFO][4732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.168 [INFO][4732] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.200 [INFO][4732] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" host="localhost" Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.207 [INFO][4732] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.215 [INFO][4732] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.219 [INFO][4732] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.221 [INFO][4732] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.221 [INFO][4732] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" host="localhost" Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.221 [INFO][4732] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7 Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.227 [INFO][4732] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" host="localhost" Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.246 [INFO][4732] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" host="localhost" Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.246 [INFO][4732] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" host="localhost" Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.246 [INFO][4732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:48:50.277014 containerd[1540]: 2025-09-09 00:48:50.246 [INFO][4732] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" HandleID="k8s-pod-network.466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" Workload="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:48:50.277433 containerd[1540]: 2025-09-09 00:48:50.251 [INFO][4710] cni-plugin/k8s.go 418: Populated endpoint ContainerID="466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-qfzwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0", GenerateName:"calico-apiserver-774984f779-", Namespace:"calico-apiserver", SelfLink:"", UID:"23688cde-a65d-4c6c-9d95-5a6f71851bf6", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774984f779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-774984f779-qfzwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b9a42bcfa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:50.277433 containerd[1540]: 2025-09-09 00:48:50.251 [INFO][4710] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-qfzwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:48:50.277433 containerd[1540]: 2025-09-09 00:48:50.251 [INFO][4710] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b9a42bcfa5 ContainerID="466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-qfzwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:48:50.277433 containerd[1540]: 2025-09-09 00:48:50.254 [INFO][4710] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-qfzwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:48:50.277433 containerd[1540]: 2025-09-09 00:48:50.254 [INFO][4710] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-qfzwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0", GenerateName:"calico-apiserver-774984f779-", Namespace:"calico-apiserver", SelfLink:"", UID:"23688cde-a65d-4c6c-9d95-5a6f71851bf6", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774984f779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7", Pod:"calico-apiserver-774984f779-qfzwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b9a42bcfa5", MAC:"5a:23:f6:af:8e:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:50.277433 containerd[1540]: 2025-09-09 00:48:50.275 [INFO][4710] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7" Namespace="calico-apiserver" Pod="calico-apiserver-774984f779-qfzwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:48:50.304133 systemd[1]: Started cri-containerd-74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998.scope - libcontainer container 74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998. Sep 9 00:48:50.319274 containerd[1540]: time="2025-09-09T00:48:50.318950853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:48:50.319274 containerd[1540]: time="2025-09-09T00:48:50.319010538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:48:50.319274 containerd[1540]: time="2025-09-09T00:48:50.319034792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:50.319274 containerd[1540]: time="2025-09-09T00:48:50.319110370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:50.325778 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:48:50.339115 systemd[1]: Started cri-containerd-466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7.scope - libcontainer container 466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7. 
Sep 9 00:48:50.346000 systemd-networkd[1441]: calidc32bf4379b: Link UP Sep 9 00:48:50.346356 systemd-networkd[1441]: calidc32bf4379b: Gained carrier Sep 9 00:48:50.363792 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:48:50.377265 containerd[1540]: time="2025-09-09T00:48:50.377165262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cc69d998f-tp8b8,Uid:cfbbe3a7-b775-4416-b316-762d53f81c5d,Namespace:calico-system,Attempt:1,} returns sandbox id \"74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998\"" Sep 9 00:48:50.399146 containerd[1540]: time="2025-09-09T00:48:50.398951997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-774984f779-qfzwz,Uid:23688cde-a65d-4c6c-9d95-5a6f71851bf6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7\"" Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.160 [INFO][4727] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.173 [INFO][4727] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0 coredns-674b8bbfcf- kube-system fe7ec415-6d72-4459-b5ae-85006f84662b 947 0 2025-09-09 00:48:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-7lw2w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidc32bf4379b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" Namespace="kube-system" Pod="coredns-674b8bbfcf-7lw2w" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7lw2w-" Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.173 [INFO][4727] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" Namespace="kube-system" Pod="coredns-674b8bbfcf-7lw2w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.201 [INFO][4763] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" HandleID="k8s-pod-network.f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" Workload="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.201 [INFO][4763] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" HandleID="k8s-pod-network.f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" Workload="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad6a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-7lw2w", "timestamp":"2025-09-09 00:48:50.201317688 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.201 [INFO][4763] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.246 [INFO][4763] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.246 [INFO][4763] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.300 [INFO][4763] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" host="localhost" Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.306 [INFO][4763] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.314 [INFO][4763] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.315 [INFO][4763] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.318 [INFO][4763] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.318 [INFO][4763] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" host="localhost" Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.319 [INFO][4763] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124 Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.326 [INFO][4763] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" host="localhost" Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.338 [INFO][4763] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" host="localhost" Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.338 [INFO][4763] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" host="localhost" Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.338 [INFO][4763] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:48:50.399146 containerd[1540]: 2025-09-09 00:48:50.338 [INFO][4763] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" HandleID="k8s-pod-network.f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" Workload="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:48:50.399624 containerd[1540]: 2025-09-09 00:48:50.343 [INFO][4727] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" Namespace="kube-system" Pod="coredns-674b8bbfcf-7lw2w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe7ec415-6d72-4459-b5ae-85006f84662b", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-7lw2w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc32bf4379b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:50.399624 containerd[1540]: 2025-09-09 00:48:50.343 [INFO][4727] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" Namespace="kube-system" Pod="coredns-674b8bbfcf-7lw2w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:48:50.399624 containerd[1540]: 2025-09-09 00:48:50.343 [INFO][4727] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc32bf4379b ContainerID="f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" Namespace="kube-system" Pod="coredns-674b8bbfcf-7lw2w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:48:50.399624 containerd[1540]: 2025-09-09 00:48:50.348 [INFO][4727] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" Namespace="kube-system" Pod="coredns-674b8bbfcf-7lw2w" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:48:50.399624 containerd[1540]: 2025-09-09 00:48:50.350 [INFO][4727] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" Namespace="kube-system" Pod="coredns-674b8bbfcf-7lw2w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe7ec415-6d72-4459-b5ae-85006f84662b", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124", Pod:"coredns-674b8bbfcf-7lw2w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc32bf4379b", MAC:"3e:f7:3e:28:a6:61", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:50.399624 containerd[1540]: 2025-09-09 00:48:50.391 [INFO][4727] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124" Namespace="kube-system" Pod="coredns-674b8bbfcf-7lw2w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:48:50.500543 systemd-networkd[1441]: cali85891b56877: Link UP Sep 9 00:48:50.501648 systemd-networkd[1441]: cali85891b56877: Gained carrier Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.179 [INFO][4744] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.188 [INFO][4744] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6kb7s-eth0 csi-node-driver- calico-system 09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5 948 0 2025-09-09 00:48:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6kb7s eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali85891b56877 [] [] }} ContainerID="df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" Namespace="calico-system" Pod="csi-node-driver-6kb7s" WorkloadEndpoint="localhost-k8s-csi--node--driver--6kb7s-" Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.188 [INFO][4744] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" Namespace="calico-system" Pod="csi-node-driver-6kb7s" WorkloadEndpoint="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.234 [INFO][4770] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" HandleID="k8s-pod-network.df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" Workload="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.234 [INFO][4770] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" HandleID="k8s-pod-network.df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" Workload="localhost-k8s-csi--node--driver--6kb7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a8140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6kb7s", "timestamp":"2025-09-09 00:48:50.234699646 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.234 [INFO][4770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.338 [INFO][4770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.338 [INFO][4770] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.402 [INFO][4770] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" host="localhost" Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.408 [INFO][4770] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.412 [INFO][4770] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.413 [INFO][4770] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.414 [INFO][4770] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.414 [INFO][4770] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" host="localhost" Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.415 [INFO][4770] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158 Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.435 [INFO][4770] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" host="localhost" Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.477 [INFO][4770] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" host="localhost" Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.477 [INFO][4770] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" host="localhost" Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.477 [INFO][4770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:48:50.523912 containerd[1540]: 2025-09-09 00:48:50.477 [INFO][4770] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" HandleID="k8s-pod-network.df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" Workload="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:48:50.527035 containerd[1540]: 2025-09-09 00:48:50.495 [INFO][4744] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" Namespace="calico-system" Pod="csi-node-driver-6kb7s" WorkloadEndpoint="localhost-k8s-csi--node--driver--6kb7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6kb7s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6kb7s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali85891b56877", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:50.527035 containerd[1540]: 2025-09-09 00:48:50.495 [INFO][4744] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" Namespace="calico-system" Pod="csi-node-driver-6kb7s" WorkloadEndpoint="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:48:50.527035 containerd[1540]: 2025-09-09 00:48:50.495 [INFO][4744] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85891b56877 ContainerID="df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" Namespace="calico-system" Pod="csi-node-driver-6kb7s" WorkloadEndpoint="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:48:50.527035 containerd[1540]: 2025-09-09 00:48:50.503 [INFO][4744] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" Namespace="calico-system" Pod="csi-node-driver-6kb7s" WorkloadEndpoint="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:48:50.527035 containerd[1540]: 2025-09-09 00:48:50.504 [INFO][4744] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" 
Namespace="calico-system" Pod="csi-node-driver-6kb7s" WorkloadEndpoint="localhost-k8s-csi--node--driver--6kb7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6kb7s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158", Pod:"csi-node-driver-6kb7s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali85891b56877", MAC:"c6:be:84:c5:31:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:48:50.527035 containerd[1540]: 2025-09-09 00:48:50.516 [INFO][4744] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158" Namespace="calico-system" Pod="csi-node-driver-6kb7s" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:48:50.537841 containerd[1540]: time="2025-09-09T00:48:50.537730661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:48:50.538009 containerd[1540]: time="2025-09-09T00:48:50.537780368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:48:50.538009 containerd[1540]: time="2025-09-09T00:48:50.537793048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:50.538009 containerd[1540]: time="2025-09-09T00:48:50.537850230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:50.560967 containerd[1540]: time="2025-09-09T00:48:50.560180350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:48:50.560967 containerd[1540]: time="2025-09-09T00:48:50.560222822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:48:50.560967 containerd[1540]: time="2025-09-09T00:48:50.560233140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:50.560967 containerd[1540]: time="2025-09-09T00:48:50.560595960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:50.564409 systemd[1]: Started cri-containerd-f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124.scope - libcontainer container f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124. 
Sep 9 00:48:50.572429 systemd[1]: Started cri-containerd-df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158.scope - libcontainer container df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158. Sep 9 00:48:50.577103 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:48:50.593170 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:48:50.608913 containerd[1540]: time="2025-09-09T00:48:50.608709363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6kb7s,Uid:09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5,Namespace:calico-system,Attempt:1,} returns sandbox id \"df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158\"" Sep 9 00:48:50.620744 containerd[1540]: time="2025-09-09T00:48:50.620717251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7lw2w,Uid:fe7ec415-6d72-4459-b5ae-85006f84662b,Namespace:kube-system,Attempt:1,} returns sandbox id \"f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124\"" Sep 9 00:48:50.660860 containerd[1540]: time="2025-09-09T00:48:50.660760109Z" level=info msg="CreateContainer within sandbox \"f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:48:50.755410 containerd[1540]: time="2025-09-09T00:48:50.755204195Z" level=info msg="CreateContainer within sandbox \"f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"407ec8bf6ee7a5238ef30057a4a61456de52aac6e2cf6543185f4a340f638be2\"" Sep 9 00:48:50.756819 containerd[1540]: time="2025-09-09T00:48:50.756145853Z" level=info msg="StartContainer for \"407ec8bf6ee7a5238ef30057a4a61456de52aac6e2cf6543185f4a340f638be2\"" Sep 9 00:48:50.786178 systemd[1]: Started 
cri-containerd-407ec8bf6ee7a5238ef30057a4a61456de52aac6e2cf6543185f4a340f638be2.scope - libcontainer container 407ec8bf6ee7a5238ef30057a4a61456de52aac6e2cf6543185f4a340f638be2. Sep 9 00:48:50.811951 containerd[1540]: time="2025-09-09T00:48:50.811922636Z" level=info msg="StartContainer for \"407ec8bf6ee7a5238ef30057a4a61456de52aac6e2cf6543185f4a340f638be2\" returns successfully" Sep 9 00:48:51.015031 kubelet[2738]: I0909 00:48:51.014282 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7lw2w" podStartSLOduration=42.014268596 podStartE2EDuration="42.014268596s" podCreationTimestamp="2025-09-09 00:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:48:51.013631462 +0000 UTC m=+47.582164750" watchObservedRunningTime="2025-09-09 00:48:51.014268596 +0000 UTC m=+47.582801879" Sep 9 00:48:51.736183 systemd-networkd[1441]: cali7b9a42bcfa5: Gained IPv6LL Sep 9 00:48:51.736393 systemd-networkd[1441]: cali85891b56877: Gained IPv6LL Sep 9 00:48:51.800110 systemd-networkd[1441]: cali4fd57c644ad: Gained IPv6LL Sep 9 00:48:52.056227 systemd-networkd[1441]: calidc32bf4379b: Gained IPv6LL Sep 9 00:48:52.483802 kubelet[2738]: I0909 00:48:52.483451 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:48:52.691034 containerd[1540]: time="2025-09-09T00:48:52.690775224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:52.693113 containerd[1540]: time="2025-09-09T00:48:52.692594099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 00:48:52.693960 containerd[1540]: time="2025-09-09T00:48:52.693505425Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:52.695850 containerd[1540]: time="2025-09-09T00:48:52.695577222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:52.697150 containerd[1540]: time="2025-09-09T00:48:52.697021017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.922761595s" Sep 9 00:48:52.697242 containerd[1540]: time="2025-09-09T00:48:52.697153092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:48:52.698691 containerd[1540]: time="2025-09-09T00:48:52.698667121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 00:48:52.703789 containerd[1540]: time="2025-09-09T00:48:52.703761205Z" level=info msg="CreateContainer within sandbox \"5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:48:52.717435 containerd[1540]: time="2025-09-09T00:48:52.717357030Z" level=info msg="CreateContainer within sandbox \"5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d0724722d3c7b2b41d76ce61b7d954037885bfb47191c4b743450c1e6b6458ef\"" Sep 9 00:48:52.718897 containerd[1540]: time="2025-09-09T00:48:52.718649612Z" level=info msg="StartContainer for \"d0724722d3c7b2b41d76ce61b7d954037885bfb47191c4b743450c1e6b6458ef\"" Sep 9 
00:48:52.762163 systemd[1]: Started cri-containerd-d0724722d3c7b2b41d76ce61b7d954037885bfb47191c4b743450c1e6b6458ef.scope - libcontainer container d0724722d3c7b2b41d76ce61b7d954037885bfb47191c4b743450c1e6b6458ef. Sep 9 00:48:52.845378 containerd[1540]: time="2025-09-09T00:48:52.844780740Z" level=info msg="StartContainer for \"d0724722d3c7b2b41d76ce61b7d954037885bfb47191c4b743450c1e6b6458ef\" returns successfully" Sep 9 00:48:53.137348 kubelet[2738]: I0909 00:48:53.137060 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-774984f779-zprwx" podStartSLOduration=27.445737595 podStartE2EDuration="33.137042669s" podCreationTimestamp="2025-09-09 00:48:20 +0000 UTC" firstStartedPulling="2025-09-09 00:48:47.007099472 +0000 UTC m=+43.575632750" lastFinishedPulling="2025-09-09 00:48:52.698404545 +0000 UTC m=+49.266937824" observedRunningTime="2025-09-09 00:48:53.136746252 +0000 UTC m=+49.705279543" watchObservedRunningTime="2025-09-09 00:48:53.137042669 +0000 UTC m=+49.705575960" Sep 9 00:48:54.104957 kernel: bpftool[5153]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 9 00:48:54.132780 kubelet[2738]: I0909 00:48:54.132593 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:48:54.405347 systemd-networkd[1441]: vxlan.calico: Link UP Sep 9 00:48:54.405354 systemd-networkd[1441]: vxlan.calico: Gained carrier Sep 9 00:48:55.556409 kubelet[2738]: I0909 00:48:55.556039 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:48:56.408236 systemd-networkd[1441]: vxlan.calico: Gained IPv6LL Sep 9 00:48:57.893530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2395768651.mount: Deactivated successfully. 
Sep 9 00:48:58.430763 containerd[1540]: time="2025-09-09T00:48:58.430708649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 9 00:48:58.433622 containerd[1540]: time="2025-09-09T00:48:58.430866962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:58.438094 containerd[1540]: time="2025-09-09T00:48:58.437897022Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:58.440016 containerd[1540]: time="2025-09-09T00:48:58.439974032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:48:58.440669 containerd[1540]: time="2025-09-09T00:48:58.440651647Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.741955606s" Sep 9 00:48:58.440700 containerd[1540]: time="2025-09-09T00:48:58.440673824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 9 00:48:58.495268 containerd[1540]: time="2025-09-09T00:48:58.495240676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 00:48:58.658265 containerd[1540]: time="2025-09-09T00:48:58.658167923Z" level=info msg="CreateContainer within sandbox 
\"cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 00:48:58.726158 containerd[1540]: time="2025-09-09T00:48:58.726048508Z" level=info msg="CreateContainer within sandbox \"cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"61fc488f5cd82853d4568033d5ea2e6985a29650b375b8bbe4dab7c090a8e5e7\"" Sep 9 00:48:58.760199 containerd[1540]: time="2025-09-09T00:48:58.760176779Z" level=info msg="StartContainer for \"61fc488f5cd82853d4568033d5ea2e6985a29650b375b8bbe4dab7c090a8e5e7\"" Sep 9 00:48:58.933434 systemd[1]: run-containerd-runc-k8s.io-61fc488f5cd82853d4568033d5ea2e6985a29650b375b8bbe4dab7c090a8e5e7-runc.W3FXTS.mount: Deactivated successfully. Sep 9 00:48:58.943123 systemd[1]: Started cri-containerd-61fc488f5cd82853d4568033d5ea2e6985a29650b375b8bbe4dab7c090a8e5e7.scope - libcontainer container 61fc488f5cd82853d4568033d5ea2e6985a29650b375b8bbe4dab7c090a8e5e7. 
Sep 9 00:48:58.996728 containerd[1540]: time="2025-09-09T00:48:58.996658244Z" level=info msg="StartContainer for \"61fc488f5cd82853d4568033d5ea2e6985a29650b375b8bbe4dab7c090a8e5e7\" returns successfully" Sep 9 00:48:59.610260 kubelet[2738]: I0909 00:48:59.607444 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-bg69g" podStartSLOduration=26.178101527 podStartE2EDuration="37.600728941s" podCreationTimestamp="2025-09-09 00:48:22 +0000 UTC" firstStartedPulling="2025-09-09 00:48:47.072029506 +0000 UTC m=+43.640562784" lastFinishedPulling="2025-09-09 00:48:58.494656918 +0000 UTC m=+55.063190198" observedRunningTime="2025-09-09 00:48:59.598297535 +0000 UTC m=+56.166830819" watchObservedRunningTime="2025-09-09 00:48:59.600728941 +0000 UTC m=+56.169262225" Sep 9 00:49:00.636882 systemd[1]: run-containerd-runc-k8s.io-61fc488f5cd82853d4568033d5ea2e6985a29650b375b8bbe4dab7c090a8e5e7-runc.qXXDE1.mount: Deactivated successfully. Sep 9 00:49:02.501382 containerd[1540]: time="2025-09-09T00:49:02.501323747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:49:02.504731 containerd[1540]: time="2025-09-09T00:49:02.504409374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 9 00:49:02.510361 containerd[1540]: time="2025-09-09T00:49:02.509170216Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:49:02.512501 containerd[1540]: time="2025-09-09T00:49:02.512155006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:49:02.513301 
containerd[1540]: time="2025-09-09T00:49:02.513276605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.018009043s" Sep 9 00:49:02.513371 containerd[1540]: time="2025-09-09T00:49:02.513305016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 00:49:02.655020 containerd[1540]: time="2025-09-09T00:49:02.654642074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:49:03.027900 containerd[1540]: time="2025-09-09T00:49:03.027868365Z" level=info msg="CreateContainer within sandbox \"74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 00:49:03.120525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185234249.mount: Deactivated successfully. 
Sep 9 00:49:03.165513 containerd[1540]: time="2025-09-09T00:49:03.165449971Z" level=info msg="CreateContainer within sandbox \"74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bbaebdf34ab0fe43bc299a994aee73cfd328b005a15c0fc1c4672cfcb061a278\"" Sep 9 00:49:03.178968 containerd[1540]: time="2025-09-09T00:49:03.165912175Z" level=info msg="StartContainer for \"bbaebdf34ab0fe43bc299a994aee73cfd328b005a15c0fc1c4672cfcb061a278\"" Sep 9 00:49:03.228217 systemd[1]: Started cri-containerd-bbaebdf34ab0fe43bc299a994aee73cfd328b005a15c0fc1c4672cfcb061a278.scope - libcontainer container bbaebdf34ab0fe43bc299a994aee73cfd328b005a15c0fc1c4672cfcb061a278. Sep 9 00:49:03.253125 containerd[1540]: time="2025-09-09T00:49:03.252952073Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:49:03.255598 containerd[1540]: time="2025-09-09T00:49:03.255553949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 00:49:03.257314 containerd[1540]: time="2025-09-09T00:49:03.256730722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 602.065478ms" Sep 9 00:49:03.257314 containerd[1540]: time="2025-09-09T00:49:03.256762967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:49:03.264040 containerd[1540]: time="2025-09-09T00:49:03.264015030Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 00:49:03.265483 containerd[1540]: time="2025-09-09T00:49:03.265465721Z" level=info msg="CreateContainer within sandbox \"466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:49:03.285032 containerd[1540]: time="2025-09-09T00:49:03.284942615Z" level=info msg="CreateContainer within sandbox \"466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de93c0fae68a1452f1d467fcbe8c99e593707b8aa7c69b06ff3b47213b71b6e3\"" Sep 9 00:49:03.286446 containerd[1540]: time="2025-09-09T00:49:03.285636626Z" level=info msg="StartContainer for \"de93c0fae68a1452f1d467fcbe8c99e593707b8aa7c69b06ff3b47213b71b6e3\"" Sep 9 00:49:03.295003 containerd[1540]: time="2025-09-09T00:49:03.294704391Z" level=info msg="StartContainer for \"bbaebdf34ab0fe43bc299a994aee73cfd328b005a15c0fc1c4672cfcb061a278\" returns successfully" Sep 9 00:49:03.319396 systemd[1]: Started cri-containerd-de93c0fae68a1452f1d467fcbe8c99e593707b8aa7c69b06ff3b47213b71b6e3.scope - libcontainer container de93c0fae68a1452f1d467fcbe8c99e593707b8aa7c69b06ff3b47213b71b6e3. 
Sep 9 00:49:03.373854 containerd[1540]: time="2025-09-09T00:49:03.373827782Z" level=info msg="StartContainer for \"de93c0fae68a1452f1d467fcbe8c99e593707b8aa7c69b06ff3b47213b71b6e3\" returns successfully" Sep 9 00:49:03.840493 containerd[1540]: time="2025-09-09T00:49:03.840463838Z" level=info msg="StopPodSandbox for \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\"" Sep 9 00:49:04.360714 kubelet[2738]: I0909 00:49:04.359912 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7cc69d998f-tp8b8" podStartSLOduration=30.037290952 podStartE2EDuration="42.311548365s" podCreationTimestamp="2025-09-09 00:48:22 +0000 UTC" firstStartedPulling="2025-09-09 00:48:50.378128178 +0000 UTC m=+46.946661456" lastFinishedPulling="2025-09-09 00:49:02.652385587 +0000 UTC m=+59.220918869" observedRunningTime="2025-09-09 00:49:04.297799162 +0000 UTC m=+60.866332451" watchObservedRunningTime="2025-09-09 00:49:04.311548365 +0000 UTC m=+60.880081649" Sep 9 00:49:05.333967 kubelet[2738]: I0909 00:49:05.333907 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-774984f779-qfzwz" podStartSLOduration=32.478203544 podStartE2EDuration="45.333881543s" podCreationTimestamp="2025-09-09 00:48:20 +0000 UTC" firstStartedPulling="2025-09-09 00:48:50.402095971 +0000 UTC m=+46.970629249" lastFinishedPulling="2025-09-09 00:49:03.257773968 +0000 UTC m=+59.826307248" observedRunningTime="2025-09-09 00:49:04.392528337 +0000 UTC m=+60.961061625" watchObservedRunningTime="2025-09-09 00:49:05.333881543 +0000 UTC m=+61.902414830" Sep 9 00:49:05.718900 containerd[1540]: 2025-09-09 00:49:04.954 [WARNING][5506] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" WorkloadEndpoint="localhost-k8s-whisker--6d688cd477--dl2q8-eth0" Sep 9 00:49:05.718900 
containerd[1540]: 2025-09-09 00:49:04.959 [INFO][5506] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Sep 9 00:49:05.718900 containerd[1540]: 2025-09-09 00:49:04.959 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" iface="eth0" netns="" Sep 9 00:49:05.718900 containerd[1540]: 2025-09-09 00:49:04.959 [INFO][5506] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Sep 9 00:49:05.718900 containerd[1540]: 2025-09-09 00:49:04.959 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Sep 9 00:49:05.718900 containerd[1540]: 2025-09-09 00:49:05.654 [INFO][5525] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" HandleID="k8s-pod-network.1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Workload="localhost-k8s-whisker--6d688cd477--dl2q8-eth0" Sep 9 00:49:05.718900 containerd[1540]: 2025-09-09 00:49:05.685 [INFO][5525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:49:05.718900 containerd[1540]: 2025-09-09 00:49:05.691 [INFO][5525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:05.718900 containerd[1540]: 2025-09-09 00:49:05.712 [WARNING][5525] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" HandleID="k8s-pod-network.1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Workload="localhost-k8s-whisker--6d688cd477--dl2q8-eth0" Sep 9 00:49:05.718900 containerd[1540]: 2025-09-09 00:49:05.712 [INFO][5525] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" HandleID="k8s-pod-network.1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Workload="localhost-k8s-whisker--6d688cd477--dl2q8-eth0" Sep 9 00:49:05.718900 containerd[1540]: 2025-09-09 00:49:05.714 [INFO][5525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:05.718900 containerd[1540]: 2025-09-09 00:49:05.716 [INFO][5506] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Sep 9 00:49:05.725077 containerd[1540]: time="2025-09-09T00:49:05.722810734Z" level=info msg="TearDown network for sandbox \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\" successfully" Sep 9 00:49:05.725077 containerd[1540]: time="2025-09-09T00:49:05.722832647Z" level=info msg="StopPodSandbox for \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\" returns successfully" Sep 9 00:49:05.839250 containerd[1540]: time="2025-09-09T00:49:05.839150990Z" level=info msg="RemovePodSandbox for \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\"" Sep 9 00:49:05.863193 containerd[1540]: time="2025-09-09T00:49:05.862981831Z" level=info msg="Forcibly stopping sandbox \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\"" Sep 9 00:49:05.972178 containerd[1540]: 2025-09-09 00:49:05.937 [WARNING][5542] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" 
WorkloadEndpoint="localhost-k8s-whisker--6d688cd477--dl2q8-eth0" Sep 9 00:49:05.972178 containerd[1540]: 2025-09-09 00:49:05.946 [INFO][5542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Sep 9 00:49:05.972178 containerd[1540]: 2025-09-09 00:49:05.946 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" iface="eth0" netns="" Sep 9 00:49:05.972178 containerd[1540]: 2025-09-09 00:49:05.946 [INFO][5542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Sep 9 00:49:05.972178 containerd[1540]: 2025-09-09 00:49:05.946 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Sep 9 00:49:05.972178 containerd[1540]: 2025-09-09 00:49:05.962 [INFO][5549] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" HandleID="k8s-pod-network.1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Workload="localhost-k8s-whisker--6d688cd477--dl2q8-eth0" Sep 9 00:49:05.972178 containerd[1540]: 2025-09-09 00:49:05.962 [INFO][5549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:49:05.972178 containerd[1540]: 2025-09-09 00:49:05.962 [INFO][5549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:05.972178 containerd[1540]: 2025-09-09 00:49:05.966 [WARNING][5549] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" HandleID="k8s-pod-network.1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Workload="localhost-k8s-whisker--6d688cd477--dl2q8-eth0" Sep 9 00:49:05.972178 containerd[1540]: 2025-09-09 00:49:05.967 [INFO][5549] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" HandleID="k8s-pod-network.1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Workload="localhost-k8s-whisker--6d688cd477--dl2q8-eth0" Sep 9 00:49:05.972178 containerd[1540]: 2025-09-09 00:49:05.967 [INFO][5549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:05.972178 containerd[1540]: 2025-09-09 00:49:05.970 [INFO][5542] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459" Sep 9 00:49:05.972178 containerd[1540]: time="2025-09-09T00:49:05.972149777Z" level=info msg="TearDown network for sandbox \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\" successfully" Sep 9 00:49:05.999336 containerd[1540]: time="2025-09-09T00:49:05.999228838Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:49:06.009317 containerd[1540]: time="2025-09-09T00:49:06.009201131Z" level=info msg="RemovePodSandbox \"1c08a14e520420d3f351ed681b13ec0a25d99314b4ede22cc07bc7defe4c5459\" returns successfully" Sep 9 00:49:06.025044 containerd[1540]: time="2025-09-09T00:49:06.025017878Z" level=info msg="StopPodSandbox for \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\"" Sep 9 00:49:06.081580 containerd[1540]: 2025-09-09 00:49:06.057 [WARNING][5563] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe7ec415-6d72-4459-b5ae-85006f84662b", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124", Pod:"coredns-674b8bbfcf-7lw2w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc32bf4379b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:49:06.081580 containerd[1540]: 2025-09-09 00:49:06.058 [INFO][5563] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Sep 9 00:49:06.081580 containerd[1540]: 2025-09-09 00:49:06.058 [INFO][5563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" iface="eth0" netns="" Sep 9 00:49:06.081580 containerd[1540]: 2025-09-09 00:49:06.058 [INFO][5563] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Sep 9 00:49:06.081580 containerd[1540]: 2025-09-09 00:49:06.058 [INFO][5563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Sep 9 00:49:06.081580 containerd[1540]: 2025-09-09 00:49:06.072 [INFO][5570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" HandleID="k8s-pod-network.63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Workload="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:49:06.081580 containerd[1540]: 2025-09-09 00:49:06.073 [INFO][5570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 00:49:06.081580 containerd[1540]: 2025-09-09 00:49:06.073 [INFO][5570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:06.081580 containerd[1540]: 2025-09-09 00:49:06.077 [WARNING][5570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" HandleID="k8s-pod-network.63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Workload="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:49:06.081580 containerd[1540]: 2025-09-09 00:49:06.077 [INFO][5570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" HandleID="k8s-pod-network.63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Workload="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:49:06.081580 containerd[1540]: 2025-09-09 00:49:06.078 [INFO][5570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:06.081580 containerd[1540]: 2025-09-09 00:49:06.080 [INFO][5563] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Sep 9 00:49:06.083712 containerd[1540]: time="2025-09-09T00:49:06.081603220Z" level=info msg="TearDown network for sandbox \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\" successfully" Sep 9 00:49:06.083712 containerd[1540]: time="2025-09-09T00:49:06.081633689Z" level=info msg="StopPodSandbox for \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\" returns successfully" Sep 9 00:49:06.083712 containerd[1540]: time="2025-09-09T00:49:06.082076001Z" level=info msg="RemovePodSandbox for \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\"" Sep 9 00:49:06.083712 containerd[1540]: time="2025-09-09T00:49:06.082090993Z" level=info msg="Forcibly stopping sandbox \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\"" Sep 9 00:49:06.138571 containerd[1540]: 2025-09-09 00:49:06.116 [WARNING][5584] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe7ec415-6d72-4459-b5ae-85006f84662b", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f59428d69c3316fa89852d44a02ba21cee8ee53ee2feb5ad8c6ec2c8a5c49124", Pod:"coredns-674b8bbfcf-7lw2w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc32bf4379b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:49:06.138571 containerd[1540]: 2025-09-09 00:49:06.116 [INFO][5584] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Sep 9 00:49:06.138571 containerd[1540]: 2025-09-09 00:49:06.116 [INFO][5584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" iface="eth0" netns="" Sep 9 00:49:06.138571 containerd[1540]: 2025-09-09 00:49:06.116 [INFO][5584] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Sep 9 00:49:06.138571 containerd[1540]: 2025-09-09 00:49:06.116 [INFO][5584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Sep 9 00:49:06.138571 containerd[1540]: 2025-09-09 00:49:06.131 [INFO][5591] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" HandleID="k8s-pod-network.63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Workload="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:49:06.138571 containerd[1540]: 2025-09-09 00:49:06.131 [INFO][5591] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:49:06.138571 containerd[1540]: 2025-09-09 00:49:06.131 [INFO][5591] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:06.138571 containerd[1540]: 2025-09-09 00:49:06.135 [WARNING][5591] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" HandleID="k8s-pod-network.63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Workload="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:49:06.138571 containerd[1540]: 2025-09-09 00:49:06.135 [INFO][5591] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" HandleID="k8s-pod-network.63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Workload="localhost-k8s-coredns--674b8bbfcf--7lw2w-eth0" Sep 9 00:49:06.138571 containerd[1540]: 2025-09-09 00:49:06.136 [INFO][5591] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:06.138571 containerd[1540]: 2025-09-09 00:49:06.137 [INFO][5584] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc" Sep 9 00:49:06.141256 containerd[1540]: time="2025-09-09T00:49:06.138592193Z" level=info msg="TearDown network for sandbox \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\" successfully" Sep 9 00:49:06.160528 containerd[1540]: time="2025-09-09T00:49:06.160498744Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:49:06.164453 containerd[1540]: time="2025-09-09T00:49:06.164428154Z" level=info msg="RemovePodSandbox \"63a4e9e4f6452272ff4ad9895da994a59d383c354af470e5c8ceb7e80992c5fc\" returns successfully" Sep 9 00:49:06.164899 containerd[1540]: time="2025-09-09T00:49:06.164728360Z" level=info msg="StopPodSandbox for \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\"" Sep 9 00:49:06.213794 containerd[1540]: 2025-09-09 00:49:06.189 [WARNING][5605] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--774984f779--zprwx-eth0", GenerateName:"calico-apiserver-774984f779-", Namespace:"calico-apiserver", SelfLink:"", UID:"d663ab63-4876-4bed-b10d-08dca1619cbf", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774984f779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7", Pod:"calico-apiserver-774984f779-zprwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaea86c7f9b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:49:06.213794 containerd[1540]: 2025-09-09 00:49:06.189 [INFO][5605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Sep 9 00:49:06.213794 containerd[1540]: 2025-09-09 00:49:06.189 [INFO][5605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" iface="eth0" netns="" Sep 9 00:49:06.213794 containerd[1540]: 2025-09-09 00:49:06.189 [INFO][5605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Sep 9 00:49:06.213794 containerd[1540]: 2025-09-09 00:49:06.189 [INFO][5605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Sep 9 00:49:06.213794 containerd[1540]: 2025-09-09 00:49:06.204 [INFO][5612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" HandleID="k8s-pod-network.7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Workload="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:49:06.213794 containerd[1540]: 2025-09-09 00:49:06.204 [INFO][5612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:49:06.213794 containerd[1540]: 2025-09-09 00:49:06.204 [INFO][5612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:06.213794 containerd[1540]: 2025-09-09 00:49:06.210 [WARNING][5612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" HandleID="k8s-pod-network.7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Workload="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:49:06.213794 containerd[1540]: 2025-09-09 00:49:06.210 [INFO][5612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" HandleID="k8s-pod-network.7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Workload="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:49:06.213794 containerd[1540]: 2025-09-09 00:49:06.211 [INFO][5612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:06.213794 containerd[1540]: 2025-09-09 00:49:06.212 [INFO][5605] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Sep 9 00:49:06.215496 containerd[1540]: time="2025-09-09T00:49:06.214193105Z" level=info msg="TearDown network for sandbox \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\" successfully" Sep 9 00:49:06.215496 containerd[1540]: time="2025-09-09T00:49:06.214214237Z" level=info msg="StopPodSandbox for \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\" returns successfully" Sep 9 00:49:06.215496 containerd[1540]: time="2025-09-09T00:49:06.214641667Z" level=info msg="RemovePodSandbox for \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\"" Sep 9 00:49:06.215496 containerd[1540]: time="2025-09-09T00:49:06.214663219Z" level=info msg="Forcibly stopping sandbox \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\"" Sep 9 00:49:06.271660 containerd[1540]: 2025-09-09 00:49:06.236 [WARNING][5626] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--774984f779--zprwx-eth0", GenerateName:"calico-apiserver-774984f779-", Namespace:"calico-apiserver", SelfLink:"", UID:"d663ab63-4876-4bed-b10d-08dca1619cbf", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774984f779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a43424cc2f89bf072a9a104c45502d46553acd73c559d9654ba2536e8f67be7", Pod:"calico-apiserver-774984f779-zprwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaea86c7f9b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:49:06.271660 containerd[1540]: 2025-09-09 00:49:06.237 [INFO][5626] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Sep 9 00:49:06.271660 containerd[1540]: 2025-09-09 00:49:06.237 [INFO][5626] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" iface="eth0" netns="" Sep 9 00:49:06.271660 containerd[1540]: 2025-09-09 00:49:06.237 [INFO][5626] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Sep 9 00:49:06.271660 containerd[1540]: 2025-09-09 00:49:06.237 [INFO][5626] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Sep 9 00:49:06.271660 containerd[1540]: 2025-09-09 00:49:06.263 [INFO][5633] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" HandleID="k8s-pod-network.7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Workload="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:49:06.271660 containerd[1540]: 2025-09-09 00:49:06.263 [INFO][5633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:49:06.271660 containerd[1540]: 2025-09-09 00:49:06.263 [INFO][5633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:06.271660 containerd[1540]: 2025-09-09 00:49:06.267 [WARNING][5633] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" HandleID="k8s-pod-network.7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Workload="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:49:06.271660 containerd[1540]: 2025-09-09 00:49:06.267 [INFO][5633] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" HandleID="k8s-pod-network.7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Workload="localhost-k8s-calico--apiserver--774984f779--zprwx-eth0" Sep 9 00:49:06.271660 containerd[1540]: 2025-09-09 00:49:06.268 [INFO][5633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:06.271660 containerd[1540]: 2025-09-09 00:49:06.269 [INFO][5626] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d" Sep 9 00:49:06.279521 containerd[1540]: time="2025-09-09T00:49:06.271769015Z" level=info msg="TearDown network for sandbox \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\" successfully" Sep 9 00:49:06.315434 containerd[1540]: time="2025-09-09T00:49:06.315088623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:49:06.315434 containerd[1540]: time="2025-09-09T00:49:06.315155188Z" level=info msg="RemovePodSandbox \"7786fd0ac4d34274413fcffc6da1cdaf9310b1b3fded0564f2327d480cbdcc0d\" returns successfully" Sep 9 00:49:06.315831 containerd[1540]: time="2025-09-09T00:49:06.315636467Z" level=info msg="StopPodSandbox for \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\"" Sep 9 00:49:06.382681 containerd[1540]: 2025-09-09 00:49:06.352 [WARNING][5647] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0", GenerateName:"calico-apiserver-774984f779-", Namespace:"calico-apiserver", SelfLink:"", UID:"23688cde-a65d-4c6c-9d95-5a6f71851bf6", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774984f779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7", Pod:"calico-apiserver-774984f779-qfzwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b9a42bcfa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:49:06.382681 containerd[1540]: 2025-09-09 00:49:06.352 [INFO][5647] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Sep 9 00:49:06.382681 containerd[1540]: 2025-09-09 00:49:06.352 [INFO][5647] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" iface="eth0" netns="" Sep 9 00:49:06.382681 containerd[1540]: 2025-09-09 00:49:06.352 [INFO][5647] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Sep 9 00:49:06.382681 containerd[1540]: 2025-09-09 00:49:06.352 [INFO][5647] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Sep 9 00:49:06.382681 containerd[1540]: 2025-09-09 00:49:06.373 [INFO][5654] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" HandleID="k8s-pod-network.2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Workload="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:49:06.382681 containerd[1540]: 2025-09-09 00:49:06.373 [INFO][5654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:49:06.382681 containerd[1540]: 2025-09-09 00:49:06.373 [INFO][5654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:06.382681 containerd[1540]: 2025-09-09 00:49:06.379 [WARNING][5654] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" HandleID="k8s-pod-network.2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Workload="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:49:06.382681 containerd[1540]: 2025-09-09 00:49:06.379 [INFO][5654] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" HandleID="k8s-pod-network.2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Workload="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:49:06.382681 containerd[1540]: 2025-09-09 00:49:06.380 [INFO][5654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:06.382681 containerd[1540]: 2025-09-09 00:49:06.381 [INFO][5647] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Sep 9 00:49:06.382681 containerd[1540]: time="2025-09-09T00:49:06.382616107Z" level=info msg="TearDown network for sandbox \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\" successfully" Sep 9 00:49:06.382681 containerd[1540]: time="2025-09-09T00:49:06.382631878Z" level=info msg="StopPodSandbox for \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\" returns successfully" Sep 9 00:49:06.442047 containerd[1540]: time="2025-09-09T00:49:06.441281293Z" level=info msg="RemovePodSandbox for \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\"" Sep 9 00:49:06.442047 containerd[1540]: time="2025-09-09T00:49:06.441307312Z" level=info msg="Forcibly stopping sandbox \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\"" Sep 9 00:49:06.525286 containerd[1540]: 2025-09-09 00:49:06.485 [WARNING][5668] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0", GenerateName:"calico-apiserver-774984f779-", Namespace:"calico-apiserver", SelfLink:"", UID:"23688cde-a65d-4c6c-9d95-5a6f71851bf6", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"774984f779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"466d6034092ae6542d5d1cc13c7aee61b19a960f3339ac9948b757618f0289d7", Pod:"calico-apiserver-774984f779-qfzwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b9a42bcfa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:49:06.525286 containerd[1540]: 2025-09-09 00:49:06.486 [INFO][5668] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Sep 9 00:49:06.525286 containerd[1540]: 2025-09-09 00:49:06.486 [INFO][5668] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" iface="eth0" netns="" Sep 9 00:49:06.525286 containerd[1540]: 2025-09-09 00:49:06.486 [INFO][5668] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Sep 9 00:49:06.525286 containerd[1540]: 2025-09-09 00:49:06.486 [INFO][5668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Sep 9 00:49:06.525286 containerd[1540]: 2025-09-09 00:49:06.514 [INFO][5676] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" HandleID="k8s-pod-network.2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Workload="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:49:06.525286 containerd[1540]: 2025-09-09 00:49:06.514 [INFO][5676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:49:06.525286 containerd[1540]: 2025-09-09 00:49:06.514 [INFO][5676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:06.525286 containerd[1540]: 2025-09-09 00:49:06.520 [WARNING][5676] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" HandleID="k8s-pod-network.2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Workload="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:49:06.525286 containerd[1540]: 2025-09-09 00:49:06.520 [INFO][5676] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" HandleID="k8s-pod-network.2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Workload="localhost-k8s-calico--apiserver--774984f779--qfzwz-eth0" Sep 9 00:49:06.525286 containerd[1540]: 2025-09-09 00:49:06.521 [INFO][5676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:06.525286 containerd[1540]: 2025-09-09 00:49:06.523 [INFO][5668] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e" Sep 9 00:49:06.525286 containerd[1540]: time="2025-09-09T00:49:06.524737149Z" level=info msg="TearDown network for sandbox \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\" successfully" Sep 9 00:49:06.574705 containerd[1540]: time="2025-09-09T00:49:06.574672622Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:49:06.574802 containerd[1540]: time="2025-09-09T00:49:06.574721208Z" level=info msg="RemovePodSandbox \"2bba610848f689644fc7a07746237a577180b4ce175a7d9bd08b75748cdd6e7e\" returns successfully" Sep 9 00:49:06.610292 containerd[1540]: time="2025-09-09T00:49:06.610268585Z" level=info msg="StopPodSandbox for \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\"" Sep 9 00:49:06.701595 containerd[1540]: 2025-09-09 00:49:06.651 [WARNING][5694] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6kb7s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158", Pod:"csi-node-driver-6kb7s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali85891b56877", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:49:06.701595 containerd[1540]: 2025-09-09 00:49:06.652 [INFO][5694] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Sep 9 00:49:06.701595 containerd[1540]: 2025-09-09 00:49:06.652 [INFO][5694] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" iface="eth0" netns="" Sep 9 00:49:06.701595 containerd[1540]: 2025-09-09 00:49:06.652 [INFO][5694] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Sep 9 00:49:06.701595 containerd[1540]: 2025-09-09 00:49:06.652 [INFO][5694] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Sep 9 00:49:06.701595 containerd[1540]: 2025-09-09 00:49:06.684 [INFO][5701] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" HandleID="k8s-pod-network.3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Workload="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:49:06.701595 containerd[1540]: 2025-09-09 00:49:06.684 [INFO][5701] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:49:06.701595 containerd[1540]: 2025-09-09 00:49:06.684 [INFO][5701] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:06.701595 containerd[1540]: 2025-09-09 00:49:06.691 [WARNING][5701] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" HandleID="k8s-pod-network.3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Workload="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:49:06.701595 containerd[1540]: 2025-09-09 00:49:06.691 [INFO][5701] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" HandleID="k8s-pod-network.3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Workload="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:49:06.701595 containerd[1540]: 2025-09-09 00:49:06.693 [INFO][5701] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:06.701595 containerd[1540]: 2025-09-09 00:49:06.699 [INFO][5694] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Sep 9 00:49:06.701595 containerd[1540]: time="2025-09-09T00:49:06.701543576Z" level=info msg="TearDown network for sandbox \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\" successfully" Sep 9 00:49:06.701595 containerd[1540]: time="2025-09-09T00:49:06.701559416Z" level=info msg="StopPodSandbox for \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\" returns successfully" Sep 9 00:49:06.717731 containerd[1540]: time="2025-09-09T00:49:06.717476611Z" level=info msg="RemovePodSandbox for \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\"" Sep 9 00:49:06.717731 containerd[1540]: time="2025-09-09T00:49:06.717510145Z" level=info msg="Forcibly stopping sandbox \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\"" Sep 9 00:49:06.781059 containerd[1540]: 2025-09-09 00:49:06.746 [WARNING][5715] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6kb7s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09b5c400-88c5-4a1e-9b8c-a3d25d8f76d5", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158", Pod:"csi-node-driver-6kb7s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali85891b56877", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:49:06.781059 containerd[1540]: 2025-09-09 00:49:06.746 [INFO][5715] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Sep 9 00:49:06.781059 containerd[1540]: 2025-09-09 00:49:06.746 [INFO][5715] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" iface="eth0" netns="" Sep 9 00:49:06.781059 containerd[1540]: 2025-09-09 00:49:06.746 [INFO][5715] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Sep 9 00:49:06.781059 containerd[1540]: 2025-09-09 00:49:06.746 [INFO][5715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Sep 9 00:49:06.781059 containerd[1540]: 2025-09-09 00:49:06.769 [INFO][5723] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" HandleID="k8s-pod-network.3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Workload="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:49:06.781059 containerd[1540]: 2025-09-09 00:49:06.769 [INFO][5723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:49:06.781059 containerd[1540]: 2025-09-09 00:49:06.769 [INFO][5723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:06.781059 containerd[1540]: 2025-09-09 00:49:06.775 [WARNING][5723] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" HandleID="k8s-pod-network.3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Workload="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:49:06.781059 containerd[1540]: 2025-09-09 00:49:06.775 [INFO][5723] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" HandleID="k8s-pod-network.3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Workload="localhost-k8s-csi--node--driver--6kb7s-eth0" Sep 9 00:49:06.781059 containerd[1540]: 2025-09-09 00:49:06.775 [INFO][5723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:06.781059 containerd[1540]: 2025-09-09 00:49:06.778 [INFO][5715] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e" Sep 9 00:49:06.798326 containerd[1540]: time="2025-09-09T00:49:06.781965403Z" level=info msg="TearDown network for sandbox \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\" successfully" Sep 9 00:49:06.952479 containerd[1540]: time="2025-09-09T00:49:06.952447155Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:49:06.952570 containerd[1540]: time="2025-09-09T00:49:06.952499086Z" level=info msg="RemovePodSandbox \"3ad1c6755556c730e23cc30962e6239dce43969344a5a33c52b79ddfbac8614e\" returns successfully" Sep 9 00:49:06.964100 containerd[1540]: time="2025-09-09T00:49:06.952814789Z" level=info msg="StopPodSandbox for \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\"" Sep 9 00:49:06.974722 containerd[1540]: time="2025-09-09T00:49:06.974230564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:49:06.982917 containerd[1540]: time="2025-09-09T00:49:06.982873913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 00:49:06.998173 containerd[1540]: time="2025-09-09T00:49:06.998097591Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:49:07.008073 containerd[1540]: 2025-09-09 00:49:06.977 [WARNING][5740] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c0dc5384-6a49-429a-b83f-4f8249484a53", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b", Pod:"coredns-674b8bbfcf-zmwvb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cc44314067", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:49:07.008073 containerd[1540]: 2025-09-09 00:49:06.978 [INFO][5740] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Sep 9 00:49:07.008073 containerd[1540]: 2025-09-09 00:49:06.978 [INFO][5740] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" iface="eth0" netns="" Sep 9 00:49:07.008073 containerd[1540]: 2025-09-09 00:49:06.978 [INFO][5740] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Sep 9 00:49:07.008073 containerd[1540]: 2025-09-09 00:49:06.978 [INFO][5740] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Sep 9 00:49:07.008073 containerd[1540]: 2025-09-09 00:49:06.993 [INFO][5748] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" HandleID="k8s-pod-network.17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Workload="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:49:07.008073 containerd[1540]: 2025-09-09 00:49:06.993 [INFO][5748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:49:07.008073 containerd[1540]: 2025-09-09 00:49:06.993 [INFO][5748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:07.008073 containerd[1540]: 2025-09-09 00:49:06.998 [WARNING][5748] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" HandleID="k8s-pod-network.17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Workload="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:49:07.008073 containerd[1540]: 2025-09-09 00:49:06.998 [INFO][5748] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" HandleID="k8s-pod-network.17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Workload="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:49:07.008073 containerd[1540]: 2025-09-09 00:49:07.000 [INFO][5748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:07.008073 containerd[1540]: 2025-09-09 00:49:07.001 [INFO][5740] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Sep 9 00:49:07.008073 containerd[1540]: time="2025-09-09T00:49:07.003103154Z" level=info msg="TearDown network for sandbox \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\" successfully" Sep 9 00:49:07.008073 containerd[1540]: time="2025-09-09T00:49:07.003120714Z" level=info msg="StopPodSandbox for \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\" returns successfully" Sep 9 00:49:07.008073 containerd[1540]: time="2025-09-09T00:49:07.003351584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:49:07.008073 containerd[1540]: time="2025-09-09T00:49:07.003440313Z" level=info msg="RemovePodSandbox for \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\"" Sep 9 00:49:07.008073 containerd[1540]: time="2025-09-09T00:49:07.003458425Z" level=info msg="Forcibly stopping sandbox 
\"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\"" Sep 9 00:49:07.008073 containerd[1540]: time="2025-09-09T00:49:07.004276279Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 3.740237548s" Sep 9 00:49:07.008073 containerd[1540]: time="2025-09-09T00:49:07.004294854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 00:49:07.059711 containerd[1540]: 2025-09-09 00:49:07.031 [WARNING][5762] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c0dc5384-6a49-429a-b83f-4f8249484a53", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"e583f5a2f96e2331e09db52626df9582bca2a484e945f773032523f3c3efc85b", Pod:"coredns-674b8bbfcf-zmwvb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cc44314067", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:49:07.059711 containerd[1540]: 2025-09-09 00:49:07.031 [INFO][5762] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Sep 9 00:49:07.059711 containerd[1540]: 2025-09-09 00:49:07.031 [INFO][5762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" iface="eth0" netns="" Sep 9 00:49:07.059711 containerd[1540]: 2025-09-09 00:49:07.031 [INFO][5762] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Sep 9 00:49:07.059711 containerd[1540]: 2025-09-09 00:49:07.031 [INFO][5762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Sep 9 00:49:07.059711 containerd[1540]: 2025-09-09 00:49:07.046 [INFO][5770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" HandleID="k8s-pod-network.17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Workload="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:49:07.059711 containerd[1540]: 2025-09-09 00:49:07.046 [INFO][5770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:49:07.059711 containerd[1540]: 2025-09-09 00:49:07.046 [INFO][5770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:07.059711 containerd[1540]: 2025-09-09 00:49:07.056 [WARNING][5770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" HandleID="k8s-pod-network.17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Workload="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:49:07.059711 containerd[1540]: 2025-09-09 00:49:07.056 [INFO][5770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" HandleID="k8s-pod-network.17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Workload="localhost-k8s-coredns--674b8bbfcf--zmwvb-eth0" Sep 9 00:49:07.059711 containerd[1540]: 2025-09-09 00:49:07.057 [INFO][5770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:07.059711 containerd[1540]: 2025-09-09 00:49:07.058 [INFO][5762] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5" Sep 9 00:49:07.059711 containerd[1540]: time="2025-09-09T00:49:07.059696750Z" level=info msg="TearDown network for sandbox \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\" successfully" Sep 9 00:49:07.117114 containerd[1540]: time="2025-09-09T00:49:07.117077651Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:49:07.117206 containerd[1540]: time="2025-09-09T00:49:07.117130781Z" level=info msg="RemovePodSandbox \"17403c55fc48c1224d6f08050e7030515b871e62a09da9697b35aafa951ac6f5\" returns successfully" Sep 9 00:49:07.118407 containerd[1540]: time="2025-09-09T00:49:07.118390425Z" level=info msg="StopPodSandbox for \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\"" Sep 9 00:49:07.183583 containerd[1540]: 2025-09-09 00:49:07.155 [WARNING][5785] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--bg69g-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"456193ac-db3d-4857-b7e3-7054cee2a893", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f", Pod:"goldmane-54d579b49d-bg69g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie39c3faf822", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:49:07.183583 containerd[1540]: 2025-09-09 00:49:07.156 [INFO][5785] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Sep 9 00:49:07.183583 containerd[1540]: 2025-09-09 00:49:07.156 [INFO][5785] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" iface="eth0" netns="" Sep 9 00:49:07.183583 containerd[1540]: 2025-09-09 00:49:07.156 [INFO][5785] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Sep 9 00:49:07.183583 containerd[1540]: 2025-09-09 00:49:07.156 [INFO][5785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Sep 9 00:49:07.183583 containerd[1540]: 2025-09-09 00:49:07.173 [INFO][5792] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" HandleID="k8s-pod-network.23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Workload="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:49:07.183583 containerd[1540]: 2025-09-09 00:49:07.173 [INFO][5792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:49:07.183583 containerd[1540]: 2025-09-09 00:49:07.173 [INFO][5792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:07.183583 containerd[1540]: 2025-09-09 00:49:07.178 [WARNING][5792] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" HandleID="k8s-pod-network.23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Workload="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:49:07.183583 containerd[1540]: 2025-09-09 00:49:07.179 [INFO][5792] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" HandleID="k8s-pod-network.23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Workload="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:49:07.183583 containerd[1540]: 2025-09-09 00:49:07.179 [INFO][5792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:07.183583 containerd[1540]: 2025-09-09 00:49:07.180 [INFO][5785] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Sep 9 00:49:07.183583 containerd[1540]: time="2025-09-09T00:49:07.183166271Z" level=info msg="TearDown network for sandbox \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\" successfully" Sep 9 00:49:07.183583 containerd[1540]: time="2025-09-09T00:49:07.183181475Z" level=info msg="StopPodSandbox for \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\" returns successfully" Sep 9 00:49:07.223895 containerd[1540]: time="2025-09-09T00:49:07.223870059Z" level=info msg="RemovePodSandbox for \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\"" Sep 9 00:49:07.223895 containerd[1540]: time="2025-09-09T00:49:07.223892669Z" level=info msg="Forcibly stopping sandbox \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\"" Sep 9 00:49:07.385441 containerd[1540]: 2025-09-09 00:49:07.362 [WARNING][5806] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--bg69g-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"456193ac-db3d-4857-b7e3-7054cee2a893", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cde41b167e7cc227a2fe00695797b60b0b67ea5e000398eb5ecb36e412154b9f", Pod:"goldmane-54d579b49d-bg69g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie39c3faf822", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:49:07.385441 containerd[1540]: 2025-09-09 00:49:07.362 [INFO][5806] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Sep 9 00:49:07.385441 containerd[1540]: 2025-09-09 00:49:07.362 [INFO][5806] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" iface="eth0" netns="" Sep 9 00:49:07.385441 containerd[1540]: 2025-09-09 00:49:07.362 [INFO][5806] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Sep 9 00:49:07.385441 containerd[1540]: 2025-09-09 00:49:07.362 [INFO][5806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Sep 9 00:49:07.385441 containerd[1540]: 2025-09-09 00:49:07.378 [INFO][5814] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" HandleID="k8s-pod-network.23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Workload="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:49:07.385441 containerd[1540]: 2025-09-09 00:49:07.378 [INFO][5814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:49:07.385441 containerd[1540]: 2025-09-09 00:49:07.378 [INFO][5814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:07.385441 containerd[1540]: 2025-09-09 00:49:07.382 [WARNING][5814] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" HandleID="k8s-pod-network.23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Workload="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:49:07.385441 containerd[1540]: 2025-09-09 00:49:07.382 [INFO][5814] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" HandleID="k8s-pod-network.23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Workload="localhost-k8s-goldmane--54d579b49d--bg69g-eth0" Sep 9 00:49:07.385441 containerd[1540]: 2025-09-09 00:49:07.383 [INFO][5814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:07.385441 containerd[1540]: 2025-09-09 00:49:07.384 [INFO][5806] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c" Sep 9 00:49:07.398080 containerd[1540]: time="2025-09-09T00:49:07.385440929Z" level=info msg="TearDown network for sandbox \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\" successfully" Sep 9 00:49:07.422196 containerd[1540]: time="2025-09-09T00:49:07.422108811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:49:07.422196 containerd[1540]: time="2025-09-09T00:49:07.422167047Z" level=info msg="RemovePodSandbox \"23a68c688ef62da8269c90c1dd34358731e218bbaf994d2aeaf8284f5161c06c\" returns successfully" Sep 9 00:49:07.422613 containerd[1540]: time="2025-09-09T00:49:07.422592409Z" level=info msg="StopPodSandbox for \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\"" Sep 9 00:49:07.503476 containerd[1540]: time="2025-09-09T00:49:07.503450254Z" level=info msg="CreateContainer within sandbox \"df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 00:49:07.527536 containerd[1540]: 2025-09-09 00:49:07.488 [WARNING][5828] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0", GenerateName:"calico-kube-controllers-7cc69d998f-", Namespace:"calico-system", SelfLink:"", UID:"cfbbe3a7-b775-4416-b316-762d53f81c5d", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cc69d998f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998", Pod:"calico-kube-controllers-7cc69d998f-tp8b8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4fd57c644ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:49:07.527536 containerd[1540]: 2025-09-09 00:49:07.488 [INFO][5828] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Sep 9 00:49:07.527536 containerd[1540]: 2025-09-09 00:49:07.488 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" iface="eth0" netns="" Sep 9 00:49:07.527536 containerd[1540]: 2025-09-09 00:49:07.488 [INFO][5828] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Sep 9 00:49:07.527536 containerd[1540]: 2025-09-09 00:49:07.488 [INFO][5828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Sep 9 00:49:07.527536 containerd[1540]: 2025-09-09 00:49:07.507 [INFO][5835] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" HandleID="k8s-pod-network.cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Workload="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:49:07.527536 containerd[1540]: 2025-09-09 00:49:07.507 [INFO][5835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 00:49:07.527536 containerd[1540]: 2025-09-09 00:49:07.507 [INFO][5835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:07.527536 containerd[1540]: 2025-09-09 00:49:07.511 [WARNING][5835] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" HandleID="k8s-pod-network.cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Workload="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:49:07.527536 containerd[1540]: 2025-09-09 00:49:07.511 [INFO][5835] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" HandleID="k8s-pod-network.cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Workload="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:49:07.527536 containerd[1540]: 2025-09-09 00:49:07.511 [INFO][5835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:07.527536 containerd[1540]: 2025-09-09 00:49:07.514 [INFO][5828] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Sep 9 00:49:07.527536 containerd[1540]: time="2025-09-09T00:49:07.527455346Z" level=info msg="TearDown network for sandbox \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\" successfully" Sep 9 00:49:07.527536 containerd[1540]: time="2025-09-09T00:49:07.527470742Z" level=info msg="StopPodSandbox for \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\" returns successfully" Sep 9 00:49:07.588017 containerd[1540]: time="2025-09-09T00:49:07.586429366Z" level=info msg="RemovePodSandbox for \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\"" Sep 9 00:49:07.588017 containerd[1540]: time="2025-09-09T00:49:07.586470215Z" level=info msg="Forcibly stopping sandbox \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\"" Sep 9 00:49:07.599764 containerd[1540]: time="2025-09-09T00:49:07.599399789Z" level=info msg="CreateContainer within sandbox \"df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8d2be673a2d5b4337f99e1731f6fd546c129c95b31af8313a456835229fa6ec3\"" Sep 9 00:49:07.599937 containerd[1540]: time="2025-09-09T00:49:07.599860305Z" level=info msg="StartContainer for \"8d2be673a2d5b4337f99e1731f6fd546c129c95b31af8313a456835229fa6ec3\"" Sep 9 00:49:07.712753 containerd[1540]: 2025-09-09 00:49:07.668 [WARNING][5849] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0", GenerateName:"calico-kube-controllers-7cc69d998f-", Namespace:"calico-system", SelfLink:"", UID:"cfbbe3a7-b775-4416-b316-762d53f81c5d", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 48, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cc69d998f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74948bfff2cafed2bd5e1db480aa0efd373431cb05d75ef57f28e0da523c6998", Pod:"calico-kube-controllers-7cc69d998f-tp8b8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4fd57c644ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:49:07.712753 containerd[1540]: 2025-09-09 00:49:07.669 [INFO][5849] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Sep 9 00:49:07.712753 containerd[1540]: 2025-09-09 00:49:07.669 [INFO][5849] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" iface="eth0" netns="" Sep 9 00:49:07.712753 containerd[1540]: 2025-09-09 00:49:07.669 [INFO][5849] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Sep 9 00:49:07.712753 containerd[1540]: 2025-09-09 00:49:07.669 [INFO][5849] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Sep 9 00:49:07.712753 containerd[1540]: 2025-09-09 00:49:07.700 [INFO][5865] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" HandleID="k8s-pod-network.cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Workload="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:49:07.712753 containerd[1540]: 2025-09-09 00:49:07.700 [INFO][5865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:49:07.712753 containerd[1540]: 2025-09-09 00:49:07.700 [INFO][5865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:49:07.712753 containerd[1540]: 2025-09-09 00:49:07.706 [WARNING][5865] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" HandleID="k8s-pod-network.cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Workload="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:49:07.712753 containerd[1540]: 2025-09-09 00:49:07.706 [INFO][5865] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" HandleID="k8s-pod-network.cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Workload="localhost-k8s-calico--kube--controllers--7cc69d998f--tp8b8-eth0" Sep 9 00:49:07.712753 containerd[1540]: 2025-09-09 00:49:07.709 [INFO][5865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:49:07.712753 containerd[1540]: 2025-09-09 00:49:07.711 [INFO][5849] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d" Sep 9 00:49:07.726620 containerd[1540]: time="2025-09-09T00:49:07.712935678Z" level=info msg="TearDown network for sandbox \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\" successfully" Sep 9 00:49:07.732639 containerd[1540]: time="2025-09-09T00:49:07.732615523Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:49:07.732753 containerd[1540]: time="2025-09-09T00:49:07.732741598Z" level=info msg="RemovePodSandbox \"cb9bc047d0d38a0dfe765857e883982dcba3122e0173859ae874ef010357876d\" returns successfully" Sep 9 00:49:07.760861 systemd[1]: run-containerd-runc-k8s.io-8d2be673a2d5b4337f99e1731f6fd546c129c95b31af8313a456835229fa6ec3-runc.JyqtuY.mount: Deactivated successfully. 
Sep 9 00:49:07.795107 systemd[1]: Started cri-containerd-8d2be673a2d5b4337f99e1731f6fd546c129c95b31af8313a456835229fa6ec3.scope - libcontainer container 8d2be673a2d5b4337f99e1731f6fd546c129c95b31af8313a456835229fa6ec3. Sep 9 00:49:07.868402 containerd[1540]: time="2025-09-09T00:49:07.868371034Z" level=info msg="StartContainer for \"8d2be673a2d5b4337f99e1731f6fd546c129c95b31af8313a456835229fa6ec3\" returns successfully" Sep 9 00:49:07.923159 containerd[1540]: time="2025-09-09T00:49:07.922469522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 00:49:09.580088 containerd[1540]: time="2025-09-09T00:49:09.579909104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:49:09.582017 containerd[1540]: time="2025-09-09T00:49:09.581173324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 9 00:49:09.599794 containerd[1540]: time="2025-09-09T00:49:09.599770703Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:49:09.607013 containerd[1540]: time="2025-09-09T00:49:09.606954835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:49:09.607968 containerd[1540]: time="2025-09-09T00:49:09.607379155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.684878854s" Sep 9 00:49:09.607968 containerd[1540]: time="2025-09-09T00:49:09.607404861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 9 00:49:09.610654 containerd[1540]: time="2025-09-09T00:49:09.610516806Z" level=info msg="CreateContainer within sandbox \"df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 00:49:09.627659 containerd[1540]: time="2025-09-09T00:49:09.627632903Z" level=info msg="CreateContainer within sandbox \"df8b76df3ea7f89f26e635ed98336e946d9a52d61529d1b6bbca31a7410a8158\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7e51cc85aaf9adfd83241cd123d343aa522bd566618caac203e53f52893e37e2\"" Sep 9 00:49:09.628942 containerd[1540]: time="2025-09-09T00:49:09.628922601Z" level=info msg="StartContainer for \"7e51cc85aaf9adfd83241cd123d343aa522bd566618caac203e53f52893e37e2\"" Sep 9 00:49:09.809071 systemd[1]: Started cri-containerd-7e51cc85aaf9adfd83241cd123d343aa522bd566618caac203e53f52893e37e2.scope - libcontainer container 7e51cc85aaf9adfd83241cd123d343aa522bd566618caac203e53f52893e37e2. 
Sep 9 00:49:09.854036 containerd[1540]: time="2025-09-09T00:49:09.853893454Z" level=info msg="StartContainer for \"7e51cc85aaf9adfd83241cd123d343aa522bd566618caac203e53f52893e37e2\" returns successfully" Sep 9 00:49:10.426644 kubelet[2738]: I0909 00:49:10.426588 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6kb7s" podStartSLOduration=29.420877899 podStartE2EDuration="48.419873089s" podCreationTimestamp="2025-09-09 00:48:22 +0000 UTC" firstStartedPulling="2025-09-09 00:48:50.609367928 +0000 UTC m=+47.177901205" lastFinishedPulling="2025-09-09 00:49:09.608363112 +0000 UTC m=+66.176896395" observedRunningTime="2025-09-09 00:49:10.323365976 +0000 UTC m=+66.891899274" watchObservedRunningTime="2025-09-09 00:49:10.419873089 +0000 UTC m=+66.988406379" Sep 9 00:49:11.089641 kubelet[2738]: I0909 00:49:11.088077 2738 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 00:49:11.090663 kubelet[2738]: I0909 00:49:11.090648 2738 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 00:49:29.726206 systemd[1]: Started sshd@7-139.178.70.101:22-139.178.89.65:44178.service - OpenSSH per-connection server daemon (139.178.89.65:44178). Sep 9 00:49:29.861508 sshd[5987]: Accepted publickey for core from 139.178.89.65 port 44178 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg Sep 9 00:49:29.866716 sshd[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:49:29.873876 systemd-logind[1516]: New session 10 of user core. Sep 9 00:49:29.879090 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 9 00:49:30.491136 sshd[5987]: pam_unix(sshd:session): session closed for user core Sep 9 00:49:30.514985 systemd[1]: sshd@7-139.178.70.101:22-139.178.89.65:44178.service: Deactivated successfully. Sep 9 00:49:30.520547 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:49:30.521799 systemd-logind[1516]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:49:30.522560 systemd-logind[1516]: Removed session 10. Sep 9 00:49:34.013025 systemd[1]: run-containerd-runc-k8s.io-bbaebdf34ab0fe43bc299a994aee73cfd328b005a15c0fc1c4672cfcb061a278-runc.Xf03oW.mount: Deactivated successfully. Sep 9 00:49:35.708325 systemd[1]: Started sshd@8-139.178.70.101:22-139.178.89.65:45992.service - OpenSSH per-connection server daemon (139.178.89.65:45992). Sep 9 00:49:36.342447 sshd[6050]: Accepted publickey for core from 139.178.89.65 port 45992 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg Sep 9 00:49:36.344284 sshd[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:49:36.349277 systemd-logind[1516]: New session 11 of user core. Sep 9 00:49:36.357129 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:49:38.156135 sshd[6050]: pam_unix(sshd:session): session closed for user core Sep 9 00:49:38.161246 systemd[1]: sshd@8-139.178.70.101:22-139.178.89.65:45992.service: Deactivated successfully. Sep 9 00:49:38.163228 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:49:38.164647 systemd-logind[1516]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:49:38.167454 systemd-logind[1516]: Removed session 11. Sep 9 00:49:43.167311 systemd[1]: Started sshd@9-139.178.70.101:22-139.178.89.65:47014.service - OpenSSH per-connection server daemon (139.178.89.65:47014). 
Sep 9 00:49:43.436598 sshd[6084]: Accepted publickey for core from 139.178.89.65 port 47014 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg Sep 9 00:49:43.438425 sshd[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:49:43.444114 systemd-logind[1516]: New session 12 of user core. Sep 9 00:49:43.448100 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:49:46.668542 sshd[6084]: pam_unix(sshd:session): session closed for user core Sep 9 00:49:46.744189 systemd[1]: sshd@9-139.178.70.101:22-139.178.89.65:47014.service: Deactivated successfully. Sep 9 00:49:46.747315 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:49:46.750844 systemd-logind[1516]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:49:46.763238 systemd[1]: Started sshd@10-139.178.70.101:22-139.178.89.65:47024.service - OpenSSH per-connection server daemon (139.178.89.65:47024). Sep 9 00:49:46.765211 systemd-logind[1516]: Removed session 12. Sep 9 00:49:47.019347 sshd[6127]: Accepted publickey for core from 139.178.89.65 port 47024 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg Sep 9 00:49:47.020703 sshd[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:49:47.028905 systemd-logind[1516]: New session 13 of user core. Sep 9 00:49:47.034910 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:49:47.731063 sshd[6127]: pam_unix(sshd:session): session closed for user core Sep 9 00:49:47.734771 systemd[1]: Started sshd@11-139.178.70.101:22-139.178.89.65:47038.service - OpenSSH per-connection server daemon (139.178.89.65:47038). Sep 9 00:49:47.749089 systemd[1]: sshd@10-139.178.70.101:22-139.178.89.65:47024.service: Deactivated successfully. Sep 9 00:49:47.750538 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:49:47.752664 systemd-logind[1516]: Session 13 logged out. Waiting for processes to exit. 
Sep 9 00:49:47.753414 systemd-logind[1516]: Removed session 13. Sep 9 00:49:47.886368 sshd[6141]: Accepted publickey for core from 139.178.89.65 port 47038 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg Sep 9 00:49:47.890089 sshd[6141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:49:47.897292 systemd-logind[1516]: New session 14 of user core. Sep 9 00:49:47.903241 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 00:49:48.207534 sshd[6141]: pam_unix(sshd:session): session closed for user core Sep 9 00:49:48.211314 systemd[1]: sshd@11-139.178.70.101:22-139.178.89.65:47038.service: Deactivated successfully. Sep 9 00:49:48.213365 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:49:48.215322 systemd-logind[1516]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:49:48.216887 systemd-logind[1516]: Removed session 14. Sep 9 00:49:53.239206 systemd[1]: Started sshd@12-139.178.70.101:22-139.178.89.65:40860.service - OpenSSH per-connection server daemon (139.178.89.65:40860). Sep 9 00:49:53.959892 sshd[6161]: Accepted publickey for core from 139.178.89.65 port 40860 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg Sep 9 00:49:53.961683 sshd[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:49:53.965541 systemd-logind[1516]: New session 15 of user core. Sep 9 00:49:53.970143 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 00:49:54.818236 sshd[6161]: pam_unix(sshd:session): session closed for user core Sep 9 00:49:54.826092 systemd[1]: sshd@12-139.178.70.101:22-139.178.89.65:40860.service: Deactivated successfully. Sep 9 00:49:54.828456 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:49:54.830482 systemd-logind[1516]: Session 15 logged out. Waiting for processes to exit. 
Sep 9 00:49:54.839213 systemd[1]: Started sshd@13-139.178.70.101:22-139.178.89.65:40866.service - OpenSSH per-connection server daemon (139.178.89.65:40866). Sep 9 00:49:54.841033 systemd-logind[1516]: Removed session 15. Sep 9 00:49:55.141453 sshd[6190]: Accepted publickey for core from 139.178.89.65 port 40866 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg Sep 9 00:49:55.170964 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:49:55.183714 systemd-logind[1516]: New session 16 of user core. Sep 9 00:49:55.187195 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 00:49:56.762328 sshd[6190]: pam_unix(sshd:session): session closed for user core Sep 9 00:49:56.808696 systemd[1]: sshd@13-139.178.70.101:22-139.178.89.65:40866.service: Deactivated successfully. Sep 9 00:49:56.810615 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:49:56.811651 systemd-logind[1516]: Session 16 logged out. Waiting for processes to exit. Sep 9 00:49:56.826403 systemd[1]: Started sshd@14-139.178.70.101:22-139.178.89.65:40874.service - OpenSSH per-connection server daemon (139.178.89.65:40874). Sep 9 00:49:56.827354 systemd-logind[1516]: Removed session 16. Sep 9 00:49:57.023324 sshd[6205]: Accepted publickey for core from 139.178.89.65 port 40874 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg Sep 9 00:49:57.026242 sshd[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:49:57.034439 systemd-logind[1516]: New session 17 of user core. Sep 9 00:49:57.038701 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 00:49:59.071161 sshd[6205]: pam_unix(sshd:session): session closed for user core Sep 9 00:49:59.144460 systemd[1]: sshd@14-139.178.70.101:22-139.178.89.65:40874.service: Deactivated successfully. Sep 9 00:49:59.145598 systemd[1]: session-17.scope: Deactivated successfully. 
Sep 9 00:49:59.149834 systemd-logind[1516]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:49:59.181216 systemd[1]: Started sshd@15-139.178.70.101:22-139.178.89.65:40888.service - OpenSSH per-connection server daemon (139.178.89.65:40888). Sep 9 00:49:59.182538 systemd-logind[1516]: Removed session 17. Sep 9 00:49:59.618587 sshd[6224]: Accepted publickey for core from 139.178.89.65 port 40888 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg Sep 9 00:49:59.621870 sshd[6224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:49:59.634931 systemd-logind[1516]: New session 18 of user core. Sep 9 00:49:59.642361 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 00:50:02.964120 sshd[6224]: pam_unix(sshd:session): session closed for user core Sep 9 00:50:03.119088 systemd[1]: sshd@15-139.178.70.101:22-139.178.89.65:40888.service: Deactivated successfully. Sep 9 00:50:03.128059 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:50:03.131157 systemd-logind[1516]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:50:03.155265 systemd[1]: Started sshd@16-139.178.70.101:22-139.178.89.65:39960.service - OpenSSH per-connection server daemon (139.178.89.65:39960). Sep 9 00:50:03.156346 systemd-logind[1516]: Removed session 18. Sep 9 00:50:03.581690 sshd[6292]: Accepted publickey for core from 139.178.89.65 port 39960 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg Sep 9 00:50:03.606885 sshd[6292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:50:03.665685 systemd-logind[1516]: New session 19 of user core. Sep 9 00:50:03.671076 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 9 00:50:06.285828 kubelet[2738]: E0909 00:50:06.237452 2738 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.246s" Sep 9 00:50:07.526950 sshd[6292]: pam_unix(sshd:session): session closed for user core Sep 9 00:50:07.564829 systemd[1]: sshd@16-139.178.70.101:22-139.178.89.65:39960.service: Deactivated successfully. Sep 9 00:50:07.567317 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 00:50:07.570107 systemd-logind[1516]: Session 19 logged out. Waiting for processes to exit. Sep 9 00:50:07.571441 systemd-logind[1516]: Removed session 19. Sep 9 00:50:09.086780 systemd[1]: run-containerd-runc-k8s.io-bbaebdf34ab0fe43bc299a994aee73cfd328b005a15c0fc1c4672cfcb061a278-runc.ZuXAOy.mount: Deactivated successfully. Sep 9 00:50:12.553232 systemd[1]: Started sshd@17-139.178.70.101:22-139.178.89.65:56566.service - OpenSSH per-connection server daemon (139.178.89.65:56566). Sep 9 00:50:12.654595 sshd[6351]: Accepted publickey for core from 139.178.89.65 port 56566 ssh2: RSA SHA256:g3IPgsm34v3PtMfu6LGHmIgZi634//KF4Nu+KJc88kg Sep 9 00:50:12.655394 sshd[6351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:50:12.663061 systemd-logind[1516]: New session 20 of user core. Sep 9 00:50:12.668549 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 00:50:13.808339 sshd[6351]: pam_unix(sshd:session): session closed for user core Sep 9 00:50:13.813266 systemd-logind[1516]: Session 20 logged out. Waiting for processes to exit. Sep 9 00:50:13.813543 systemd[1]: sshd@17-139.178.70.101:22-139.178.89.65:56566.service: Deactivated successfully. Sep 9 00:50:13.815074 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 00:50:13.821338 systemd-logind[1516]: Removed session 20.